
KISDI (Korea Information Society Development Institute)


KISDI News

  • KISDI Publishes Report on “Policy Directions for Realizing an Inclusive AI Basic Society”

    • Publication date: 2025-10-28
※ URL (Korean): https://www.kisdi.re.kr/bbs/view.do?bbsSn=114762&key=m2101113055776&pageIndex=2&sc=&sw=

KISDI Publishes Report on “Policy Directions for Realizing an Inclusive AI Basic Society”

Presents policy directions based on the parallel pursuit of innovation and inclusion, together with cooperation among government, business, and citizens

The Korea Information Society Development Institute (KISDI, President Sangkyu Rhee) recently published “Policy Directions for Realizing an Inclusive AI Basic Society: Universal Access, Safe Services, and Responsible Use,” the third report (Special Issue No. 3) in the special edition of KISDI Premium Report: Advancing National AI and Digital Policy Agendas.

The report presents policy alternatives and implementation directions that government, business, and citizens in Korea can institutionalize to realize an inclusive AI basic society, in line with the government’s initiative to ensure universal access so that all citizens can benefit from AI.

The report outlines policy directions for establishing AI as a foundational infrastructure accessible to all citizens rather than a resource limited to specific industries or groups. An analysis of policy trends in seven major countries—the United States, the United Kingdom, France, Canada, Japan, China, and Singapore—shows that these countries pursue strategies that combine innovation and inclusion. Their approaches include strengthening AI research and technological innovation and industrial competitiveness, building national AI infrastructure and ecosystems, enhancing human resources and capabilities, promoting global cooperation and leadership in AI, and expanding accessibility while reducing digital divides.

To achieve universal access, identified as a first step toward an inclusive AI society, the report recommends a comprehensive policy approach. Key measures include expanding public AI infrastructure and supercomputing capacity, developing open large language models (LLMs), fostering domestic talent while attracting global talent, promoting industry-specific AI convergence and innovation, strengthening accessibility for vulnerable groups, and enhancing accountability in AI technologies.

Regarding safe AI services, the report analyzes cases of major domestic and international companies and classifies private-sector AI ethics frameworks into four types: strategic implementation-oriented, technology–policy integrated, ethics advisory-based, and declarative principles-based. It evaluates each framework in terms of authority structures, decision-making systems, and operational sustainability. The report emphasizes that establishing corporate-level ethical principles and institutionalizing implementation processes are essential for safe and responsible AI services. It recommends measures such as developing AI ethics governance structures, building model safety verification systems, establishing user redress and customer support mechanisms, publishing transparency reports, and strengthening in-house AI ethics training. The report notes that systematic governance could support the development of a sustainable digital ecosystem in the private sector grounded in autonomy and innovation.

To promote responsible AI use, the report underscores the importance of strengthening public AI literacy and protecting user rights to foster responsible digital citizenship. It notes that disparities in AI capabilities may contribute to social inequality and proposes building a lifelong education system and ensuring inclusive learning opportunities. According to the report, major countries such as the United States, the EU, Japan, and Singapore have institutionalized AI literacy as a core competency across education systems, from primary and secondary education to adult retraining, and are implementing tailored programs for diverse groups including teachers, public officials, workers, and older adults.

The report also highlights the need to protect citizens’ rights through measures such as preventing AI-related discrimination, providing remedies for misuse, and strengthening protections for vulnerable groups. It recommends advancing institutional measures—including the right to explanation, the right to deletion, redress procedures, and guidelines for protecting children and adolescents—based on existing legal frameworks such as the Framework Act on Artificial Intelligence and the Personal Information Protection Act. The report indicates that such efforts could support the realization of a responsible and inclusive AI society in which citizens can safely benefit from AI technologies.

In addition, the report presents policy priorities for building an inclusive AI society based on a survey of 33 experts from industry, government, academia, and research institutes. Respondents viewed AI not merely as a technology but as a core social infrastructure and identified the establishment of multilayered governance involving government, business, and citizens as essential. The survey assessed the current status of each stakeholder group and analyzed policy importance and readiness levels to identify areas with high importance but low preparedness.

Jung Wook MOON, Director of the Office of Digital Society Strategy, stated that in an era when AI functions as a core social infrastructure, institutional support from government, responsible governance by businesses, and active participation by citizens are all necessary. He added that joint efforts among government, business, and citizens are required to realize an inclusive and sustainable AI basic society.

KISDI will continue to provide strategic research and policy support to facilitate cooperative approaches among government, business, and citizens in advancing national AI and digital policy agendas.