KISDI (Korea Information Society Development Institute)

KISDI Media Room

  • 2025 AI Ethics Open Seminar

    • Published: 2025-11-27
    • Place: EL Tower, Ruby Hall, Yangjae, Seoul
    • Event date: 2025-12-27
    • Files: no registered files
※ URL (Korean): https://www.kisdi.re.kr/bbs/view.do?bbsSn=114807&key=m2101113056011&pageIndex=1&sc=&sw=&selectedYear=2025

2025 AI Ethics Open Seminar Held

AI ethics policy directions were discussed ahead of the enforcement of the AI Basic Act.
At the “2025 AI Ethics Open Seminar,” interim results of the AI recruitment service ethics impact assessment and a draft of the standard guidelines for private-sector autonomous AI ethics committees were presented for the first time.

■ Date and Time: November 27, 2025 (Thu), 14:00–16:50
■ Venue: EL Tower, Ruby Hall, Yangjae, Seoul

The Korea Information Society Development Institute (KISDI, President Sangkyu Rhee) and the Ministry of Science and ICT held the “2025 AI Ethics Open Seminar” on November 27 at EL Tower in Yangjae, Seoul. The seminar reviewed approaches to establishing AI ethics implementation systems applicable in practice following the enforcement of the AI Basic Act scheduled for next year.

Moon Myungjae, Chair of the AI Ethics Policy Forum (Yonsei University), delivered a keynote presentation on “AI Ethics Policy and Policy Instruments.” He examined changes in domestic and international AI ethics and regulatory environments and emphasized the need to specify policy instruments to ensure ethics and trust. He discussed the importance of practical tools such as guidelines, certification systems, and self-assessment mechanisms applicable in corporate settings and noted that ethics forms a basis for corporate competitiveness amid increasing risks and opportunities associated with AI technologies. He also explained the importance of sector- and service-specific ethical standards and support systems.

In the first presentation, Jung Wook Moon, Director at KISDI, presented the interim results of the “2025 AI Ethics Impact Assessment.” The assessment was conducted based on the ten core requirements of the national “AI Ethics Guidelines” announced in December 2020. It systematically examined impact factors across five ethical areas—privacy protection, inclusiveness, accountability, transparency, and fairness—reflecting the characteristics of AI recruitment services.

Jung Wook Moon explained that AI recruitment services improve access and procedural transparency while also presenting structural risks, including excessive inference of sensitive information, overreliance on AI, uncritical acceptance of outcomes, and accountability gaps. He noted that accessibility for applicants with disabilities, bias verification, and mitigation of language barriers require government support, while personal data security and profiling risks require priority responses. He also stated that effective AI ethics implementation requires the roles of all stakeholders, including government efforts to strengthen legal, standards, and oversight systems; developers’ integration of ethics at the design stage; operators’ accountable management based on human oversight; and individuals’ exercise of information rights.

In the second presentation, Hwihong Kim, Associate Fellow at KISDI, presented the “Draft Standard Guidelines for Private-Sector Autonomous AI Ethics Committees.” He explained that the draft was developed to reflect the purpose of the AI Basic Act in promoting the diffusion and voluntary adoption of AI ethics principles in the private sector. The draft sets minimum operational standards, including a committee of at least three members with a chair, diversity in expertise and gender, inclusion of external members, conflict-of-interest provisions, and mechanisms for reviewing and monitoring deliberation outcomes. He noted that the draft was designed for flexible implementation to enhance practical usability, particularly for SMEs and startups.

In the panel discussion, moderated by Moon Myungjae, experts from academia, industry, the legal sector, and research institutions examined priority tasks and practical foundations for AI ethics policy following the enforcement of the AI Basic Act. Participants discussed trends in global AI competition that increasingly emphasize ethics and trust alongside technical performance, as well as key issues such as corporate self-regulation, international alignment, implementation capacity for SMEs and startups, and growing public demand for ethical and safe AI. They noted that AI ethics should be implemented through practice-oriented policy systems beyond declaratory standards.

Hyun Kyong Lee, Fellow at KISDI, stated that AI ethics can complement areas difficult to address through the AI Basic Act alone and emphasized the importance of expanding awareness of AI ethics, particularly through education. She also introduced cases in which AI ethics education materials developed by the Ministry of Science and ICT and KISDI have been used domestically and requested internationally, noting that AI ethics constitutes a shared value for society as a whole.