
KISDI (Korea Information Society Development Institute)

KISDI Media Room

  • 2nd AI Ethics Policy Debate (Forum) (June 13, 2022)

    • Pub date: 2022-06-08
    • Event date: 2022-06-08

The 2nd AI Ethics Policy Debate (Forum)

- Announcement of a plan to develop an ethics self-check table for Scatterlab's AI chatbot -

ㅇ Date: June 8, 2022 (Wed), 14:00-16:00

ㅇ Venue: Korea Press Center, Seoul

On June 8 (Wed), KISDI and the Ministry of Science and ICT jointly hosted the 2nd AI Ethics Policy Forum at the Korea Press Center in Jung-gu, Seoul.

The AI Ethics Policy Forum was launched in February of this year to foster discussion on the development and application of ethics in the AI industry. Thirty experts from various industrial sectors, along with representatives from the fields of AI, ethics, education, law, and public organizations, are currently serving as members of the first forum.

The forum kicked off with a presentation by Scatterlab (CEO Kim Jong-yoon) on the development status of an ethics self-check table for ILUDA, its AI-based chatbot. The participants in each debating group then shared information about UNESCO's recommendation on AI ethics, the private-sector support plan for securing the reliability of AI technology, and the current status of AI ethics education, and engaged in intense discussions on how to introduce AI ethics policy in the future.

The ethics group (group leader: Moon Jung-wook, head of KISDI's Center for AI & Social Policy) examined the contents of UNESCO's 'Recommendation on AI Ethics' (adopted in November of last year) and discussed the appropriate response at the national level, as well as ways to introduce the AI ethics self-check table in corporate workplaces, expand its use, and add more detail to it.

The technology group (group leader: Chan Soon-il, head of TTA's AI Digital Convergence Team) shared the results of field-testing the 'Guidebook on Developing Trustworthy AI', which can be used to technically verify the reliability of AI products and services during the development process. The group also discussed the effectiveness of the verification, issues in using the guidebook in the field, consulting services, and the deployment of self-check tools that could be provided to companies seeking to improve the reliability of their AI.

Finally, the education group (group leader: Professor Byun Soon-yong of Seoul National University of Education's Department of Ethics Education) reviewed the current status of AI ethics education and shared the progress made in developing AI ethics textbooks for elementary, middle, and high school students. The textbooks will be developed based on the 'Guidelines on Developing AI Ethics Contents' (December 2021). To reflect students' learning characteristics, the textbooks will consist of an activities section, an experience-oriented learning section, and a subject learning section.