Abstract
With the advent and application of ChatGPT, generative artificial intelligence (AIGC) has shown enormous development potential and bright prospects, while also raising the important issue of the AIGC technology ethical crisis and its governance; "being responsible" has become a key demand in AIGC development. The AIGC technology ethical crisis arises through distinctive triggering mechanisms: low transparency of AIGC algorithm operation produces "unintelligibility" in communication; a lack of subjective goodwill produces "uncontrollability" in decision-making; AI hallucination produces "unreliability" in safety; and the absence of an established human-machine partnership produces "unsustainability" in the prospects for cooperation. To address these mechanisms, formulating "responsible" AI guidelines and regulatory frameworks requires coordinated effort from multiple actors: academia should advance interpretability research in human-machine interaction, government departments should strengthen legislation and regulation, the AI industry should build self-governance and self-discipline mechanisms, and users should improve their algorithmic literacy.
With the emergence of OpenAI's ChatGPT, generative artificial intelligence has demonstrated its profound development potential. However, the unclear technological-ethical standards arising from its algorithmic rules have ignited intense global discussion and apprehension, and "being responsible" has therefore become a pivotal demand for its development. Nonetheless, the existing literature on regulating and governing artificial intelligence from a "responsible" perspective lags behind the technological advancements themselves. In particular, there is a lack of deep research into the triggering mechanisms of the AIGC (AI-generated content) technology ethical crisis and into the formulation of "responsible" AI guidelines and regulatory frameworks. This article primarily analyzes policies and data released by the Artificial Intelligence and Economic Society Research Center of the China Academy of Information and Communications Technology, the official websites of OpenAI and the European Commission, and other sources. Drawing on the latest research on "responsible" artificial intelligence published in the journal Nature, it surveys governments' legislative efforts worldwide related to "responsibility" in artificial intelligence, scholarly research on the topic, and the latest governance practices implemented by AI enterprises, and it examines the current state of "responsible" artificial intelligence, its regulatory domains, and the governance measures adopted. The research reveals that the ethical crisis of "unintelligibility, uncontrollability, unreliability, and unsustainability" triggered by AIGC is not only harming humanity but also hindering the advancement of AI technology itself; it is therefore imperative that humans regulate and govern artificial intelligence responsibly for the benefit of humanity. To date, governments worldwide, academia, and global AI enterprises have reached a basic consensus on developing responsible artificial intelligence, namely to create a practical foundation for the responsible application of AI for humanity. Compared with previous literature, this article extends the discourse in two main respects. First, it explores the triggering mechanisms of the AIGC technology ethical crisis: low algorithmic operational transparency, which leads to "unintelligibility"; a lack of subjective goodwill, which triggers "uncontrollability"; AI hallucinations, which cause "unreliability" in safety; and the absence of established human-machine partnerships, which invites "unsustainability". Second, in response to this ethical crisis of "unintelligibility", "uncontrollability", "unreliability", and "unsustainability", the article proposes targeted approaches for academia, government, AI enterprises, and users, respectively: promoting interpretability research from a "technology responsibility" perspective, advancing ethical agreements and regulation from a "social responsibility" perspective, fostering enterprise self-governance from a "user responsibility" perspective, and enhancing algorithmic literacy from a "self-responsibility" perspective. The core contribution of this article is to unveil the triggering mechanisms underlying the AIGC technology ethical crisis and to propose targeted governance strategies. In particular, its detailed analyses of foreign legislation on generative artificial intelligence and of AI enterprise self-governance may provide useful insights for China's legislation and corporate self-regulation in this field. The study of "responsible" artificial intelligence also aims to stimulate academic exchange and intellectual collision while humbly seeking guidance from relevant experts in the field.
Authors
陈建兵
王明
CHEN Jianbing;WANG Ming(School of Marxism,Xi’an Jiaotong University,Xi’an 710049,China)
Source
《西安交通大学学报(社会科学版)》
CSSCI
Peking University Core Journals (北大核心)
2024, No. 1, pp. 111-120 (10 pages)
Journal of Xi'an Jiaotong University: Social Sciences
Funding
National Social Science Fund of China project (2020MYB038)
Key project of the Ministry of Education Training and Research Center for Ideological and Political Work Teams in Higher Education (21KT02).
Keywords
generative artificial intelligence
ChatGPT
technological ethics
"responsible" governance
artificial intelligence generated content
ChatGPT
technological ethics
"responsible" governance
About the Authors
CHEN Jianbing (1976- ), male, is vice dean, professor, and doctoral supervisor at the School of Marxism, Xi'an Jiaotong University.