Abstract
                
Artificial intelligence algorithms are complex, human-like, and potentially dangerous, and in the course of their application they inflict different types of harm on society. By severity, these harms can be divided into relatively minor mental infringements of general personality rights, more serious personal injury and property damage in the physical world, and severe personal injury and property damage in the physical world. They can be further subdivided into: information cocoons, restrictions on free expression, algorithmic discrimination, big-data-enabled price discrimination against existing customers (大数据杀熟), infringement through flawed algorithmic decision-making, and algorithm errors. To address these infringements, three responses may be explored: allowing a collective management organization for algorithmic infringement to exercise rights on behalf of victims, applying strict liability for dangerous activities to personal injury and property damage in the physical world, and clarifying aggregate liability for algorithmic harm.
    
    
    
    
Source

Open Journal of Legal Science (《法学(汉斯)》), 2023, No. 3, pp. 1542-1550 (9 pages)