Exploring the Ideas and Methods for Regulating Generative Artificial Intelligence Algorithms

 


Recently, an academic seminar, "Regulating Generative Artificial Intelligence Algorithms: the Management Measures for Generative Artificial Intelligence Services (Draft for Comments)", organized by the Law School of Renmin University of China and the university's Future Rule of Law Research Institute, was held. Experts from relevant government departments, university research institutions, and the artificial intelligence industry took part.


■ Zhang Jiyu, Executive Dean of the Future Rule of Law Research Institute at Renmin University of China: concerns about the large language models represented by ChatGPT can be summarized in three key words: development, security, and the rule of law. First, generative artificial intelligence is an epoch-making development; large models may form a new layer in the architecture of the internet and of data, profoundly shaping industrial development and future technological progress.


Second, from the perspectives of data, algorithms, and system integration, the development and innovation of generative artificial intelligence are accompanied by risks and challenges. Third, reconciling the tension between development and security and building a people-centered legal order for the intelligent society are of great significance for promoting the healthy and orderly development of artificial intelligence technology. ■ Wu Mengyi, Vice President of Baidu:


He offered several views on the Management Measures for Generative Artificial Intelligence Services (Draft for Comments): first, treating providers of open APIs and API-based retrieval functions as content producers may raise the bar for compliance obligations; second, classifying generative AI as an internet information service with public-opinion attributes and social-mobilization capacity may over-broaden the scope of application.


Third, in practice there is a tension between demanding accuracy of generated content and the underlying principles of generative AI technology; in the initial stage of regulation, the focus should therefore shift from pursuing content accuracy to cracking down on illegal exploitation. Fourth, data quality should be viewed dialectically: generative AI has a degree of creativity, and massive training data helps accelerate technological development. Regulation should therefore focus on the quality of generative AI products, and overly detailed rules on training data and technical routes are inadvisable.


Fifth, the human-computer dialogue characteristics of generative AI services should be taken into account and the requirements for real identity information simplified. ■ Wang Rong, chief data legal policy expert at the Tencent Research Institute, made several suggestions on the regulation of generative AI. First, generative AI may become the most basic tool of the coming information age; it goes beyond a simple internet information service and should be viewed from a new perspective.


Second, regulation is necessary, but the current legal norms themselves may still leave new issues open to discussion; the market itself strongly drives the quality of output information; and in the era of large models, underlying network security and data security deserve higher priority. Third, the three-month model-optimization training period is difficult to implement in practice, so other technical means should be considered.


Finally, in AI development risks emerge gradually, and market and regulatory actors should work toward a common goal in a cooperative posture. ■ Professor Wang Liming of the Law School of Renmin University of China: we should confront the problems raised by ChatGPT and consider how the law can respond proactively. First, we should face the personality-rights and intellectual-property issues raised by generative AI; legislation that gets too far ahead of the technology may hinder its development and innovation. Accumulating experience through management measures or related rules, and legislating once conditions are ripe, may be the safer approach.


He put forward five suggestions on the infringement issues raised by ChatGPT: first, in terms of value orientation, actively support the development of artificial intelligence products; second, ChatGPT differs from ordinary products such as autonomous driving, and imposing no-fault liability on service providers would hinder technological development and contradict the value orientation of encouraging innovation.


Third, the rules for mitigating liability in medical malpractice can serve as a reference: where vulnerabilities are difficult to eliminate because of technological limitations, the liability of service providers can be appropriately reduced or even waived. Fourth, service providers' security obligations regarding personal and private information should be strengthened. Fifth, in allocating liability for infringement caused by ChatGPT's hallucinated responses, a distinction should be drawn between content generated by the platform at scale and content maliciously induced by users.

■ Liu Xiaochun, associate professor at the University of the Chinese Academy of Social Sciences, made suggestions on regulation from four aspects. First, the necessity of regulation: if the existing system can solve most problems in the new technology scenario, special regulation is unnecessary. The important risk point of generative AI lies at the content level; if generated content is not disseminated, it is doubtful whether a risk exists at all, and if it has been disseminated, the question is whether the existing governance system can address it.

Second, the effectiveness of regulation: the core question is whether public-power governance of, or intervention in, the industry can actually achieve the risk-governance purpose that justifies regulation in the first place. Third, regulatory technology's application scenarios should be weighed against China's existing industrial background: not every enterprise can develop a large model, but many can expand business at the application layer, so room should be left for business models to develop there.

In particular, the definition of content producers should not be broadened. Fourth, coordination should be considered from the perspective of legislative basis: for example, real-name registration should be coordinated with the Cybersecurity Law, and algorithm assessment and filing should build on existing mechanisms. ■ Xu Ke, associate professor at the University of International Business and Economics: there are four major contradictions between the Management Measures for Generative Artificial Intelligence Services (Draft for Comments) and prior laws, existing practice, and the technology itself.

First, the contradiction between the new law and the old: what are the similarities and differences between "deep synthesis" under the Administrative Provisions on the Deep Synthesis of Internet Information Services and the generative AI technology covered by the Management Measures for Generative Artificial Intelligence Services (Draft for Comments)? Contradictions may arise in future law enforcement. Second, there may be contradictions between extraterritorial effect and territorial jurisdiction.

The draft Measures cover matters beyond the reach of the Personal Information Protection Law, and it is doubtful whether a departmental regulation can establish extraterritorial jurisdiction beyond its superior law. Third, the contradiction between network information content security management and the regulation of a general-purpose technology: confining general-purpose AI to the frame of network information content may lead to a mismatch between regulatory tools and regulatory objectives.

In fact, the risks of generative artificial intelligence take different forms in different scenarios, and determining them requires modular judgment. Fourth, the contradiction between whole-process security requirements and the inherent characteristics of the technology: whole-process management does not match how generative AI works. For example, the legality of pre-training data is difficult to guarantee, and there is no direct mapping between the training data and the information finally generated.

■ Zhang Xin, associate professor at the University of International Business and Economics: as for regulatory objects, China's AI enterprises have so far been concentrated at the application layer, with relatively few at the foundational and technical layers. As for regulatory methods, regulation oriented to the characteristics of the generative AI industrial chain can improve regulatory interoperability and consistency. In regulatory innovation, attention should be paid to regulatory resilience on the one hand; on the other, new regulatory methods represented by modular regulation should be actively explored.

Zhang Xin proposed four revisions to the Management Measures for Generative Artificial Intelligence Services (Draft for Comments). First, further clarify the definition of "generative artificial intelligence" in Article 2. Second, clarify the scope of "producers of product-generated content" in Article 5 in light of the industrial chain; it is inadvisable to make all actors bear content-producer responsibility in a one-size-fits-all manner.

Third, Article 7, on the accuracy of pre-training and optimization-training data sources, can be relaxed appropriately: as long as an organization or enterprise fulfills the corresponding obligations under acceptable technical conditions, it can be deemed to meet the accuracy principle. Fourth, Article 15 focuses on the technical process, which current technical means can hardly satisfy; it could be recast, from the perspective of result-based supervision, as preventing the recurrence of illegal or non-compliant content reported by users.

■ Professor Wan Yong of the Law School of Renmin University of China, on the challenge generative AI poses to copyright law's fair use system: generative AI may implicate the rights of reproduction, adaptation, and communication to the public under copyright law, yet the existing categories of fair use are hard to apply to AI technology. Solving these problems requires reforming the fair use system for the development of the AI industry, and there are two main approaches. One is to reshape the theoretical foundation by distinguishing "work-based use" from "non-work-based use"; when works are used for data-mining purposes, only some cases qualify as "non-work-based use". The other is to reform the institutional norms, either by adding specific exception clauses or by introducing an open-ended exception clause.

He suggested amending the Implementing Regulations of the Copyright Law to introduce special exceptions that balance industrial development with the rights of copyright owners. ■ Professor Su Yu of the Law School of the People's Public Security University of China: besides risk management, domestic generative AI is also in great need of institutional support. Generative AI models face difficulties in algorithm interpretation, algorithm auditing, the formation of algorithm standards, algorithm impact assessment, and algorithm certification.

The Management Measures for Generative Artificial Intelligence Services (Draft for Comments) are mainly concerned with information content security. For security protection, a sixfold overlapping mechanism is designed: control of generated results at the output end; restrictions on data sources and data content at the input end; expanded content-producer responsibility; a combination of user reporting and active supervision; broad information-provision obligations; and limited linkage with existing legislation on algorithm recommendation and deep synthesis.

Some of these mechanisms carry varying degrees of security redundancy. In addition, several governance points deserve full consideration, including separate classification of generated code, necessary differentiation of training data, type labels on output results, and a concrete definition of data-source legality. Overall, for the legal governance of generative AI, substantially reducing ineffective or inefficient risk redundancy should be an important goal of mechanism design.

■ Cheng Ying, senior engineer at the China Academy of Information and Communications Technology: the first major feature of generative AI is its general-purpose nature as a new underlying platform. The AIGC supply chain has lengthened, and whether and how developers, platform operators, B-end users, and C-end users share responsibility has become a key issue; Article 5 of the draft should divide responsibility among these subjects in detail.

The second feature is content generation, which raises problems such as intellectual property and false information. Generative AI represents a change in how knowledge is accessed and will command the vast majority of information sources; compared with the labeling obligations central to deep synthesis, it carries heavier obligations such as algorithm assessment and self-review. However, a one-size-fits-all requirement that training data and generated results be truthful may conflict with the technical nature of generative AI.

The third feature is data dependence, long a typical feature of AI but now taking new forms, such as cultural bias caused by insufficient Chinese-language corpora and data-leakage risks caused by the data siphon effect. The related legal obligations should stay consistent with the requirements of superior law. ■ Liu Xinyi, assistant researcher at the Institute of Scientific and Technical Information of China:

The UK's regulatory framework regulates AI applications by application scenario rather than regulating the technology itself, and sets no industry-wide or technology-wide rules or risk levels. On that regulatory basis, it explores quantitative and qualitative cost-benefit assessment amid the uncertainties of technology application and governance, which is instructive for the introduction of regulatory policy in China.

The current difficulties in regulating large models fall into three areas: first, technological limitations make high governance standards hard to meet; second, the methods and limits of large-model governance are hard to pin down; third, today's diverse governance tools have not yet played their full role. As multimodal large models develop and spread, future risk issues will gradually deepen.

It is recommended to build a differentiated, whole-lifecycle regulatory mechanism that coordinates development and security, strengthen supervision of key links in key areas of artificial intelligence, and advocate "governing technology with technology" through secure and trustworthy technologies. (Jincan)


2023-05-24 · Column: 科技派
