How to establish values: how can human values be integrated into artificial intelligence?

 


Image source: Visual China | NextQuestion: As artificial intelligence (AI) becomes more powerful and more deeply embedded in our lives, how AI is used and deployed matters more and more. For applications such as autonomous vehicles, online content recommendation systems, and social robots, making the ethics and values of AI systems consistent with those of humans has become an unavoidable problem.


As AI grows more powerful, it will take on increasingly important economic and social functions, and the tensions described above will become more pronounced. Specifically, we need to ask: which values should guide AI? Whose values are they? And how are they selected? These questions highlight the role played by AI principles: the fundamental values that drive AI decisions, large and small.


For humans, principles help shape how we live and our sense of right and wrong. For AI, principles can shape how it makes decisions that involve trade-offs, such as choosing between prioritizing productivity and helping those most in need. In a recent paper published in the Proceedings of the National Academy of Sciences (PNAS), researchers drew inspiration from philosophy to find a better way of establishing principles for AI.


Specifically, the researchers explored whether the so-called "veil of ignorance", a thought experiment designed to identify fair principles for collective decision-making in a society, could be applied to AI.


Two approaches have previously been used to identify values for AI. One is moral "intuitionism", which gathers people's (both experts' and laypeople's) moral intuitions about AI to help guide the development of the technology. The second is a "theory-led" approach, which starts from a preferred moral theory (such as utilitarianism or virtue ethics) and then works out what that theory implies for AI.


Through this approach, advocates of a particular philosophical position can describe more precisely what it means for AI to be "sufficiently benevolent" or to "promote the best interests". Although both approaches offer novel insights, they also have limitations. On the one hand, moral intuitions about technology may conflict with one another, forcing trade-offs or so-called "hard choices".


In addition, this approach may capture preferences that are highly contingent or morally problematic. On the other hand, when applied to technologies that operate at a societal scale, the philosophical expertise required by theory-led approaches sits in tension with participatory values, and risks imposing values in ways that people would find unacceptable.


Furthermore, although any specific moral theory may be popular among its followers, there is no guarantee that it will receive widespread support among people with different belief systems. Given the profound impact of these technologies on people's lives, we also do not want AI developers to simply encode certain values as being higher than others based on their individual preferences or moral beliefs.


Instead, the diversity of values, interests, and views in a pluralistic society suggests that a fair process is needed to determine which principles are appropriate for AI across society as a whole. In this context, the third approach aims to identify fair principles for governing AI by using the "veil of ignorance" (VoI).


The "veil of ignorance" was originally proposed by the philosopher John Rawls, and now it has become the basis of political philosophy thought experiment. On the basis of the tradition of social contract, the "veil of ignorance" experiment requires individuals to choose the principle of justice for the society, but individuals will not understand the potential information about their position in the society. If they do not understand their own or others' situation, they will exclude the possibility of argument based on prejudice or self-interest

Since this selection mechanism puts no one at an unfair advantage, the principles chosen under it are widely regarded as fair. (Image source: PNAS) Drawing on this framework, Gabriel proposed using the "veil of ignorance" to select principles for governing AI, rather than examining how the mechanism affects individual case-by-case choices. One advantage of focusing on principles is that, compared with a complex dataset containing a large number of specific case-by-case choices, principles can be described in terms that are easier to understand.

Principles are therefore more likely to be evaluated, debated, and endorsed by the public. Principles also tend to integrate different values into a single workable scheme, avoiding the problems that arise when individual values or data points conflict. In these experiments, the researchers found that the "veil of ignorance" approach encourages people to make decisions based on what they believe is fair, whether or not it benefits them directly.

Moreover, when participants reasoned behind the "veil of ignorance", they were more likely to choose an AI that helps the most disadvantaged people. These insights can help researchers and policymakers select principles for AI assistants in a way that is fair to all parties. The veil of ignorance (right) is a way of reaching consensus on a decision when a group holds differing opinions (left). (Image source: DeepMind)

The path to fairness: making AI decisions fairer. A key goal for AI researchers is to align AI systems with human values. However, there is no consensus on which set of human values or preferences should govern AI: we live in a world where people have different backgrounds, resources, and beliefs.

Given these significant differences in human values, how should we choose principles for AI? Although the challenge posed by AI has only emerged over the past decade, the question of how to make fair decisions has long philosophical roots. In the 1970s, the political philosopher John Rawls proposed the "veil of ignorance" to address this problem.

Rawls argued that when people choose principles of justice for a society, they should imagine making the choice without knowing their own position in that society, where "position" includes their social status and level of wealth. Without this information, people cannot decide in a self-serving way and instead tend to choose principles that are fair to everyone involved.

For example, consider how to divide a cake fairly at a birthday party: the trick is to have the person who cuts the cake choose their slice last. This way of hiding information may seem simple, but it has wide application in fields such as psychology and political science, helping people reflect on their decisions from a less self-interested standpoint. Building on this idea, earlier DeepMind research suggested that

the impartiality of the veil of ignorance may help promote fairness when aligning AI systems with human values. The researchers designed a series of experiments to test how the veil of ignorance affects the principles people choose to guide an AI system. The "veil of ignorance" can be used to choose principles for aligning AI with human morality when people occupy unequal positions.

The figure shows a group's baseline resource distribution, with individuals occupying positions of differing advantage (labeled 1 to 4). The group receives potential help from an AI system (labeled the "AI assistant"). One decision maker, who knows their position in the group, chooses a principle to guide the assistant; another decision maker, behind the "veil of ignorance", chooses a principle without knowing their position.

Once a principle is selected, the AI assistant acts on it and adjusts the allocation of resources accordingly. Asterisks (*) mark points where reasoning from fairness can influence judgments and decisions. (Image source: PNAS)

Efficiency first vs. fairness first? In an online "logging game", the researchers asked each participant to form a team with three computer players; each player's goal was to collect wood by harvesting trees in different fields.

Each group had some lucky players assigned to an advantageous position: their fields were densely wooded, so they could collect wood efficiently. The other team members were at a disadvantage: their fields were sparse, and harvesting wood took more effort. Each team was assisted by an AI system that could spend time helping individual members harvest trees.

The researchers asked participants to choose between two principles to guide the AI assistant's behavior. Under the "efficiency first" principle, the AI assistant would mainly work the denser fields to raise the team's overall harvest. Under the "fairness first" principle, it would focus on helping players in the disadvantaged fields.
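To make the difference between the two principles concrete, here is a minimal, hypothetical sketch of the logging game in Python. It is not the authors' implementation: the field densities, the size of the assistant's bonus, and the player names are assumptions chosen purely for illustration.

```python
# A minimal, hypothetical sketch of the "logging game" allocation rules described
# above. NOT the authors' implementation: densities, bonus size, and player names
# are illustrative assumptions.

FIELD_DENSITY = {"player_1": 8, "player_2": 8, "player_3": 2, "player_4": 2}  # trees harvested per round (assumed)

def assistant_target(harvests, principle):
    """Choose which player the AI assistant helps this round."""
    if principle == "efficiency_first":
        # Work the densest field: maximizes the team's total harvest.
        return max(FIELD_DENSITY, key=FIELD_DENSITY.get)
    if principle == "fairness_first":
        # Help whoever has harvested the least so far: narrows the gap.
        return min(harvests, key=harvests.get)
    raise ValueError(f"unknown principle: {principle}")

def play(principle, rounds=10, assist_bonus=4):
    """Simulate one game and return each player's total harvest."""
    harvests = {player: 0 for player in FIELD_DENSITY}
    for _ in range(rounds):
        helped = assistant_target(harvests, principle)
        for player, density in FIELD_DENSITY.items():
            harvests[player] += density + (assist_bonus if player == helped else 0)
    return harvests

if __name__ == "__main__":
    for principle in ("efficiency_first", "fairness_first"):
        result = play(principle)
        print(principle, result, "team total:", sum(result.values()))
```

In this toy version, "efficiency first" keeps directing help to the densest field and so maximizes the team total, while "fairness first" always helps whoever has harvested the least, narrowing the gap between advantaged and disadvantaged players.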

A sketch of the "logging game": players (shown in red) occupy either a dense field (the top two quadrants), where harvesting is easier, or a sparse field, where collecting wood takes more effort. (Image source: DeepMind) The researchers placed half of the participants behind the veil of ignorance: they chose between the moral principles without knowing which field would be theirs, and therefore without knowing how advantaged or disadvantaged they would be. The other half knew, when choosing, whether their situation was better or worse than the others'.

Encouraging fairness in decision-making. The study found that participants who did not know their position consistently preferred the "fairness first" principle, under which the AI assistant helps members of the disadvantaged group. This pattern appeared across five different variants of the game and cut across social and political lines: regardless of participants' attitude to risk or their political preferences, they tended to choose "fairness first".

By contrast, participants who knew their position were more likely to choose whichever principle benefited them most, whether that was "fairness first" or "efficiency first". The figure above shows the effect of the "veil of ignorance" on the likelihood of choosing the "fairness first" principle, under which the AI assistant helps those who are worse off. Participants who did not know their position were more likely to support this principle for governing the AI's behavior.

(Image source: PNAS) When the researchers asked participants why they made their choices, those who did not know their position were especially likely to voice concerns about fairness, often explaining that it was right for the AI system to focus on helping the worse-off members of the group. By contrast, participants who knew their position discussed their choices more often in terms of personal benefit.

After the logging game ended, the researchers posed a hypothetical to the participants: if they played the game again, knowing they would be in a different field, would they choose the same principle as before? Interestingly, some participants said they would keep their original choice even though switching would now benefit them. The "veil of ignorance" increased the likelihood that participants stood by their chosen principle (reflective endorsement), especially among those who would have benefited from changing it.

Error bars show 95% confidence intervals. Reasoning behind the "veil of ignorance" increased the likelihood that participants kept their choice of principle, especially when they had an incentive to switch for personal gain. (A) Participants completed a descriptive version of the game, with no real-time tree-"cutting" component (P = .005; logistic regression).

(B) Participants completed an immersive version of the game, "harvesting" trees through real-time virtual avatars (P = .036; logistic regression). (Image source: PNAS) The study found that people who had originally chosen without knowing their position were more likely to keep supporting their principle, even knowing that it might no longer benefit them in the new game.
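The paper reports these effects as logistic regressions (P = .005 and P = .036). As a rough illustration of what such an analysis looks like, here is a hypothetical sketch using statsmodels; the column names and the toy data are invented for illustration and are not the published dataset.

```python
# Hypothetical sketch of a logistic regression testing whether reasoning behind
# the veil of ignorance predicts keeping one's original choice of principle.
# Column names and data are invented for illustration, not the published dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # 1 = participant maintained their original principle on reflection
    "kept_choice": [1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    # 1 = participant originally chose without knowing their own position
    "behind_veil": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

# The coefficient on behind_veil estimates how choosing behind the veil changes
# the log-odds of sticking with the originally chosen principle.
model = smf.logit("kept_choice ~ behind_veil", data=df).fit(disp=False)
print(model.summary())
```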

This provides further evidence that the "veil of ignorance" encourages participants to make fair decisions and guides them toward principles they are willing to stand by even when they no longer benefit from them directly. Finding fairer principles for AI. AI technology already has a profound impact on our lives, and the principles that govern AI shape those impacts and how the resulting benefits are distributed.

This research focused on a case in which the choice of principle had a clearly observable effect in the experiment. That will not always be so: AI is deployed across many domains and often relies on large numbers of rules to guide it, and those rules may interact in complex ways. Even so, the "veil of ignorance" can still inform the choice of principles, helping to ensure that the rules we adopt are fair to all parties.

To ensure that we build AI systems that benefit everyone, extensive further research is still needed, gathering inputs, methods, and feedback from across disciplines and from all parts of society. The "veil of ignorance" offers a starting point for selecting principles for AI.

References:

L. Weidinger et al., Using the Veil of Ignorance to align AI systems with principles of justice. Proc. Natl. Acad. Sci. U.S.A. 120, e2213709120 (2023).

W. D. Ross, The Right and the Good (Oxford University Press, 2002). E. Awad et al., The moral machine experiment. Nature 563, 59–64 (2018).

L. Jiang et al., Delphi: Towards machine ethics and norms. arXiv (2021). http://arxiv.org/abs/2110.07574 (Accessed 1 June 2022).

Asilomar AI Principles, Future of Life Institute. https://futureoflife.org/open-letter/ai-principles/ (Accessed 24 March 2023).

L. Floridi et al., AI4People - An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 28, 689–707 (2018).

T. Hagendorff, A virtue-based framework to support putting AI ethics into practice. Philos. Technol. 35, 1–24 (2022).

C. Cloos, "The Utilibot project: An autonomous mobile robot based on utilitarianism" in 2005 AAAI Fall Symposium on Machine Ethics (2005), pp. 38–45.

W. A. Bauer, Virtuous vs. utilitarian artificial moral agents. AI Soc. 35, 263–271 (2020). R. Dobbe, T. K. Gilbert, Y. Mintz, Hard choices in artificial intelligence. Artif. Intell. 300, 103555 (2021).

B. Goodman, “Hard choices and hard limits in artificial intelligence” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (2021), pp. 112–121.

V. Prabhakaran, M. Mitchell, T. Gebru, I. Gabriel, A human rights-based approach to responsible AI. arXiv (2022). http://arxiv.org/abs/2210.02667. (Accessed 1 December 2022).

I. Gabriel, Artificial intelligence, values, and alignment. Minds Mach. 30, 411–437 (2020). S. Mohamed, M. T. Png, W. Isaac, Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philos. Technol. 33, 659–684 (2020).

J. Rawls, A Theory of Justice (Oxford Paperbacks, 1973).
