White House ‘AI Bill of Rights’ explains how to make artificial intelligence safer

Despite the important and ever-growing role of artificial intelligence in many parts of modern society, there are very few policies or regulations governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this area, leading to decisions and practices that have drawn criticism.

Google dismissed an employee who publicly raised concerns about how a certain type of AI may contribute to environmental and social problems. Other AI companies have developed products used by organizations such as the Los Angeles Police Department, where they were shown to reinforce existing practices rooted in racial bias.

There are some government recommendations and guidelines regarding the use of AI. But in early October 2022, the White House Office of Science and Technology Policy significantly added to the federal guidance by issuing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology says the protections outlined in the document should be applied to all automated systems. The plan sets out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can serve as a guide to help prevent AI systems from limiting the rights of US residents.

As a computer scientist who studies the ways in which people interact with AI systems – and in particular how anti-Blackness mediates these interactions – I find this guide to be a step in the right direction, even though it has some holes and is not enforceable.

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk that AI promotes discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities often have to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle concerns the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks companies to develop AI systems that do not treat people differently based on their race, gender or other protected-class status. It suggests companies use tools such as equity assessments that can help evaluate an AI system's impact on members of exploited and marginalized communities.

These first two principles address the major issues of bias and equity encountered in the development and use of AI.

Privacy, transparency and control

The final three principles describe ways to give people more control when interacting with AI systems.

The third principle concerns data privacy. It aims to ensure that people have more say in how their data is used and are protected against abusive data practices. This section is meant to address situations where, for example, companies use deceptive design to manipulate users into giving up their data. The blueprint calls for practices such as not collecting a person’s data unless they consent, and asking for consent in a way that the person can understand.

The next principle deals with “notice and explanation”. It highlights the importance of transparency – people need to know how an AI system is being used, as well as how an AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child abuse – systems that most people don’t realize are being used, even when they are under investigation.

The AI Bill of Rights provides a directive that people in New York in this example who are affected by the AI systems in use must be informed that an AI was involved and have access to an explanation of what the AI system did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The final principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and fallback. The section clarifies that individuals should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, consider the case of a person applying for a mortgage. They would be informed if an AI algorithm was used to review their application and would have the option to opt out of this use of AI in favor of a real person.

Smart guidelines, no enforceability

The five principles set out in the AI Bill of Rights address many of the issues academics have raised regarding the design and use of AI. Nevertheless, it is currently a non-binding and non-enforceable document.

It may be too much to hope that industry and government agencies will put these ideas into practice in the exact way the White House advocates. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will keep pushing for self-regulation.

Another problem I see with the AI Bill of Rights is that it doesn’t directly call out systems of oppression – such as racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of attention to systems of oppression is a notable hole and a known issue in AI development.

Despite these shortcomings, this plan could be a positive step toward better AI systems, and perhaps the first step toward regulation. A document like this, while not a policy, can be a powerful reference for people advocating for changes in how an organization develops and uses AI systems.

Christopher Dancy, Associate Professor of Industrial and Manufacturing Engineering and Computer Science and Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.