Fujitsu offers a new resource toolkit to offer guidance on the ethical impact of AI systems

Fujitsu announced the development of a resource toolkit offering developers guidance for assessing the impact and ethical risks of AI systems based on international guidelines on AI ethics. Fujitsu will offer these resources for free from February 21, 2022 to promote the safe and secure deployment of AI systems in society.

The toolkit consists of a variety of case studies and reference materials, including a newly developed method for clarifying ethical requirements in AI ethics guidelines written in natural language, as well as for applying those requirements to real AI systems. Japanese resources can be downloaded here, with English availability to follow in the near future.

With these resources, Fujitsu aims to prevent misunderstandings and potential risks caused by differences in the interpretation of descriptions in the guidelines, providing developers and AI system operators with new tools to thoroughly identify and prevent possible ethical issues from the start of the development process, in line with international best practices.

Dr. Christoph Lütge of the Technical University of Munich, a leading authority on responsible AI research and business ethics, comments: “In Europe there is a growing debate about the regulation of AI, and one of the key questions is how we close the gap between principles and practice, or ‘what’ and ‘how?’ I believe the results of this research are very significant in that they allow us to put principles into practice. I would also like to express my deep appreciation for the decision to open up the research findings and stimulate discussion around the world.”

Going forward, Fujitsu will actively partner with government agencies, private companies and leading researchers to refine and promote its newly developed methodology, and aims to release an expanded version of the resource toolkit during fiscal year 2022.

Context

In April 2021, the European Commission published a draft regulatory framework calling for a comprehensive ethical response from developers, users and stakeholders of AI systems, in response to growing concerns about algorithmic biases and discriminatory decision-making in AI and machine learning applications.

To fully commit to the responsible use of technology and earn society’s trust in AI systems and the businesses and organizations involved in this space, Fujitsu formulated its own AI Commitment in 2019 and established an AI Ethics and Governance Office to develop and enforce robust policies for AI ethics and to promote ethical governance of organizational AI and ensure its effectiveness. Now, Fujitsu will move from principle to practice by regularly implementing best practices in the real world to ensure the realization of ethical, safe and transparent AI and machine learning technologies.

At present, it is common practice in the development of AI systems to identify possible ethical risks based on the ethical guidelines for AI issued by government authorities, institutions and companies. These guidelines are written in natural language, which contributes to possible differences in interpretation and misunderstandings between designers and developers that can lead to inappropriate or insufficient measures. Under this approach, it is also difficult to judge whether the content of the guidelines has been thoroughly and appropriately reviewed.

However, many challenges remain: a misinterpretation of the guidelines during the design phase of new technologies can lead to insufficient or inappropriate measures to counter the risks.

New assessment method and resources to help clarify guidelines and ethical risks

When preparing this new toolkit and guidance for developers, Fujitsu performed analyses of past AI-related incidents collected from the Partnership on AI international consortium’s AI Incident Database (1). This process led to the conclusion that ethical issues related to AI systems can be contextualized as the exchange of information (“interactions”) between discrete elements within an AI system, and between an AI system and its users and other stakeholders.

Based on these results, Fujitsu has successfully developed an assessment method to systematically identify relevant ethical issues related to AI systems, enabling the creation of AI ethics models (2) capable of clarifying the interpretation of AI ethics guidelines.
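As a rough illustration of this interaction-based idea (not Fujitsu’s published model: the component names, interaction types and risk mappings below are hypothetical assumptions for the sketch), an AI system can be described as a set of interactions between its elements and stakeholders, with each kind of interaction pointing reviewers to candidate ethical risks:

```python
# Illustrative sketch only: a toy interaction-based ethics check.
# The names, interaction kinds, and risk mappings are hypothetical
# assumptions, not part of Fujitsu's published assessment method.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    source: str   # e.g. "training_data", "model", "operator"
    target: str   # e.g. "model", "applicant"
    kind: str     # e.g. "data_flow", "decision", "feedback"

# Hypothetical mapping from interaction kinds to ethical risks to review.
RISKS_BY_KIND = {
    "data_flow": ["bias in collected data", "privacy violation"],
    "decision": ["discriminatory outcome", "lack of explainability"],
    "feedback": ["feedback loop amplifying bias"],
}

def identify_risks(interactions):
    """List the candidate ethical risks suggested by each interaction."""
    findings = []
    for ia in interactions:
        for risk in RISKS_BY_KIND.get(ia.kind, []):
            findings.append((f"{ia.source} -> {ia.target}", risk))
    return findings

# Example: a loan-screening AI system modeled as three interactions.
system = [
    Interaction("training_data", "model", "data_flow"),
    Interaction("model", "applicant", "decision"),
    Interaction("applicant", "training_data", "feedback"),
]

for path, risk in identify_risks(system):
    print(f"{path}: review for {risk}")
```

The point of such a model is that risks are attached to information flows rather than to free-text guideline clauses, so reviewers can check each interaction systematically instead of interpreting natural-language guidance ad hoc.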

Fujitsu applied its new assessment method to 15 representative cases from the AI Incident Database (164 global cases registered as of February 21, 2022; examples included cases in areas such as the financial and human resources sectors). By applying the new method, all ethical issues that arose in these real use cases were successfully identified as risks in advance in verification testing, the results of which have been published.
