AI experts propose guidelines for safe systems
New guidelines for developing artificial intelligence products safely have been released by a group of AI experts and data scientists around the world.
The World Ethical Data Foundation's 25,000 members include staff working at companies such as Meta, Google, and Samsung.
In the framework, developers are provided with 84 questions to consider before starting an AI project.
The Foundation is also inviting the public to submit their own questions, which will be considered at its next annual conference.
The framework was released as an open letter, which seems to be the AI community's preferred format; the document has been signed by hundreds of people.
Artificial intelligence allows computers to act and respond almost like humans.
Computers can be fed huge amounts of information and trained to recognize patterns, enabling them to make predictions, solve problems, and even learn from their own mistakes.
As well as data, AI relies on algorithms – lists of rules that must be followed in the correct order.
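The idea of an algorithm as an ordered list of rules can be sketched in a few lines of code. The example below is purely illustrative and not drawn from the Foundation's framework; the rules, labels, and messages are invented for demonstration.

```python
# Purely illustrative: an "algorithm" as an ordered list of rules,
# where each rule is checked in turn and the first match wins.

def classify_message(text):
    """Apply simple rules in a fixed order and return a label."""
    rules = [
        (lambda t: "free money" in t.lower(), "spam"),
        (lambda t: t.isupper(), "shouting"),
        (lambda t: len(t) == 0, "empty"),
    ]
    for condition, label in rules:  # order matters: rules run top to bottom
        if condition(text):
            return label
    return "ok"

print(classify_message("FREE MONEY now!"))  # matches the first rule: "spam"
print(classify_message("hello"))            # matches no rule: "ok"
```

Reordering the rules would change the result for some inputs, which is why the order in which rules are followed is part of the algorithm itself.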
The Foundation, launched in 2018, is a non-profit global group that brings together technology experts and academics to work on new technologies.
For developers, it asks how they will avoid incorporating bias into AI products, and how they would handle situations in which a tool's output could lead to law-breaking.
Yvette Cooper, shadow home secretary, said Labour would criminalize those who use AI tools for terrorist purposes.
Ian Hogarth, a tech entrepreneur and AI investor, has been appointed to lead an AI taskforce by Prime Minister Rishi Sunak. Hogarth told me this week he wanted “to better understand the risks associated with these frontier AI systems” and hold the companies responsible.
Among the other factors taken into account are the data protection laws of different countries, whether it is clear to a user that they are interacting with artificial intelligence, and whether human workers who input or tag data used to train the product were treated fairly.
The full list of questions is divided into three chapters: questions for individual developers, questions for teams, and questions for testers.
Willo, a Glasgow-based recruitment platform, recently launched an AI tool alongside its service. The company said it took three years to collect enough data to build it.
The firm paused its development at one point due to ethical concerns raised by customers, according to co-founder Andrew Wood.
In his words: “We don’t use our AI capabilities to make any decisions. They are solely left to the employer.
“We may be able to utilize AI in certain areas, such as scheduling interviews, but we will always rely on humans to make the final decision whether to hire a candidate.”
The Foundation framework is based on transparency to users, according to co-founder Euan Cameron.
“You can’t sneak AI through the back door and pretend it was created by human beings,” he said.
“It needs to be made clear that it was done using artificial intelligence. That really caught my attention.”