
Researchers are developing a method to teach notions of fairness to AI

Justice

Although people may agree on a dictionary definition of justice, applying that definition to concrete cases is far more contentious.

Just as deciding what is or is not fair can be a real dilemma for people, it is also a challenge for artificial intelligence, one that a new initiative at Michigan State University aims to address.

Fairness classes for AI algorithms

Since artificial intelligence systems are increasingly present in everyday activities and services, impartiality must be ensured in the platforms that decide who receives adequate medical care, who qualifies for a bank loan, or who is offered a job.

With funding from Amazon and the National Science Foundation, Pang-Ning Tan, a researcher and professor in the Department of Computer Science and Engineering at Michigan State University, has spent the past year training artificial intelligence algorithms to help them distinguish between fair and unfair outcomes of their own decisions.

"We are trying to design AI systems that are not just for computer science, but that also bring value and benefits to society. So I started to think about which areas are really challenging for society right now," the researcher said of the motivation behind his initiative.

The project emphasizes the need for initiatives with a direct impact on users. Developing this point, Tan commented: "Fairness is a very big issue, especially as we become more reliant on AI for everyday needs, like healthcare, but also things that seem mundane, like filtering spam or putting stories in your news section."

Even though they are automated systems, AI algorithms can carry biases inherited from their training data or introduced directly by their creators. For example, according to research by Tan's own team, there are AI systems that discriminate racially when allocating medical care and that discriminate against women in job application systems.

On this reality, Abdol-Hossein Esfahanian, a member of Tan's research team, commented: "Algorithms are created by people, and people usually have biases, so those biases filter through (…). We want to have fairness everywhere, and we want to have a better understanding of how to evaluate it."

Drawing on theories from the social sciences, Tan and his team seek to approximate as universal a notion of fairness as possible. To that end, the principles of justice taught to the algorithm will not come from a single viewpoint, which raises the challenge of deciding between opposing or contradictory positions.

"We are trying to make AI fairness-aware, and to do that you have to tell it what is fair. But how do you design a measure of fairness that is acceptable to everyone?" Tan said, adding that "we are seeing how a decision affects not only people, but also their communities and social circles."
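The article does not say which measure of fairness Tan's team uses, but the idea of "telling" a system what is fair can be illustrated with one widely used criterion, demographic parity, which compares a model's positive-decision rates across groups. The sketch below is hypothetical: the function names and the loan-decision data are invented for illustration, not taken from the research.

```python
def positive_rate(decisions):
    """Fraction of decisions that are positive (e.g., loan approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rate between any two groups.

    A gap of 0 means all groups are approved at the same rate; the larger
    the gap, the further the system is from demographic parity.
    """
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

Demographic parity is only one of several competing criteria (others, such as equalized odds, can contradict it on the same data), which is precisely the difficulty Tan describes: there is no single measure that everyone accepts.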

The work is ambitious and, despite progress, is only beginning. "This is very much ongoing research. There are many problems and challenges. How do you define fairness? How can you help people trust these systems that we use every day?" Tan reflected, adding that "our job as researchers is to find solutions to these problems."

The full report on this research can be found on the Michigan State University website.