
The right to explainability of artificial intelligence

Artificial intelligence technology is not new. The concept was developed at least seven decades ago, in 1956 to be precise. And although the field has gone through ups and downs over the years, advances in computer processing and access to more data than at any other time in history have now made it possible for AI to begin to revolutionize the lifestyle of humanity at an alarming rate.



Do we understand how artificial intelligence works? Do we know the ethical and legal implications of using high-impact predictive systems in areas such as justice or social security? In this article, we will address what the explainability of artificial intelligence is and why it should be promoted as a right in today's democratic societies.


What does AI explainability refer to?

Explainability in Artificial Intelligence (AI) refers to the ability to understand and justify how AI algorithms and systems arrive at their decisions and outcomes. In a legal context, this characteristic becomes of fundamental importance to lawyers, as it allows them to analyze and evaluate the reasoning behind AI conclusions in court cases or situations involving this technology.

Unlike human decisions, where the factors and criteria used to reach a certain conclusion can be examined, AI models often operate as black boxes, making it difficult to understand their internal logic.


What is the black box? 

The black box is a phenomenon that occurs in neural network algorithms, which work through layers: the lowest layer takes the input data and distributes it to the following layers that make up the neural network until, eventually, the top layer produces a response.

When this process is repeated enough times, it allows the neural network to learn the differences between objects to classify them more accurately. 

The problem is that,  just like in the human brain, learning is encoded in the strength of multiple connections, rather than stored in specific locations, as in a conventional computer.
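To make this layered processing concrete, here is a minimal sketch in Python with NumPy, using made-up random weights rather than a trained model, purely for illustration. The point is that everything the network "knows" is spread across numeric connection strengths that are not individually readable.

```python
import numpy as np

# A toy two-layer network with arbitrary weights, standing in for a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # connection strengths: input layer -> hidden layer
W2 = rng.normal(size=(8, 3))   # connection strengths: hidden layer -> output layer

def classify(x):
    hidden = np.tanh(x @ W1)       # the lower layer distributes the input to the next layer
    scores = hidden @ W2           # the upper layer produces the response
    return int(np.argmax(scores))  # the predicted class

print(classify(rng.normal(size=4)))
# The "learning" lives entirely in W1 and W2. No single value in those matrices
# says anything readable about why a given class was chosen: the black box in miniature.
```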

Pierre Baldi, a machine learning researcher at the University of California, referring to the black box problem, says: “Where is the first digit of your phone number stored in your brain? It’s probably in a bunch of synapses, probably not too far from the other digits,” and Jeff Clune of the University of Wyoming adds: “Even though we built these networks, we can’t understand them any better than we understand the human brain.” 

Not knowing how a neural network algorithm comes to produce a certain result represents a major problem in defending or challenging a result obtained by AI.

It is troubling to imagine a scenario in which a lawsuit is brought over an outcome produced by AI technology and the lawyers involved cannot explain how that outcome was reached. It is not enough to say, “that is how the AI did it.” In public administration contexts, this is even more disturbing.

AI explainability in public administration contexts

In constitutional democratic societies, the exercise of public power is subject to legal limits that can be found, depending on the country, in constitutions, laws or even in international conventions.

Many of these regulations include the principles of justification and motivation, which require that the acts of all public entities state the regulatory provisions applicable to each specific case and the reasons or arguments that justify their actions.

This applies to everything from something as simple as a traffic ticket to the issuance of rulings by courts and tribunals. In Mexico, these principles are contained in Article 16 of the Constitution.

The dilemma begins when artificial intelligence technology begins to be used in public administration, especially deep learning algorithms, whose neural networks bring with them the black box problem. 

For example, the SAT (Mexico's tax administration) is already known to use certain types of AI to detect taxpayers who evade taxes or simulate transactions, and it may not be long before other Mexican state entities also use AI in their interactions with individuals. Let's assume the following hypothetical case:

CONACYT uses a new AI system to determine the allocation of scholarships for postgraduate studies abroad. The AI system is informed of the criteria for carrying out the evaluation and is subsequently presented with the data of each of the applicants. In this way, the AI determines the list of applicants selected for a postgraduate scholarship abroad.




A person who had participated in this competition and had not been selected could easily submit a request for information to CONACYT, asking for a detailed explanation of the reasons why his or her application was not chosen for a postgraduate scholarship.

In this exercise of the right of access to information, CONACYT could answer: “Because the requirements were not met” or “Because the system determined that the requirements were not met.” However, a request for a detailed explanation implies that CONACYT at least provide the reasons behind that determination: undergraduate grades, diplomas and additional training courses, even proficiency in foreign languages.

However, if the AI was not programmed to explain its decisions, the black box phenomenon would prevent us from knowing the reasoning behind the selection of applicants for a postgraduate scholarship.

Would a person legally affected by a decision made by an AI have the right to an explanation of the reasons the AI took into account to reach that decision?

It is quite disturbing that the above scenario, hypothetical as it may be, has real-world equivalents.

In Baltimore, a civil rights attorney representing an elderly disabled client who had been inexplicably turned down for Medicaid discovered in the midst of litigation that the state of Maryland had recently integrated a new algorithm and could not explain why it had decided to cut off her client's Medicaid.

Cases like this one led the Biden administration to publish the “Blueprint for an AI Bill of Rights,” which includes the right to AI explainability.


The right to AI explainability

As AI begins to affect the rights of more and more people around the world, it is worth asking the following questions: Do people have the right to know when a decision affecting them has been made by AI? And if so, do they have the right to know how AI has made that decision? Is the right to explainability a new right or can it come from existing human rights?

In the absence of an express normative consolidation of the right to AI explainability, to answer the above questions we may have to look at the closest thing we have today, at least in Mexico and the signatory countries of the American Convention on Human Rights: the right to information. According to researcher Ernesto Villanueva, the right to information is  the human right of every person to obtain information, inform and be informed.  In this sense, he establishes that this right is composed of at least the following facets: 

  1. Right to access information: This is the possibility of accessing public files. 

  2. Right to inform: Freedom of expression and the press. 

  3. Right to be informed: Receive objective, timely and complete information. 

If, through the right to information, all people have the right to access information that concerns them, including how decisions that affect their lives are made, then in the context of AI this implies that people have the right to know how an artificial intelligence system reaches a conclusion or decision that may have an impact on their rights, freedoms or interests.

In the case of public administration specifically, the principles of justification and motivation require that decisions taken by institutions that affect the legal sphere of individuals be based on clear and justified criteria. In the case of AI, this translates into the need for AI algorithms and systems to provide a coherent and transparent explanation of how the data has been processed and how a certain conclusion has been reached.

The convergence of these two elements gives rise to the right to AI explainability. This right implies that  people have the legitimate expectation of understanding the reasoning behind automated decisions made by AI systems . In this way, accountability is guaranteed and possible unintentional biases, errors or discriminations in the decision-making process can be detected.

Regulating AI explainability and XAI

As AI law becomes more established as an independent branch of law, we are likely to see human-made law attempt to regulate what AI creates.

It is enough to look at the example of the proposed  European Union Artificial Intelligence Law (AI Act),  which, although still far from having a version approved by the European Council, already addresses the concept of explainability by stating that artificial intelligence systems designed for law enforcement should be considered “high risk” when they may “impede the exercise of important fundamental procedural rights, such as the right to effective judicial protection and an impartial judge, as well as the rights of defence and the presumption of innocence, especially when  such AI systems are not sufficiently transparent, explainable or well documented”. 

Faced with the black box problem, some artificial intelligence developers around the world have proposed the glass box methodology as a solution, as well as the so-called explainable artificial intelligence (XAI). On the one hand, the glass box model is a simplified version of a neural network that allows the data influencing the model to be traced. However, learning from messy, unstructured data requires deep neural networks with many layers, which are consequently opaque (black boxes).
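To illustrate the glass box idea, here is a minimal sketch in Python, assuming scikit-learn and the classic iris dataset (both chosen only for the example). Instead of a simplified neural network, it uses a linear model whose learned weights can be read directly, so every factor behind a prediction is traceable.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# A "glass box" in miniature: a linear classifier whose decision criteria
# are directly readable, unlike the opaque layers of a deep network.
data = load_iris()
model = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# Each coefficient states how strongly a feature pushes the prediction
# toward or away from a given class; this is the traceability a black box lacks.
for cls, weights in zip(data.target_names, model.coef_):
    print(cls, dict(zip(data.feature_names, weights.round(2))))
```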

On the other hand, XAI is about helping people understand what features of the data a neural network takes into account, for example by training a neural network system that can explain, in natural language, the reasoning behind a response, as a human would. It is still not clear, however, that a machine learning system will always be able to provide an explanation of its actions in natural language.
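Short of natural-language explanations, a more modest XAI technique already in common use is to measure which input features a trained model actually relies on. The sketch below is a hypothetical example, again assuming scikit-learn and the iris dataset, using permutation importance: it shuffles each feature in turn and observes how much the model's accuracy drops.

```python
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

# Train a small neural network (a black box), then ask which features it depends on.
data = load_iris()
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(data.data, data.target)

# Permutation importance: shuffle one feature at a time and measure the drop in accuracy.
result = permutation_importance(model, data.data, data.target, n_repeats=10, random_state=0)
for name, importance in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

An explanation of this kind does not open the black box, but it does give an affected person something concrete to contest: which factors weighed on the decision.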


Frequently asked questions about the right to AI explainability

What is the right to AI explainability?

It refers to the right of individuals to understand the reasoning behind automated decisions made by AI systems.

Where does the right to AI explainability come from?

It is based on the convergence of the right to information and the principles of justification and motivation when the legal sphere of a person is affected. 

What is the black box phenomenon in AI?

The phenomenon that occurs in neural network artificial intelligence algorithms, which makes it extremely difficult to determine how an AI obtained a certain result.

What is XAI?

Explainable artificial intelligence: systems designed to help people understand what features of the data a neural network takes into account, for example by explaining, in natural language, the reasoning behind a response, as a human being would.

