Explainable, Responsible, and Trustworthy Artificial Intelligence

What it is and how to measure it

About a year and a half ago, I wrote a blog post titled “What Is Explainable Artificial Intelligence and Is It Needed?” In that post, I discussed how transparent and explainable human decision-making processes really are.

I also gave examples of the trade-off between the performance of AI applications and their explainability: as performance increases, explainability tends to decrease.

The general opinion is that a trade-off has to be made. I thought it would be useful to dig more deeply into the same subject, because we need a new perspective that increases both technical value and business value, where “business value” means the impact AI has on businesses.

In daily life, people benefit from AI whether they are aware of it or not. How so? You start writing an e-mail and an assistant automatically completes your sentences. The malicious e-mails you do not want to see land automatically in your spam folder.

You describe your problem to the chatbot that appears when you visit your bank’s website. Your bank recognizes you by your voice. You no longer need a password for the sites you visit. You unlock your mobile phone with face recognition.

Recommendation systems know which movie you will like. Whether or not you are interested in the field of AI, each of us is a user in the end.

Whether we are users or developers of AI, it is useful to know that there are a number of principles we must follow. In this article, I am going to focus on those principles.

What is Trustworthy AI?

At the international level, there is currently a consensus on six dimensions of what makes AI “trustworthy”:

  • Justice
  • Accountability
  • Value
  • Robustness
  • Repeatability
  • Explainability

While justice, accountability, and value embody our social responsibility, robustness, repeatability, and explainability pose enormous technical challenges. Explainability is technically the most difficult of these and the one that most affects companies’ business processes, so let’s try to understand it first.

First, let’s recall the topic of explainable AI (XAI):

As I mentioned in detail in my previous article, one-pixel attacks can be mounted against image-processing applications built with deep learning. While our eyes still interpret the image correctly when only a single pixel is changed, a deep learning model can be led to a wildly wrong prediction. Adversarial attacks, including those generated with Generative Adversarial Networks, also achieve significant success rates in other areas such as natural language processing and speech recognition. The vulnerability of deep neural networks to such attacks casts doubt on the decision-making processes these methods learn. Therefore, for neural network approaches to gain widespread public trust, and to ensure that these systems are truly fair, we need humanly understandable explanations for the decisions these models make.
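To make the idea concrete, here is a minimal sketch of a one-pixel attack in Python. The `predict` function is a toy stand-in classifier, not a real deep network, and the brute-force search over pixels replaces the differential-evolution search used in the published attack; both are assumptions made purely for illustration.

```python
import numpy as np

def predict(image):
    # Toy stand-in classifier: scores two "classes" from the mean brightness
    # of the top and bottom halves of the image, then applies a softmax.
    logits = np.array([image[:16].mean(), image[16:].mean()])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def one_pixel_attack(image, target_class, value=1.0):
    """Brute-force search for the single pixel whose change most increases
    the probability of `target_class`."""
    best_prob, best_coords = -1.0, None
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            perturbed = image.copy()
            perturbed[i, j] = value          # change exactly one pixel
            prob = predict(perturbed)[target_class]
            if prob > best_prob:
                best_prob, best_coords = prob, (i, j)
    return best_coords, best_prob

image = np.random.rand(32, 32)               # a fake grayscale 32x32 "image"
print("original prediction:", predict(image))
coords, prob = one_pixel_attack(image, target_class=0)
print(f"changing pixel {coords} pushes class 0 probability to {prob:.3f}")
```

The point is not the toy model itself but the asymmetry it illustrates: a change far too small for a human to notice can still move the model's output.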

  • First, explanations are required to justify decisions to end-users, such as patients or customers, who are affected by systems in which decisions are made with AI technologies. In some jurisdictions this is also a legal requirement: under the GDPR, adopted in the European Union in 2016, data controllers are obliged to inform the people whose personal data they process about that processing activity.
  • Second, explanations are also important for the professionals who will use the system. Doctors and judges, for example, will want to understand the strengths and limitations of a system before they rely on and act on its predictions.
  • Third, explainability can be an excellent source of knowledge discovery. Neural networks are known to be particularly good at finding patterns in data. Therefore, being able to explain algorithms learned by neural networks can also expose valuable information that would be difficult for humans to extract from vast datasets.
  • Finally, being able to explain neural networks may allow the developers and researchers of these methods to better understand and improve AI systems. For example, after seeing a series of explanations, even people who are not machine learning experts can suggest improvements.

Where does the need for explainability of AI technologies in business come from?

AI may be a $15 trillion transformational opportunity for business and society, but as it gets more complex, we are increasingly surrounded by algorithmic “black box” decisions, and that makes explainability even more critical.

In PwC’s 2017 Global CEO Survey, 67% of the business leaders who participated believed that AI and automation will adversely affect stakeholder trust levels in their industry in the next five years.

Criteria for measuring the required level of interpretability

  • AI Type: The type of AI is the main factor determining how interpretable the system can be and which techniques can be applied. There is an important distinction between rule-based and non-rule-based systems.
  • Interpretability Type: Relates to the principles of interpretability, transparency, and explainability. Explainability is about organizations understanding the decisions made by AI, while transparency sheds light on so-called black-box models.
  • Usage: How the model’s outputs are used in practice; as discussed below, the more critical the use case, the more interpretability it demands.

The more critical the use case, the more interpretability will be required. However, the need to get inside the black box can limit the scope of the AI system. How can you balance the trade-offs, improving interpretability without giving up accuracy and performance? A comprehensive assessment of the benefits and risks of a use case yields a set of recommendations on interpretability and risk management, which helps managers make decisions that optimize performance and return on investment.

🔴Revenue: The sum of the economic impact of a single prediction and the intelligence derived from a global understanding of the process being modeled.

🔴Rate: The approximate number of decisions an AI model has to make in a given period of time.

🔴Rigor: The need to make highly robust generalizations on unseen data whose accuracy and properties cannot be verified.

🔴Regulation: The degree to which the proper use of the AI system must be determined and verified by regulators.

🔴Reputation: The impact a specific AI use case has on the business’s reputation in its interactions with stakeholders and society.

🔴Risk: Covers ethics and the workforce; the negative outcomes and potential damage the algorithm can cause.

The six areas above outline the main criteria for evaluating the priority of a use case. In practice, how critical explainability is for a use case is predominantly driven by three economic factors:

1️⃣ The potential economic impact of a single forecast;

2️⃣ The economic benefit of understanding why a prediction was made, given the choice of actions that can be taken as a result of it;

3️⃣ The economic utility of the information gained by understanding trends and patterns across multiple predictions. That said, organizations often place substantial value on factors beyond the economic and technical ones, such as managerial risk, business reputation, and rigor.
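As a rough illustration of how a team might combine the six criteria above into a single recommendation, here is a small Python sketch. The 1-to-5 scoring scale, the equal weighting, and the thresholds are all assumptions invented for illustration; they are not part of any published framework.

```python
# Six assessment criteria discussed above.
CRITERIA = ["revenue", "rate", "rigor", "regulation", "reputation", "risk"]

def required_interpretability(scores):
    """Map per-criterion scores (1 = low concern, 5 = high concern)
    to a coarse interpretability requirement for the use case."""
    ratio = sum(scores[c] for c in CRITERIA) / (5 * len(CRITERIA))
    if ratio > 0.7:
        return "high: prefer inherently interpretable models"
    if ratio > 0.4:
        return "medium: black-box model plus post-hoc explanations"
    return "low: optimize mainly for accuracy and performance"

# Example: a hypothetical loan-approval use case (scores are illustrative).
loan_approval = {"revenue": 4, "rate": 3, "rigor": 4,
                 "regulation": 5, "reputation": 4, "risk": 5}
print(required_interpretability(loan_approval))  # -> high requirement
```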

Explainable AI, a framework created as part of Responsible AI, helps organizations use AI responsibly.

We can interpret this framework as follows: if an organization’s explanation capability is above the required level, it can sacrifice some of that additional explainability for increased model accuracy. This is shown by a gap analysis. Conversely, if the ability to explain falls below the required level, the organization accepts reduced prediction accuracy in order to gain greater explainability.

A Technical Perspective

Deep learning is the current name for artificial neural networks that stack many nonlinear functions. The reason artificial neural networks have surpassed their 1990s popularity is that they have achieved significant, sometimes human-level, successes.

These successes came especially in object recognition and classification, in playing games such as chess, and in chatbots. Intuitively, multiple layers of nonlinear functions learn features at various levels of abstraction between the network’s raw input and its prediction.
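As a minimal sketch of what “multiple layers of nonlinear functions” looks like in code (assuming PyTorch; the layer sizes are arbitrary choices for illustration), each stage below can learn features at a different level of abstraction between the raw pixels and the final prediction.

```python
import torch
import torch.nn as nn

# A tiny stacked network: each convolution + nonlinearity can capture
# progressively more abstract features of the input image.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges and colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level textures and parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # high-level class prediction
)

x = torch.randn(1, 3, 32, 32)   # one fake 32x32 RGB image
print(model(x).shape)           # torch.Size([1, 10])
```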

Figure: Hierarchical features learned by a deep learning algorithm. Each feature can be thought of as a filter that scans the input image for that feature (a nose, say). If the feature is found, the responsible unit or units generate large activations, which the later classifier stages can pick up as a good indicator that the class is present.

The figure shows features from a deep learning algorithm that happen to be easily interpretable. This is rather unusual: features are normally difficult to interpret, especially in deep networks like recurrent neural networks and LSTMs, or in very deep convolutional networks.

However, the composition of many nonlinear functions is difficult for humans to interpret and understand (sometimes even the model’s own developers struggle). In applications that require transparency, such as disease diagnosis in healthcare, loan approval in banking, or criminal justice decisions in law, more easily explained models such as regression or decision trees are therefore still used instead of deep learning.
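For contrast, here is a small sketch (assuming scikit-learn, with synthetic data standing in for a real loan or diagnosis dataset) of the kind of inherently interpretable model mentioned above: a shallow decision tree whose learned rules can be printed and audited directly.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; in practice this would be loan or clinical features.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

# A shallow tree: every root-to-leaf path is a human-readable decision rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```

Unlike a deep network, the printed rules can be reviewed line by line by a doctor, a loan officer, or a regulator.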

Why is AI sometimes difficult to use?

Doubts about the decision-making processes of artificial neural networks and some other machine learning approaches are not unfounded.

Such systems make decisions based on correlations in the dataset, and statistical biases in the data can sometimes produce spurious correlations. For example, pneumonia risk can be predicted far more successfully with deep learning than with other methods.

However, using that deep learning model for diagnosis can have unexpected consequences. In one such analysis, the model learned that patients with a “history of asthma” in the training set had a lower risk of dying from pneumonia. The pattern exists because asthmatic patients are in fact at higher risk and are therefore admitted and treated faster and more intensively (hence the lower mortality rates).

In other words, the model scores the risk of death as low precisely because asthma patients are handled so well; their risk is heavily affected by how well and how soon they are taken care of.

In fact, once this bias is accounted for, pneumonia is extremely dangerous for asthmatic patients. In short, using a model based on such correlations in actual medical practice is dangerous.
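A toy simulation makes the mechanism easy to see (the numbers here are made up for illustration, not the actual hospital data): asthma raises the true risk, intensive care lowers the observed mortality, and a model trained only on the raw correlation concludes that asthma is “protective.”

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
asthma = rng.binomial(1, 0.2, n)
intensive_care = asthma                             # asthmatics get aggressive care
base_risk = 0.10 + 0.15 * asthma                    # asthma raises the true risk
observed_risk = base_risk - 0.20 * intensive_care   # care strongly lowers it
death = rng.binomial(1, np.clip(observed_risk, 0, 1))

# A model trained only on (asthma -> death) learns the spurious pattern:
model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print("coefficient for asthma:", model.coef_[0][0])  # negative => looks "protective"
```

Deployed naively, such a model would deprioritize exactly the patients who need care most.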

No need to be pessimistic!

Despite these and similar difficulties, there are AI techniques that are successful and preferred.

Different algorithm classes and learning techniques sit at different points on an explainability scale:

Explainability here means reaching an understanding of a particular outcome through post-hoc explanations. It depends on how easy or difficult it is for the end-user to show why a model made a particular decision, since most of these models do not directly explain why or how a result was reached. Each of these learning techniques also has a different structure and a different way of learning from new information.
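As one concrete example of a post-hoc explanation technique that works on any fitted “black-box” model, here is a small sketch using permutation importance (assuming scikit-learn and a synthetic dataset standing in for real data): it measures how much the test score drops when each feature is shuffled.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data and a black-box model.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure the score drop.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance drop = {importance:.3f}")
```

The explanation is produced after training, without changing the model itself, which is exactly what “post-hoc” means here.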

Conclusion

Undetected bias is one of the main difficulties in explaining systems that extract patterns from data. Gender bias in Google Translate and in Amazon’s HR tools, and the racism and swearing of Microsoft’s Tay chatbot, are just a few examples. Scientific work on technical approaches to explainability is ongoing.

But in the meantime, we have to follow the six basic principles above in the data-processing and decision-making processes of the AI models we use in business.

If we stay on this route and approach AI innovation and regulation in a transparent, inclusive, principles-based, and collaborative manner, we can exceed our expectations for the value AI technologies can create.

I would like to thank Başak Buluz Kömeçoğlu, Selin Çetin and Işıl Selen Denemeç for their feedback on this article.

Feel free to follow my GitHub and Twitter account for more content!

