What explainable AI can teach us about good policies
Imagine that you are in court, having to make a case. Your license to continue operating as a business is on the line. You are painfully aware that there is a large, complex legal framework consisting of many (many) laws, regulations, policies and accepted practices that you need to navigate carefully. Your evidence is mixed, of various types, and has both relevant and irrelevant elements (and you are not sure which is which). Your interests and exposures span several jurisdictions. You want to do the right thing, you believe you are doing the right thing, but now you need to prove it.
Other than losing sleep, what do you do?
Many would suggest that, along with calling a really good lawyer, you should press the magic AI button.
But this button brings concerns. The answer can’t be a “black box”: we will have to explain how we are making our case, what evidence we are using, and how it supports our position in the context of the relevant regulations.
This kind of scenario makes the need for “explainable AI” evident: an approach that aims to make the decision-making processes of AI systems understandable and transparent to humans.
Towards the end of 2024 we were working with a team of experts who were building a proof of concept that made good use of explainable AI for a client (and in fact, we hope, the rest of the world...). Along the way we used a version of this courtroom scenario to help explain how we were using AI, why it needed to provide a chain of evidence, and why the answers needed to be “explainable”.
The experts from PyxGlobal that we were working with are ahead of most on explainable AI. They understood that there are issues around IP, commercial confidentiality and rights of access. To continue our courtroom metaphor: we will have to prove that we have the right to use the evidence our case rests on and that we gathered it legally, and (depending on the nature of the trial) we may even reserve the right not to disclose some of the evidence we have gathered.
This work (which I'll explain in another post, and which we are very much looking forward to continuing in 2025) taught me many things. One of them is that the rigor we demand of AI - that the answers it provides should be explainable, with a chain of evidence (citations and the like) that we can explore and test - applies just as well to each and every “system” that reaches determinations and makes decisions.
I think we should expect “explainable” from every system where it isn’t immediately evident why a decision or conclusion was reached. This is not limited to technology systems. We should also expect explainable policy decisions, explainable governance decisions, explainable regulatory decisions, explainable human decisions.
Good designs for all such systems, be they human, technology-assisted, or fully autonomous, should consider how their logic can be explained.
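To make that concrete, here is a minimal sketch (illustrative only; the names and fields are my own, not anything from the PyxGlobal work) of what a decision record that is “explainable by design” might look like: the determination travels with its reasoning and a chain of evidence, including the basis on which each piece of evidence may be used.

```python
# Illustrative sketch: a decision that carries its own chain of evidence,
# so it can be explored and tested after the fact.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    source: str    # where the evidence came from (document, sensor, witness...)
    citation: str  # how to locate it again, e.g. a clause or page reference
    rights: str    # the basis on which we may use it (license, consent, etc.)


@dataclass
class Decision:
    determination: str                # the conclusion the system reached
    reasoning: str                    # the logic, in terms a reviewer can test
    evidence: List[Evidence] = field(default_factory=list)

    def explain(self) -> str:
        """Return the determination together with its supporting evidence."""
        lines = [f"Determination: {self.determination}",
                 f"Because: {self.reasoning}"]
        lines += [f"- {e.source} ({e.citation}); basis for use: {e.rights}"
                  for e in self.evidence]
        return "\n".join(lines)
```

The point of the sketch is not the particular fields but the shape: the explanation is part of the decision, not something reconstructed later.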
Get Real
Sometimes, of course, we might require the ability to explain after the event, not before the decision is confirmed. I want my automatic braking system to brake, not to ask whether I think it should brake based on the evidence of an impending crash, presented to me in compelling logic on the dash. By all means (in fact, ‘please’) brake now, but let me have the chance to ask questions and examine the evidence later.
Whether we demand explanations before a decision is confirmed, or as part of exploring a decision that has already been made, we should run the “explainable” ruler across all manner of life-impacting decision systems. For example:
- Licenses
- Insurance
- Loans
- Citizenship
- Policy and regulatory determinations
But you can't handle the truth...
Then there is, of course, the reality that complex systems are too complex for most of us to understand. I think that’s true; in fact, I think it’s true of all systems and all of us. Think of how many PhDs and lifetimes you would need to “understand” the full explanation of an airplane autopilot, a car’s cruise control, or even a mobile phone notification system.
But we don’t need the “whole truth”, in the sense that we don’t need to understand everything everywhere, all at once. Explainability and understanding have the quality of sufficiency about them. We don’t need all knowledge; we need enough knowledge to satisfy our needs, and we need to know that more is available if we want to dig (and learn) further.
Another angle on “the truth” is that we should still expect that AI systems - indeed all systems - can make “mistakes”. The fact that they can and should explain their logic doesn’t make that logic right; it makes it explainable. Facts can be omitted, or selectively chosen, to explain an action. And as (at least) one judge memorably explained to a defendant, your defense explains your actions; it doesn’t justify them.
So should we apply the rigor we expect of AI - that it explain itself - to other systems? I certainly think so. What do you think?