AI Speeds Up Our Rendezvous With Complexity
Intelligent systems are augmenting our ability to make decisions. That ability rests on machine learning and deep learning models: non-parametric models that fit non-linear functions to large amounts of data. These models are hard to comprehend and explain, yet we are plugging them into business processes in every industry. Soon it will be hard to monitor and justify the root of a decision made by these models.
Take, for example, a step in a process where a human decides. We can ask the human the reason for the decision, and there is discretion in the decision based on the ethics of the person deciding. The decision might not be consistent or efficient at maximizing whatever benefit follows, but at least somebody can explain its details and we can understand the reasoning behind it. Now take the data behind those past human choices and build a model with AI inside an intelligent system. We optimize it, update it, and train it to maximize the benefit that follows the decision. We lose human discretion as we defer to an intelligent system, and while we might be able to explain parts of the decision-making process, there is always a part we cannot: the model may be too sophisticated for us to understand, or the data behind it may simply not be humanly comprehensible. Now imagine an ensemble of these models inside intelligent systems. The decision process depends on all of them, and we might not be able to explain every step that led to a decision.
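As a minimal sketch of this trade-off (with a synthetic dataset standing in for a hypothetical table of past human decisions), the snippet below trains a non-parametric ensemble on historical outcomes. Its global feature importances are easy to report, but the reasoning behind any single decision is spread across hundreds of trees, which is where explanation gets hard.

```python
# Minimal sketch: replace a human decision step with an ensemble model.
# The data here is synthetic; in practice it would be a table of past
# human decisions (features describing each case, plus the decision made).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical decision records.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A non-parametric ensemble fit to non-linear structure in the data.
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Global feature importances are the easy part to report...
ranked = sorted(enumerate(model.feature_importances_), key=lambda t: -t[1])
for i, imp in ranked[:5]:
    print(f"feature {i}: importance {imp:.3f}")

# ...but the path behind any single prediction is spread across
# hundreds of trees and is not directly readable.
print("decision for one case:", model.predict(X_test[:1])[0])
```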
AI in Intelligent Systems Will Increase The Rate of Our World's Complexity
Do you remember AlphaGo's move 37? (https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/) Even experts were unable to explain the move, and Lee Sedol had to counter it with a move that only an expert could make. That is what an intelligent system can do: it will find the niche, mundane corners of a domain that we would never discover with our human abilities alone. Take the practice of law. Citing other cases in a brief or complaint has long been the norm, and we want good law to be the foundation of a brief or complaint. Law is a complex landscape to navigate because our social structures keep growing more complex, and we create new laws to accommodate complex situations and maintain social order. Add AI in an intelligent system to the mix, and we will discover the niches of the law that are right, inadequate, or lacking. The citation graphs of lawyers' briefs and complaints will grow in complexity as they try to win arguments using niches of law that were undiscoverable before intelligent systems. How about finance, an area that is highly regulated? Decisions there can already be hard to evaluate because of the complexity of the global economic landscape. What if an intelligent system in finance made a move like AlphaGo's move 37, a strategic move that no expert in finance can explain? How would we even fathom the effects of such a move on the global economy?
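As a rough illustration of that growth (the case names and citation edges below are invented for the example), a brief's citations can be treated as a directed graph, and simple measures of that graph climb as an intelligent system pulls more niche authorities into play.

```python
# Rough illustration: a brief's citations as a directed graph.
# Case names and edges are made up for the example.
import networkx as nx

def complexity_summary(graph):
    """A few simple proxies for how tangled the citation graph is."""
    return {
        "cases": graph.number_of_nodes(),
        "citations": graph.number_of_edges(),
        "density": round(nx.density(graph), 3),
    }

# A brief drafted the traditional way: a handful of well-known cases.
before = nx.DiGraph()
before.add_edges_from([
    ("Brief", "Case A"), ("Brief", "Case B"),
    ("Case A", "Case C"), ("Case B", "Case C"),
])

# The same brief after an intelligent system surfaces niche authorities
# that also cite one another, pulling more of the landscape into play.
after = before.copy()
after.add_edges_from([
    ("Brief", "Case D"), ("Case D", "Case E"),
    ("Case E", "Case A"), ("Case D", "Case B"),
    ("Brief", "Case F"), ("Case F", "Case E"),
])

print("before:", complexity_summary(before))
print("after: ", complexity_summary(after))
```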
Complexity Should Motivate Us to Explain AI Models in Intelligent Systems
The world we move in grows more complex every day (entropy increases over time). Intelligent systems can simplify some processes through automated decision making, but to mimic human judgment we need complex models, and complex models may be only partially explainable. We need transparency and explainability from the AI and ML models deployed in intelligent systems so that we can manage the complexity they bring to the domains where they are applied. Beyond being explainable, an intelligent system's judgment, and how it will affect us, has to be something we can trust. Explainability of the models in an intelligent system will help us learn new, insightful things through all the complexity; in AlphaGo's case, the effect of move 37 was Lee Sedol's famous move 78. If an intelligent system's decisions affect our daily lives, we should be able to appeal a wrong decision it makes, and we can only evaluate a wrong decision if we can explain the process behind it.
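One concrete way to get partial explainability is the local-surrogate idea (the general approach behind tools like LIME; the sketch below uses a synthetic model and data standing in for a deployed system): approximate one decision of a complex model with a simple linear model fit in that decision's neighborhood, and read its coefficients as an approximate, appealable account of the decision.

```python
# Sketch of a local surrogate explanation: approximate one decision of a
# complex model with a simple, readable linear model fit nearby.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# Synthetic stand-in for a deployed decision model.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# The single decision we want to be able to explain (and appeal).
instance = X[0]

# Perturb the instance, ask the black box about each perturbation,
# and weight nearby perturbations more heavily.
rng = np.random.default_rng(1)
perturbations = instance + rng.normal(scale=0.5, size=(500, X.shape[1]))
predictions = black_box.predict_proba(perturbations)[:, 1]
distances = np.linalg.norm(perturbations - instance, axis=1)
weights = np.exp(-(distances ** 2))

# Fit a simple, readable surrogate in that neighborhood.
surrogate = Ridge(alpha=1.0)
surrogate.fit(perturbations, predictions, sample_weight=weights)

# The coefficients give an approximate, local account of the decision.
top = np.argsort(-np.abs(surrogate.coef_))[:3]
for i in top:
    print(f"feature {i}: local weight {surrogate.coef_[i]:+.3f}")
```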
Embrace Complexity
Now let's imagine we do not use AI in intelligent systems. We will still march slowly towards complexity, whether we like it or not, because our world grows increasingly complex. We have to adapt our processes and make decisions from data drawn from a complex environment. AI in intelligent systems will help us manage those complexities, but it also exposes us to them earlier rather than later, and at a much faster rate than we could manage as humans. We are finite humans in a world with finite resources. Discovering complexities early so that we can make better decisions with our resources is extremely helpful to our condition, but we might not be ready to deal with such complexity because we are not yet capable of solving the issues it raises.
Deal With Complexity
We will need AI in intelligent systems to deal with the increasing complexity of our business processes across industries. To counter the complexity that AI brings to those processes, we should be able to explain AI in intelligent systems for the sake of managing the complexity of such systems. The explainability of intelligent systems should be everyone's concern, so that we can build social confidence and navigate the complexity of this technology. At the same time, we should diligently evaluate the decisions made by such systems so that we have a handle on their effects and on the new problems they might introduce.