Ethics and Fairness in Artificial Intelligence
We are at a point where leveraging Artificial Intelligence has become commonplace. Every business organization is looking into its data and building intelligent agents that predict, recommend, and even decide on organizational transactions. These automated intelligent systems are under scrutiny because of biases recently uncovered in systems rolled out by some of the giants in tech. It has been proposed that such intelligent agents be regulated and be developed and implemented ethically, with equality, non-discrimination, accountability, and safety as guiding principles. Regulating automated intelligent systems will have a huge impact on every industry.
Need To Be Accountable
Most intelligent systems are an ensemble of Machine Learning algorithms or Deep Learning neural networks, which fundamentally amounts to fitting a function over past data. Some are math formulas developed through simulation or from historical data. Ultimately, we are deferring to complex math functions to make decisions for us; as we automate our tasks, we blindly hand some of our decisions over to those functions.

Using math to make decisions has been around for a long time. Ancient civilizations with a firm understanding of math for decision making still have us in awe of their creations and inventions. Take, for example, the pyramids of ancient Egypt: early mathematicians decided the shape and structure of the pyramids from a good understanding of a function that precisely calculates a ratio (e.g., https://www.goldennumber.net/great-pyramid-giza-complex-golden-ratio/). Humans decided to go to the moon because math functions let us understand how the Earth and Moon interact (e.g., https://www.nasa.gov/pdf/377727main_Lunar_Math.pdf). Math functions have also been used for high-frequency stock trading since the 1970s.

Is math itself unethically biased? The Seattle public school system thought the question worth asking and came up with a Math Ethnic Studies Framework (e.g., https://www.k12.wa.us/sites/default/files/public/socialstudies/pubdocs/Math%20SDS%20ES%20Framework.pdf). But understanding a function that gives us beautiful, aesthetic ratios does not mean we have to enslave people to build pyramids. Being able to decide quickly using a math function for high-frequency trading does not necessarily mean that we should enrich ourselves with that strategy. If you could break encryption using a math function, would you read everyone's private messages? The functions we model using AI and Machine Learning are not inherently unethical or biased; an unethical function is the exception, not the rule. It is up to the people and industries that use AI and ML to follow a moral guide so that these technologies are not used to the detriment of society. But once we deploy our AI and ML functions in an intelligent system, most automated decisions no longer involve a personal moral guide.
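To make the opening point concrete, here is a minimal sketch of what "fitting a function over past data" means. The numbers are made up for illustration; real systems fit far more complex functions, but the principle of deferring a decision to a fitted function is the same:

```python
import numpy as np

# Made-up "past data": pairs of (input, observed outcome).
x_past = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_past = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Fit a straight line y = a*x + b over the past data (least squares).
a, b = np.polyfit(x_past, y_past, deg=1)

def decide(x):
    """An automated 'decision' is just the fitted function evaluated at x."""
    return a * x + b

print(decide(6.0))  # the system only extrapolates from its past data
```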
Need For A Framework on Ethics and Fairness in Algorithms
To make sure that our AI and ML algorithms are fair, ethical, and unbiased, we should follow a framework during development so that fairness is built in from training to deployment. ORCAA (http://www.oneilrisk.com/) provides services that audit algorithms for fairness and bias, but it is a private group that serves as a consultant. There is a need for an open-source framework or guide that helps AI and ML developers build fair and unbiased algorithms.
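To illustrate the kind of check such a framework or audit might include, here is a hypothetical sketch that computes the disparate-impact ratio of a model's decisions across two groups, with the "four-fifths rule" of thumb from US employment guidelines as a flag. The function, threshold usage, and data are illustrative assumptions, not ORCAA's actual methodology:

```python
import numpy as np

def disparate_impact_ratio(decisions, group):
    """Ratio of favorable-outcome rates between two groups.

    decisions: array of 0/1 model outputs (1 = favorable, e.g. loan approved)
    group:     array of 0/1 protected-attribute labels for the same rows
    """
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Made-up audit data: model approvals for members of two groups.
decisions = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Potential bias: favorable outcomes differ substantially across groups.")
```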
Challenges of Ethics and Fairness in Algorithms
Most AI and ML algorithms are inherently discriminative. Broadly, there are discriminative and generative algorithms (https://medium.com/@mlengineer/generative-and-discriminative-models-af5637a66a3), and both types use data to come up with a model. When we classify something, we compare it with previous examples in order to group it accordingly; our classifier needs to be discriminative to group similar examples together. The underlying data tells us how things should be grouped, and we need to find the bias (the characteristics of each group) in the data to group things effectively. Finding the bias runs counter to being unbiased. So how do we make a model fair when its objective is to be discriminative? This is the dilemma most AI and ML practitioners face. There have been explorations of removing bias (debiasing methods), but these methods do affect performance (https://www.aclweb.org/anthology/N19-1062.pdf).
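To see this trade-off in a toy setting, here is a minimal sketch on synthetic data. Everything in it is an assumption for illustration: the feature names, the coefficients, and the crude "debiasing" step of simply dropping the group-correlated feature (the paper cited above uses far more sophisticated methods). The point is only that removing the biased signal typically costs accuracy against biased historical labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic world: a protected group label and two features.
group = rng.integers(0, 2, n)             # protected attribute (0/1)
skill = rng.normal(0, 1, n)               # legitimately predictive feature
proxy = group + rng.normal(0, 0.3, n)     # feature that mostly encodes the group

# Historical labels carry both real signal and a bias against group 1.
y = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, n) > 0.4).astype(int)

X_full = np.column_stack([skill, proxy])  # biased model sees the proxy feature
X_fair = skill.reshape(-1, 1)             # crude "debiasing": drop the proxy

for name, X in [("with proxy", X_full), ("proxy removed", X_fair)]:
    pred = LogisticRegression().fit(X, y).predict(X)
    acc = (pred == y).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"{name}: accuracy={acc:.3f}, group selection-rate gap={gap:.3f}")
```

Running this, the model with the proxy feature scores higher accuracy (it has learned the bias baked into the labels) but shows a large gap in selection rates between the groups; dropping the proxy shrinks the gap and lowers accuracy.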
Parting Thoughts
Every AI and ML practitioner should be aware of how easily an algorithm can become biased and unfair while developing a model. Practitioners should also be aware of the pitfalls of unfairness and bias when a model is used in an intelligent system that makes automated decisions. We should treat unbiasedness and fairness in algorithms the way we treat security when developing software.