The inevitability of AI dominance

     The issue of ethics in the development of computer science and machine learning is not new; it is perhaps one of the most researched areas in the field of ethics in recent years, owing to the sheer pace of advancement AI has seen in the past two decades. Johnson's paper provides a scathing account of the rather naive notion that our society, inherently rife with bias and bigotry, could create computer programs that are free of bias. Yet this raises the question: what is the alternative?

    The development of AI has advanced to such a degree that it has become nearly impossible to artificially correct biases in these systems. During the 90s and early 2000s, machine learning algorithms and datasets were primitive enough that we could artificially adjust the data (for example, by reweighting certain examples) to correct the outcome. This is no longer feasible today because of the prevalence of the internet: when algorithms are constantly absorbing data from the internet, producing outputs, and learning from them, it becomes nearly impossible to correct any biases by hand.
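The kind of dataset adjustment described above can be sketched in a few lines. The snippet below is a minimal illustration, not any particular production system: it computes per-example weights so that group membership and outcome label become statistically independent in the weighted data (the classic "reweighing" idea). The group and label values are made up for the example.

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute instance weights w(g, y) = P(g) * P(y) / P(g, y) so that
    group and label are statistically independent in the weighted data."""
    n = len(groups)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "a" gets positive outcomes more often than group "b"
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# After weighting, the positive-outcome rate is equal across both groups.
```

This is exactly the kind of intervention the paragraph above describes as feasible on small, static datasets, and increasingly impractical on systems that ingest the open internet.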

    However, not only is this inevitable; one could even argue that it is better than the alternative. An algorithm seeks to produce the most correct result from the data it is presented with. This is not unlike what a human (such as a judge) does: we try to make the best decision with the data presented to us. The difference between us and AI is that while we can only process a finite amount of stimulus, an AI's capacity for information processing is bound only by our technology, which improves essentially every year. And while it may be true that both we and AI suffer from societal bias, an algorithm's bias can actually be measured and corrected, whereas our biases are static.

    For example, take Johnson's account of the criminal recidivism algorithm and compare it to the only viable alternative, which is leaving the decision to grant bail or parole to the judge's discretion. When we see that the algorithm unfairly punishes African Americans, we can attribute that mistake to a misevaluation of the data that fed its systems. This means we can be proactive and correct, or at least try to correct, such biases. While this is obviously easier said than done, the fact that we could speaks volumes, especially when compared to a racist judge. A judge is supposed to be impartial, but everybody has biases. This is especially true of racist judges, who suffer from confirmation bias and stereotyping, among other objectively unfair heuristics they can bring to life-changing decisions. Whereas we can fix an AI, we cannot fix an entrenched racist mind. Furthermore, replacing a judge is nearly impossible, so such individuals remain in the criminal justice system unless we have extremely solid proof that they are acting on racist impulses, which is extremely difficult to obtain.
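The audit implied above, checking whether an algorithm's errors fall disproportionately on one group, can be made concrete. This is a hedged sketch with entirely made-up toy data, not the actual recidivism system Johnson discusses: it compares false positive rates (being wrongly flagged as likely to reoffend) across two groups, the kind of measurable disparity that has been reported in analyses of real recidivism tools.

```python
def false_positive_rate(y_true, y_pred, groups, group):
    """Fraction of actual negatives in `group` that were predicted positive."""
    fp = tn = 0
    for t, p, g in zip(y_true, y_pred, groups):
        if g != group or t == 1:
            continue  # only actual negatives in the target group count
        if p == 1:
            fp += 1
        else:
            tn += 1
    return fp / (fp + tn) if (fp + tn) else 0.0

# Toy data: y_true = actually reoffended, y_pred = flagged high-risk
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]

gap = (false_positive_rate(y_true, y_pred, groups, "b")
       - false_positive_rate(y_true, y_pred, groups, "a"))
# A nonzero gap is a measurable, correctable bias -- the kind of
# diagnostic a human decision-maker's reasoning never exposes.
```

The point of the sketch is the asymmetry argued above: this number can be computed, tracked, and driven down for an algorithm, while no equivalent audit exists for a judge's private reasoning.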

    As such, I think the takeaway is as follows. Society is biased, which leads the things we produce, whether algorithms or software, to be biased as well. I agree with Johnson that this must be recognized. What must also be recognized, however, is that the alternative is for a human to decide and act in lieu of the algorithm, and I would argue that the human mind is perhaps the most biased software of them all.

Comments

  1. I agree with the idea that because we are biased, we create things which are also biased. However, I want to propose a third way of looking at the issue, which would constitute a hybrid model. If we take the example of the judge, we find that if we rely solely on his judgment and he expresses racist attitudes, there is no escape from the problem. With an algorithm, as mentioned, we can correct it, but I think we can never achieve neutrality: there has to be some input, and that will always be biased in some way, even if it goes unnoticed for an extended period of time. I believe that to create a more value-free society, we can bring these two approaches together and create hybrid models of decision making, with individuals and algorithms together informing us of what the best outcome would be. This way the racist judge's judgments would be challenged by an algorithm, which could itself be corrected, balancing out the verdicts. I can imagine both of these agents coming together and trying to make our societies more value-free. Although it will never be ideal, I think it can be better, and we as humans can also improve our own judgments through the use of such algorithms. This will be especially true when the people making algorithms used in courts or other public institutions recognise and pay close attention to how the algorithm is created and how it can skew the image of reality. With that, I feel a hybrid model could better many individuals and allow them to recognise their shortcomings.
