The view from "further away" and path of science
I had our discussion with Professor Toole last week in mind while reading. I believe it was Kenneth who last time brought up the "view from nowhere," which we largely agreed is not a possibility for any one person to possess, as people simply do not and cannot come from nowhere. Human brains produce views that are inherently subjective.
I went in a similar direction as Tim did in his post and see this use of artificial intelligence as an "objective" analyst of data as both inevitable and, largely, good, although I have slightly different reasons. Given that a computer program was, at some point, created by a human, it will of course be subject to some of the biases of the humans who created it. However, there are two ways in which it will be "better" than the alternative. Let us use the COMPAS example again. The first is that the rules for analyzing a situation are set out in generic terms: there is a set of rules the computer must adhere to before any specific case is presented, which eliminates the various biases humans form while judging a particular person.

Secondly, a computer's "mind" looks at data in a very different way than the human mind does. Big numbers and big data are crucial to solving questions of risk like these, but often they are far bigger than our minds can actually conceptualize. We can write and perform calculations with the number 10^23, but we cannot truly distinguish between 10^23 and 10^24 in any intuitive sense, despite these numbers being very far apart. Math allows us to work productively with such numbers, but if a judgment of risk must be made by a human, those numbers are almost guaranteed to be fallaciously weighed. Computers, however, are specifically designed to think this way. Everything is viewed mathematically, and so they do not fall victim to this specific shortcoming of the human mind. So while the inherent flaw remains unsolved, computers, even if created by humans, can eliminate some biases.
I think this pattern broadly resembles the trajectory of "science" and why we consider science a good thing. Johnson talks about the FDA's approval of Ambien and how it failed to account for metabolic differences between men and women, causing an increased risk to the lives of innocent women. This was a failing of our regulatory institutions, and it likely cost lives. But consider that we once drilled holes in people's heads to relieve headaches. The FDA is a great example precisely because it has changed its mind on so many topics in very recent years. Science is highly imperfect in short-term, specific scenarios. The reason we stick to the scientific method is that, in the long term, it finds its way to more and more "correct" findings. It took twenty years to discover that Ambien was inaccurately dosed, but now we know it was. We will likely find something else wrong with it sometime soon, but in the long term, our knowledge base grows closer to the truth, even if it may never get there.
So, while "AI" may not be value-free, it is more value-free than the alternative. While an AI cannot offer us the view from nowhere, it looks from further away than we do, and in doing so, it moves in the right direction.