AI & False Consciousness? - Scarlett

Mostly, I agree with Johnson’s argument that even though AI presents itself as an objective agent, it is still entrenched in human biases, socio-political values, and ethical concerns. I also agree with many other blog posts that argue for the usefulness and inevitability of AI in decision-making. But I would like to reflect on some other possibilities if we do accept AI fully into our courts and our daily lives.

  1. The “persona” that developers have given AI is that of an objective agent. Public attitudes toward AI contributing to decision-making are therefore often positive, as people want to believe in something that is unprejudiced, or at least sounds so. Especially in the current political environment, where there is widespread distrust of prejudiced judges and flamboyant politicians (or people of high authority or socio-economic status in general), the public tends to find AI more trustworthy: just look at all the data and quantitative evidence that won’t lie to us the way people do! If risk-assessment programs like COMPAS are integrated into our courts today, there are two ways this could turn out. First, embedding an AI in the decision-making process gives the judge less leeway to make prejudiced decisions; judges would feel the psychological weight of the AI assessment, since they would not want to stray from the “trustworthy objective agent” in the public’s eye. Second, the public might live in a “false consciousness,” believing our courts are objectively monitored by an AI system; they would stop being skeptical of judicial decisions the way they are now, and allow the AI to keep spreading the prejudice entrenched in its algorithms. Data can be deceiving when it is manipulated to produce certain outcomes, and the public’s trust in the objectivity of AI could be exploited by the courts, by politicians, or by any body with an agenda to deepen existing biases.
  2. I think some of Johnson’s arguments about the subjectivity embedded in algorithms resonate with Marx’s criticism of the “illusory political community.” Marx argues that man lives a “double existence”: he is a communal being in the political community, but a completely egoistic, self-serving individual in civil society (Marx, 34). While Marx regards the political community as spiritual and holy, like religion, he treats man’s self-serving behavior in civil society as a profanity that divests him of his status as a real “species-being” (34). Because of this split between political and civil life, political individuals who also belong to egoistic civil society live under the illusion that they are equal and free. This points directly to Johnson’s worry that people are too deeply affected by the value-free ideal of algorithms and stop trying to dismantle entrenched biases. However, I would also argue that such an “illusory” value-free ideal of algorithms is almost inevitable. Our natural desire for equality and less prejudice prompts the development of AI systems that seem to promise greater transparency and objectivity. So consulting data in a way we believe to be more objective is, in my opinion, an inevitable path in human development, which means we will have to live through some version of the illusory ideal Johnson describes. I myself constantly question the objectivity of the things I believe: is this true, or am I just deceiving myself to feel better? In the end, the answer is really unknown.
  3. Lastly, I would like to bring in a little insight from cyber-security and information sharing. I worked at ByteDance this past summer on an AI-Care for Depression algorithm intended to be integrated into TikTok. Basically, the algorithm detects the content you usually view, infers your mental state from it, and then specifically recommends healing or soothing videos to relieve your distress. This is highly intrusive on personal privacy, but one could also argue that the algorithm stands on the moral high ground of caring for our depressed population. I think this makes a good discussion point, as it relates to Johnson’s argument that “ethical values have a legitimate and necessary role to play in guiding scientific inference because they establish confidence thresholds for ultimately accepting or rejecting a given hypothesis or prediction.” I would love to hear Johnson’s opinions on this and on the ethical issues around information sharing.
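To make the ethics concrete, here is a minimal sketch of the *kind* of pipeline described above: infer a distress score from viewed content, and switch the feed once the score crosses a confidence threshold. This is not ByteDance’s actual system; every tag name, weight, and threshold here is an invented assumption, and the threshold line is exactly where Johnson’s point about ethically chosen confidence thresholds bites.

```python
# Illustrative sketch only -- all tags, weights, and the threshold
# are invented; the real AI-Care algorithm is not public.

DISTRESS_WEIGHTS = {
    "sad-music": 0.8,      # assumed weights for illustration
    "breakup": 0.6,
    "late-night-vent": 0.9,
}

def distress_score(viewed_tags):
    """Average the assumed distress weight of the tags a user viewed."""
    if not viewed_tags:
        return 0.0
    return sum(DISTRESS_WEIGHTS.get(t, 0.0) for t in viewed_tags) / len(viewed_tags)

def recommend(viewed_tags, threshold=0.5):
    """Switch to 'healing' content once inferred distress crosses the
    threshold; otherwise leave the regular feed untouched. Choosing this
    threshold is an ethical decision, not a purely technical one."""
    if distress_score(viewed_tags) >= threshold:
        return "healing-content"
    return "regular-feed"
```

For example, `recommend(["sad-music", "breakup"])` yields `"healing-content"`, while a user viewing only unrelated tags stays on `"regular-feed"`. Lowering the threshold flags more users as distressed (more intrusion, fewer missed cases); raising it does the opposite, which is precisely the value-laden trade-off.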
