Is Ethical AI Even…Possible?
What are the implications of biased algorithms, and how might we fix them?

Americans make up less than 5% of the world’s population. Yet incarcerated Americans make up 20% of the world’s incarcerated population.
So how do we address this intensely disproportionate incarceration rate? To make the conviction process supposedly more efficient, US courtrooms have turned to artificial intelligence to calculate criminal risk-assessment scores, which estimate the likelihood that a convicted individual will commit a subsequent crime.
A higher score can contribute to a harsher sentence and/or being jailed before trial. Because AI is trained on historical data to make generalizations about the future, low-income defendants and people of color are more likely to be labeled “high risk.” At the Data for Black Lives conference, Marbre Stahly-Butts, the Executive Director at Law For Black Lives, stated that:
“Data-driven risk assessment is a way to sanitize and legitimize oppressive systems.”
The problem with using AI as a tool to aid life-changing decisions such as prison sentences is that systemic racial biases are further perpetuated rather than addressed at the root causes, such as educational, financial, and employment inequality.
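To make this feedback loop concrete, here is a minimal synthetic sketch in Python. It is not modeled on any real risk-assessment tool; the group labels, arrest counts, and threshold are invented for illustration. The point is that a classifier trained on historically skewed labels reproduces the skew even when the sensitive attribute is never an input feature.

```python
# Minimal sketch: synthetic, illustrative data only (no real tool modeled).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # 0 = majority, 1 = marginalized group
prior_arrests = rng.poisson(1 + 2 * group)  # over-policing inflates arrests for group 1

# Historical "high risk" labels are driven by arrest counts,
# so they inherit the policing bias baked into those counts.
label = (prior_arrests + rng.normal(0, 1, n) > 2.5).astype(int)

X = prior_arrests.reshape(-1, 1)            # note: "group" is never a feature
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 2))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 2))
# Group 1 receives systematically higher scores: the bias rides in
# through the proxy variable, with no explicit group feature needed.
```

This is why “we didn’t use race as an input” is not a defense: proxy variables carry the signal anyway.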
In order to further understand and mitigate this issue, companies such as IBM have been researching how to implement ethical values into AI that can make “morally correct” choices, free of societal bias to prevent discrimination. But how might this be implemented?
1. Clearly outline our values.
Merriam-Webster defines ethics as “the discipline dealing with what is good and bad and with moral duty and obligation.” To implement those values in an algorithm, we would need to state explicitly what “good” and “bad” mean.
Consistency is critical. However, ethics is subjective — there will never be one moral code that is universally agreed upon.
2. Training datasets must eliminate discriminatory bias.
A biased dataset can be useful when the bias reflects real experience that improves predictions, such as the relationship between a doctor’s experience level and their success in diagnosing a certain condition.
However, this must be distinguished from biased datasets that promote practices based on prejudice, such as the criminal risk-assessment score.
According to Lionbridge, there are many different types of data bias. Here are some of the most consequential ones:
- Sample bias: when the data collected does not reflect the model’s use case; for example, when facial recognition algorithms train primarily on white men (a rough audit for this is sketched after the list).
- Observer bias: when, during data labeling, the observer’s subjective views influence their decisions, consciously or unconsciously.
- Association bias: when a dataset reflects cultural norms, such as when a model is trained on data where all doctors are men and all nurses are women. The model may not “know” that women can be doctors and men can be nurses.
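To make the first of these tangible, here is a rough sample-bias audit sketch. The group names and population shares below are illustrative assumptions, not a standard methodology; the idea is simply to compare each group’s share of the training data against its share of the population the model will serve.

```python
# Rough sample-bias check; group names and shares are illustrative only.
from collections import Counter

def representation_gap(samples, population_shares):
    """Each group's share in the data minus its share in the population."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# e.g., a face dataset that is mostly white men vs. a roughly even population
train_groups = (["white_male"] * 700 + ["white_female"] * 150 +
                ["nonwhite_male"] * 100 + ["nonwhite_female"] * 50)
gaps = representation_gap(train_groups, {
    "white_male": 0.25, "white_female": 0.25,
    "nonwhite_male": 0.25, "nonwhite_female": 0.25,
})
for group, gap in gaps.items():
    print(f"{group}: {gap:+.0%} vs. population")  # large gaps flag sample bias
```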
This highlights the changes we must make in our own society. But it is also a warning against using AI to repeat the mistakes of our past.
3. Accountability.
How do we know if we are successful in creating an “ethical AI”? Would it ever be possible to create a measurement that quantifies the morality of a system? The answers to these questions are just as philosophical as they are scientific.
In terms of the algorithm itself, its complexity can mask the reasoning behind a decision and discourage further investigation. We cannot merely accept the output of an AI before it has been proven accurate and moral, which is why transparency is crucial.
Despite the seemingly objective nature of machines, datasets inevitably absorb biases such as racism, sexism, and ageism, because the minds behind the code are themselves subject to those biases simply by existing in today’s society. The burden of proof should rest on showing that a dataset is truly objective, not on others to prove that it is not.
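No single number can quantify “morality,” but narrow fairness metrics do exist and can at least make one slice of the problem measurable. As one hedged example, here is the disparate-impact ratio: the rate of favorable outcomes for a protected group divided by the rate for the reference group (the informal “80% rule” from US employment law is sometimes used as a benchmark). The toy data is invented for illustration.

```python
# Disparate-impact ratio sketch; toy data, illustrative only.
import numpy as np

def disparate_impact(predictions, group):
    """Positive-outcome rate for group 1 divided by the rate for group 0."""
    predictions, group = np.asarray(predictions), np.asarray(group)
    return predictions[group == 1].mean() / predictions[group == 0].mean()

preds = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]   # 1 = favorable outcome, e.g. "low risk"
grps  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]   # 0 = reference, 1 = protected group
print(f"disparate impact: {disparate_impact(preds, grps):.2f}")  # 0.67 here
# Values well below 1.0 mean the protected group gets the favorable
# outcome less often; under the 80% rule, anything below 0.8 is a flag.
```

Metrics like this do not make a system “moral,” but they make one kind of unfairness auditable.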
🌱 Possible Solutions.
There have been efforts, however, to remedy some of the obstacles outlined above:
- Manually curating diverse datasets with intention (tedious and inefficient, because people must sift through and analyze each data point).
- Cleaning a training dataset before use to supposedly “equalize” the data through statistical optimization (one such approach is sketched after this list).
- Software that rates the level of bias in an AI algorithm, though this raises further questions about how to hold that additional layer of software accountable.
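As a sketch of the “equalizing” idea in the second bullet, here is one statistical approach, reweighing in the spirit of Kamiran & Calders: each (group, label) combination is weighted so that group membership becomes statistically independent of the label. This is an illustrative sketch on invented data, not a drop-in fix.

```python
# Reweighing sketch (assumes every (group, label) cell is non-empty).
import numpy as np

def reweigh(group, label):
    """Weight = expected cell probability / observed cell probability."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[cell] = expected / cell.mean()
    return weights  # pass as sample_weight when fitting a model

grps   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # positives skew toward group 0
w = reweigh(grps, labels)
print(w)  # group-1 positives get weight 2.0; group-0 positives get ~0.67
```

Note that the catch from the third bullet applies here too: the reweighing step itself is more code that someone has to audit.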
💡 Takeaway.
Clearly, there are many obstacles to navigate as we implement AI in every pocket of our society, from criminal justice to bank loans. At the end of the day, a truly 100% “moral” machine is unachievable simply because morality itself cannot be universally defined.
But we should never give up. We must approach this problem from all disciplines, cultures, and identities, because it is intertwined with every aspect of our lives.
Ultimately, we need to identify and reduce discriminatory practices in AI, because doing so will also help us do the same within ourselves.
Do you believe we will ever achieve ethical AI, and is it even a goal worth pursuing? If moral machines were possible, would this ultimately be able to help us identify and combat our own bias?
Thanks for reading! See you again soon 👋