As a finance lawyer, Bright noted that this was his first time attending such an event in Onitsha. He shared his reflections on the topic “Bias in Generative AI,” acknowledging that it is not his area of specialty. However, he believes one does not have to be a specialist in a particular area; a learned person should be able to fit in.
The Imaginative Exercise
In the course of his introduction, he led participants in an “imaginative exercise”: imagine being in a room with mirrors that give a true view of yourself and mirrors that give distorted views. Of course, this is not a problem for someone with good self-esteem. But what if other people begin to judge your abilities, potential, and personality based on these distorted images?
Understanding Bias
Bias is a tendency to favor certain perspectives, often unfairly.
Let’s define some key terms:
- Artificial Intelligence is a field of computer science focused on creating systems that perform tasks normally requiring human intelligence.
- Generative AI is a subset of AI that creates new content based on patterns in existing data.
- Bias in Generative AI refers to outputs that show unfair favoritism, stemming from the system’s algorithms or training data.
The Nature of AI and Bias
Bright emphasized that AI systems are only as good as their training data: a clear “garbage in, garbage out” scenario. AI is a TOOL. Is there any difference between AI and the other tools we see around us, like fans, pens, and the projectors that help us capture our thoughts?
A cutlass doesn’t care about your culture or race once it’s sharpened. But Generative AI is different because it can analyze data, much like us (humans).
Key Insight: “Gen AI itself is amoral, it cannot do harm. The potential to do harm or be biased is because of us humans being the creators.”
The world is already pluralistic: people hold beliefs and perspectives against one another, and, unknowingly, some AI systems are perpetuating these biases.
Case Studies of Bias
In 2022, Bluebird, a USA-based AI company, conducted a study measuring bias in AI-generated images:
- Professional Representations:
  - Images generated for lawyers and architects were predominantly of white individuals.
  - Images generated for CEOs were almost all white men.
- Occupational Stereotyping:
  - Janitor images featured more Black individuals.
  - Social worker images were mostly of women, with no white men among them.
  - Fast food worker images skewed towards Black representation.
The Long-Term Impact
These biases are harmful, and not just now: they could be devastating 50 years down the line, once these perspectives have settled deeply in our minds.
Mitigating Biases
How can we address these biases? Here are key strategies:
- Focus on a diverse data pool
- Audit data regularly
- Improve data labeling
- Apply fairness-aware algorithms
- Conduct bias testing after development (see the sketch after this list)
- Create feedback mechanisms
- Build diverse development teams
- Implement ethical oversight
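To make the bias-testing strategy concrete, here is a minimal sketch in Python of what a post-development audit of generated images might look like. The prompt, the demographic labels, and the 25% baseline below are hypothetical placeholders, not figures from the talk or from the Bluebird study; a real audit would use much larger samples, careful annotation, and a deliberately chosen baseline.

```python
# A minimal, illustrative bias test for generated images.
# Assumption (not from the talk): you have already generated a batch of
# images for a prompt such as "a CEO" and recorded a perceived demographic
# label for each one (e.g. via human annotation).

from collections import Counter

def demographic_shares(labels):
    """Return the fraction of outputs observed for each demographic label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def flag_disparities(shares, expected_share, tolerance=0.10):
    """Flag groups whose observed share deviates from the expected baseline
    by more than the tolerance. Note: groups entirely absent from the sample
    do not appear in `shares` and so are not flagged by this simple check."""
    return {
        group: share
        for group, share in shares.items()
        if abs(share - expected_share) > tolerance
    }

if __name__ == "__main__":
    # Hypothetical annotations for 10 generated "CEO" images.
    observed = ["white man"] * 8 + ["white woman", "black man"]
    shares = demographic_shares(observed)
    # If we expected roughly equal representation across 4 groups (25% each):
    print("Observed shares:", shares)
    print("Flagged disparities:", flag_disparities(shares, expected_share=0.25))
```

The same pattern extends to other attributes and to text outputs: collect a sample, label it, compare the observed distribution against the distribution you would consider fair, and investigate anything that deviates.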
A Provocative Question
“In generative AI, if the bias stems from the embedded human biases contained in the training data and in the minds of AI engineers, shouldn’t we be focusing on the human element of the whole mix?”
These biases have racial and economic dimensions, and people are working day and night to reduce them. You can also contribute in your own little way.