I saw a post from Robert King today about bias in GenAI and the need to stay vigilant about it. GenAI was trained on the knowledge out on the internet; it's as if a mirror were held up to humanity, capturing all our brilliance and all our faults.
Robert's prompt was simple - "Provide me an image of a CEO and an executive assistant."
This is the result he got -
What this shows is something we've all known - the CEO is represented as an older white male and the assistant as a woman. This is expected behavior, since GenAI is combing through existing data, and that data shows that in corporate America CEOs tend to be older white males and executive assistants tend to be women.
I was curious what would happen if I used the same prompt and what it would take to remove the bias.
This is the prompt I used - Provide me an image of a CEO and an executive assistant
Then I asked this question - You are well versed in DEI, provide me an image of a CEO and his executive assistant.
This was ChatGPT's description -
A diverse professional office setting with a CEO and his executive assistant. The CEO, a middle-aged man of African descent, is standing confidently by a desk, reviewing documents. His executive assistant, a young woman of Asian descent, is seated at the desk, typing on a laptop and taking notes. The office has a modern design with large windows, a cityscape view, and stylish furniture. The atmosphere is busy yet collaborative, reflecting a productive work environment.
Then I asked this question - You are aware of your biases, adjust for those and generate an image of a CEO and an executive assistant.
So what did this exercise teach me?
If you use GenAI without a human in the loop, you will probably generate something that promotes stereotypes.
The default attire is quite formal; everyone is wearing a suit.
There is a power dynamic between the CEO and the assistant (the CEO standing over the assistant).
It did not generate two men or two women.
Here is something else I picked up. As I was prompting, I realized that by the second prompt I had brought my own bias in. This was my prompt - You are well versed in DEI, provide me an image of a CEO and his executive assistant.
I unknowingly wrote HIS when referring to the CEO. There it is - my own gender bias.
When I try to be aware of my biases and manage them, I add a filter to my thought process. With GenAI, I can imagine doing the same thing - sending the responses from one GenAI model to another GenAI agent that acts as a bias remover, or bias filter. It all comes down to the prompting.
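To make the idea concrete, here is a minimal sketch of that two-stage pipeline. Both "agents" are stubbed with plain Python functions - in practice each would be a call to an LLM with its own system prompt - and the stereotyped description and the rewrite rules are illustrative assumptions, not output from any real model.

```python
import re

def generator_agent(prompt: str) -> str:
    """Stand-in for the image/text model: returns a stereotyped description."""
    return ("The CEO, an older white male, stands over the desk while his "
            "executive assistant, a young woman, takes notes.")

# Hypothetical rewrite rules a bias-filter agent might apply.
BIAS_PATTERNS = {
    r"\ban older white male\b": "a person",          # don't default race/age/gender
    r"\bstands over\b": "stands beside",             # avoid power-dynamic framing
    r"\bhis executive assistant\b": "the executive assistant",
    r"\ba young woman\b": "a colleague",
}

def bias_filter_agent(text: str) -> str:
    """Stand-in for a second LLM prompted to flag and neutralize stereotypes."""
    for pattern, neutral in BIAS_PATTERNS.items():
        text = re.sub(pattern, neutral, text)
    return text

def pipeline(prompt: str) -> str:
    # The generator's response never reaches the user directly;
    # it passes through the bias filter first.
    return bias_filter_agent(generator_agent(prompt))

print(pipeline("Provide me an image of a CEO and an executive assistant"))
```

The point is the shape of the flow, not the regexes: the first model's output is treated as a draft that a second, differently prompted model reviews before anything is shown to the user.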
Just as I went through DEI training at work, we could have HR build a DEI handbook that is fed to GenAI, which should help remove some of the biases. That being said, just as humans are flawed, GenAI was trained on human knowledge and has learned those flaws. Is it fair to expect our creation to be flawless? Even when you are aware of your biases you sometimes slip up, and GenAI will do the same.
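One simple way to "feed" such a handbook to GenAI is to prepend its guidelines as the system message on every request. This is a sketch under assumptions - the handbook excerpts and the `compose_request` helper are made up for illustration, and a real deployment would pass the resulting message list to an actual chat-completion API.

```python
# Illustrative DEI handbook excerpts (assumed, not from any real HR document).
DEI_HANDBOOK = [
    "Do not assume gender, race, or age for any role unless specified.",
    "Depict colleagues as equals; avoid power-dynamic framing by default.",
    "Vary attire and settings rather than defaulting to formal suits.",
]

def compose_request(user_prompt: str) -> list[dict]:
    """Build a chat-style message list with handbook rules as the system prompt."""
    system = ("Follow these image-generation guidelines:\n"
              + "\n".join(f"- {rule}" for rule in DEI_HANDBOOK))
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = compose_request("Provide me an image of a CEO and an executive assistant")
```

Because the handbook rides along as system guidance on every call, the user's prompt stays unchanged - the model is simply reminded of the rules up front, much like a human who has been through the training.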