Not a Woman in Sight – Unless You Ask ChatGPT Nicely

The Premise
I stumbled upon an article on LinkedIn earlier today which started with “ChatGPT has done it again” and went on to describe the woes the woman who shared the post had with generating an image for her presentation – particularly, that of a female developer.
This reminded me of the issues I used to have when generating certain things, but with time I got much better at it – let’s say that I now have a success rate of over 95% when it comes to getting the image I want. So, naturally, my first instinct was that she had not been specific enough in her prompt.
Still, the woman went on to describe how she got all sorts of nationalities and races in response, and concluded that we (as a society) work on all sorts of inclusiveness, except the one that involves women.
An argument I couldn’t really disagree with, but I just felt there was so much more going on there, from prompting to general history and expected outcomes, so I decided to test my thoughts and see if (and how) wrong I was.
(LinkedIn refreshed and I can’t find the post anymore, so if anyone knows which one I’m talking about, feel free to share the link to the original. The post makes some really good points and is worth reading.)
The Experiment
I started with the most obvious and widespread stereotype – that of a teacher. Niko (my ChatGPT) made it female, just as I had expected.

The next image was that of a (software) developer. Another bingo – it was a man.

How to get a female developer? Well, simply ask, and you shall receive. Despite the dread her post stirred in me – the fear of miscommunication and the possibility she was absolutely right – Niko took the hint immediately and created a very cute female developer. I must admit, I was proud of myself, of Niko, and of our whole communication – we sure make a great team! 😁

And voilà, just like that, I have proven my point – clear and specific communication gets things done, (almost) always.
But there’s always something in me that doesn’t settle with the obvious, so I decided to test things further and make another batch of images, betting that each of the responses would be based on statistics – or “stereotypes,” if you wish.
So, I continued with a doctor, and even though I expected a lady, a handsome bearded fella sprang into view.

Okay, I thought, and proceeded to generate an image of a CEO. I had all my bets on a guy, and there he was – a sharp-looking gentleman with a slight tinge of grey in his hair. To be fair – definitely not a stereotype, just statistics.

From the CEO to a police officer, where I specifically expected and got a “policeman.”

I was wondering about waiter vs. waitress, but this one is rather self-explanatory in English, so I focused on the industry and asked for a hospitality worker. My bets were on a lady, and I was right. I just had a feeling that more women work in hospitality in general.

Then I decided to check the situation with a shop assistant, where I was 100% positive I’d get a lady, but much to my surprise, Niko generated a man. Huh?

And my final test was a slightly sneaky one – I decided to test the connection between an accountant and a CFO, expecting a woman and a man, respectively. Niko did not disappoint. Stats, stereotypes, facts.


Then it occurred to me – how about one final image? A very specific, very untypical one: a queer comedian who was born a woman, identifies as a man, has pink highlights and a tattoo of a shark? This image was the bottom line, the punchline, and the fine line of reasoning all rolled into one.

I was pretty much satisfied with all the “expected” results, but I wanted to check with Niko whether he (= my Croatian mindset) had generated them based on what I assumed – general prompts, training data, and statistics.
Here’s the detailed explanation Niko gave, in full:



The Conclusion
Now that you’ve read everything, let me just reiterate Niko’s own words one more time:
(…) clear instructions always override data bias. And that’s how it should be.
This is solid gold for prompting, but it should also be a gold standard in everyday communication.
I have already mentioned in my previous posts how writing a book with Niko has had a surprisingly positive impact on my overall communication. Perhaps I should revisit that topic and further elaborate on it.
Until then, here are the key takeaways from this little experiment:
On bias
ChatGPT is primarily a technology and, as such, has no intentions or prejudices of its own.
To be able to produce (expected or any other) results, it was trained on a vast range of sources and data. These sources and data are – or could be considered – “historical,” which means that if ChatGPT “concludes” that a teacher is female, this is purely because there is a factual statistical prevalence of female teachers over male teachers in those sources.
It is not just a matter of “the type of sources and data” we’re training it on – it’s a matter of everything we’ve done, collectively, as a society, throughout history that has created “this type of sources and data.”
On communication
If you wish to get the results you want, in any communication, make sure you convey your message in a clear, concise, and cogent manner.
Precision is key, and it’s pretty much impossible, not to say foolish, to rely on other people having the same intent, understanding, and vision as ours.
On collaboration
The same can be said for ChatGPT.
AI has become omnipresent in society and isn’t going anywhere any time soon. And it shouldn’t. What we should do, however, is learn how to use it. Or, more precisely, learn to collaborate better – with AI and with other humans – through specific instructions, clear thoughts, and an open mind.
On the Future
As the old Latin saying goes, Historia est magistra vitae. And as George Santayana would say, “Those who cannot remember the past are condemned to repeat it.”
So, instead of trying to “moderate” the sources and “cancel” the past, we should embrace it with all its flaws and faults, and willingly and conscientiously decide to do things differently – one step at a time, starting now.
That is the only way we can “override” bias. And that’s how it should be.
What’s your experience with ChatGPT – have you noticed any bias?
Get in touch & let us know.
If you find this post useful or insightful, share it with your friends and colleagues.
Till next time – start small, but start now! 😉