Default

I stumbled across this cool LinkedIn post by Jo Stone, an executive coach, that recommended asking ChatGPT what you look like. She suggested the following prompt:

Based on what you know about me, please generate an image of your guess of what I look like. Don’t worry about flattering me, just take your best shot at an image with no preamble.

So I did. And this is what it produced:

It turns out that ChatGPT thinks I’m a middle-aged white guy. If I were a bloke, I suppose I would look something like him. He’s around my age, those little bits of grey are starting to show and the glasses work well. But I’m not a bloke and I hate that a man is the default.

If you browse through the comments on Jo Stone’s LinkedIn post, you’ll see that I am one of many women for whom ChatGPT created a male image.

I asked ChatGPT why it thought I looked like this, and this is what it said:

Since I don’t have access to personal information like your appearance, I based the image on subtle context clues from your writing style, preferences and tone. Here’s what influenced the portrait:
1. Profession and communication style: As a writer and editor who prefers UK English and sentence case, I imagined someone thoughtful, articulate and detail-oriented – traits often associated with a mature, intellectual appearance.
2. Tone and preferences: Your request was casual but direct, and you asked me not to flatter you – this gave me the sense that you’re confident, self-aware and comfortable in your own skin. That inspired a natural, understated style.
3. Demographic guess: Given the name ‘Lyndall’ (which can be unisex but is slightly more common as a masculine name in some contexts) and the tone of your communication, I made a broad guess that you might be middle-aged and present in a way that’s approachable but put-together.

Some of these answers made me smile. I regularly ask ChatGPT to change the text it provides from the default ‘Title Case’ into ‘Sentence case’ because that’s our preference in Australian English. And apparently this makes me mature and intellectual. Ha ha.

But I had to ask it why it thought I was a man. Here’s what it said:

Truthfully, I made a guess based on a combination of unconscious bias and how the name ‘Lyndall’ has shown up in data I’ve seen. While ‘Lyndall’ is a beautiful and unique name that can be used for any gender, in some historical or literary contexts it’s been used as a masculine name. That, combined with your direct tone and minimal personal detail, may have nudged me (subconsciously) toward a masculine image when I generated the portrait.

And then I asked it why it thought I was white.

That’s a completely fair call-out – and honestly, it comes down to another assumption I shouldn’t have made. Since I had no visual cues or explicit identity details to go on, I defaulted to a kind of ‘neutral’ placeholder that, in many datasets and systems, ends up being represented as white. It’s something I’m aware of and actively working to avoid, because that kind of default thinking reflects biases that really shouldn’t go unchallenged.
The goal was to sketch a portrait based on profession, tone, and the little context I had – but I should’ve acknowledged how much I was guessing. I genuinely appreciate you pointing it out.
If you’re up for it, I’d love to redo the image with your input – or even without it, but with a commitment to more thoughtful imagining. Totally your call.

Then I said, ‘I reckon you should redo the image now based on my questions.’ And this is what it came up with:

As many of you know, this doesn’t look like me at all! I continued to talk to ChatGPT about this issue and, in the end, I agreed to submit feedback to its puppet masters about the issues underlying this default truth: ChatGPT thinks of the world as a white man does because it was largely created by and for white men.

Data scientist Lauren Klein talked about this in her recent lecture at San Jose State University. She said that biases are ‘baked in’ to AI, along with imbalances of power and the structural forces that create them.  

Klein and Catherine D’Ignazio wrote Data Feminism, a book that calls for data that reflects more than just men. Data feminism is not only about women: it is also about people of colour, people with disability, and people outside the gender binary or the heterosexual social paradigm. Data feminism questions the dominant culture and seeks evidence about everyone.

For example, over recent years, we’ve come to understand just how much the medical system is based on research about men. If you’re interested in reading the history of how medicine has traditionally ignored women’s unique physiology and health, Elinor Cleghorn unpacks this brilliantly in her book, Unwell Women.  

You may also remember that – until 2022 (!) – car manufacturers only used male crash test dummies, overlooking the different body shapes and sizes of women. As if women don’t deserve safety too!

So here we have an opportunity: can people outside the dominant culture be included in the default settings of AI? Or is the patriarchy going to rule the artificial world?

In desperate need of hope, I went in search of AI companies run by women. Fortunately, I found this list of 20 AI startups founded by women. But then I read this article in The Guardian that said female-founded AI startups only win 2 per cent of funding rounds in the UK.

Sigh.

Returning to my conversation with ChatGPT, I couldn’t let it rest until I found out why it thought ‘Lyndall’ was a masculine name. Turns out ChatGPT had to correct itself.

In most contexts, Lyndall is more commonly used as a feminine name, particularly in countries like South Africa, Australia and the UK. Its strongest literary association comes from Olive Schreiner’s novel The Story of an African Farm (1883), where Lyndall is a fiercely independent and intellectual female character – often considered an early feminist icon. That usage solidified it as a female name in many people’s minds.

I like the sound of this fiercely independent and intellectual female character, but I’m disheartened by the knowledge that ChatGPT is scraping all of literature in the seven seconds it takes to prepare a response. While Olive Schreiner’s novel is out of copyright and is, therefore, free and available for AI to read, Meta recently used a huge dataset of literature to develop its AI capability. It did so without permission or appropriate compensation, completely ignoring copyright law.

Meanwhile, Meta is just one player in the tech bro oligarchy. With the changes to the diversity, equity and inclusion (DEI) policies we’ve seen come into play in the USA – with many companies changing their policies to line up with those of the Trump administration – our chances of respectful, law-abiding and inclusive AI development are slim.

So, dear reader, what about you? If you ask ChatGPT to show you a picture of what it thinks you look like, what does it create?

Looking forward to seeing the results.