The artificial gaze
- Charley Johnson
- Sep 3, 2023
- 6 min read

Imagine if society — for all its messy, entangled complexity — were one big algorithmic system: what would it be optimized for? What are the implicit objectives that — consciously or not — shape organizations, societal dynamics, and ourselves? I’ve previously singled out efficiency and the pursuit of scale as two such objectives. This essay is about another element of our modern-day optimization function — unrealistic beauty standards — and how artificial intelligence might alter it. Let’s dig in.
Some experts estimate that 90% of all content on the internet will be artificially generated within a few years. Gartner estimates that 30% of marketing content will be AI-generated. One thing we know is that these images will exacerbate existing social biases and propagate stereotypes. Bloomberg recently conducted research using Stable Diffusion and found that the text-to-image generator amplifies stereotypes about race and gender. Here are just a few of the findings:
- Across genders, every high-paying job (e.g., architect, CEO, lawyer, engineer, doctor) was dominated by subjects with lighter skin tones, while subjects with darker skin tones dominated the results for prompts like ‘fast-food worker’ and ‘social worker.’
- Moreover, men with lighter skin tones constituted the majority of every high-paying job. Women were not only underrepresented in high-paying occupations, but also overrepresented in low-paying ones.
These results don’t simply double down on problematic stereotypes; they’re also inaccurate:
- 3% of the images returned for ‘judge’ were women, whereas in reality, 34% of U.S. judges are women.
- 70% of the images generated for ‘fast-food worker’ were of people with darker skin tones, while 70% of U.S. fast-food workers are actually white.
- 68% of the images generated for ‘social worker’ had darker skin tones while, in reality, 65% of U.S. social workers are white.
Another thing we know is that these images will contort already unrealistic beauty standards. The Bulimia Project, an eating disorder awareness group, experimented with Midjourney and found that 40% of the AI-generated images it produced depicted distorted, unrealistic body types. When the Bulimia Project used the prompt, “The ‘perfect’ male body in 2023,” Midjourney returned images like this:

Now, I don’t know how seriously to take these images. Hell, I live in Los Angeles, where you can find some overly engineered body types, and these men would stand out even here. When I look at them, I don’t feel the need to hit the gym or chug a protein shake; I roll my eyes. Nor do you see images like these in mainstream marketing campaigns, so it’s possible we’ll all laugh them off and go about our lives. But just as social media and artificial beauty filters have encouraged people to look like their filtered selves in real life, images like these offer a glimpse at how norms and standards might evolve in a world saturated by synthetic media. They might represent the extremes, but there is insight to be found in the outliers.
As the standards contort and change, so too will the work required to achieve and maintain these standards — or what Elise Hu calls “appearance labor.” In her great new book, Flawless: Lessons in Looks and Culture from the K-Beauty Capital, Hu argues that
“We don’t measure the time, energy, and effort put into skincare, fillers, dental work, hair straightening or coloring and so on. But that should not obscure the fact that appearance labor does require money and time: researching and purchasing products, scheduling and attending appointments, regimenting your body.”
Hu is writing about Korean beauty culture and the societal structures that make this work “both a choice and not a choice.” Right, it’s not a choice because appearance labor is a necessary part of participating in our social and economic system. As Hu puts it:
“As long as particular beauty ideals persist, and as long as class stability and economic and social success are dependent on meeting the standard, it is only logical to put in the work of appearance labor.”
As beauty standards evolve, and algorithms become optimized for ever more unattainable conceptions of beauty, the work required to meet them will change. Exactly how is hard to predict, but it will likely require more time and money. It’s also not hard to imagine that as culture becomes suffused with artificial images, it will become harder to resist appearance norms. The more the images above pass for normal, the more political and fraught it will become to resist their general aesthetic.
These tools propagate bias and stretch the collective imagination of what a fellow human could look like in meatspace — but unfortunately, they are also being used to render social inequities invisible. Rather than addressing gaps in diversity, equity, and inclusion in the real world, a number of companies are starting to create artificial representation in their marketing campaigns. For example, Levi’s is now using AI-generated models in its marketing campaigns to “aid in the brand’s representation of various sizes, skin tones, and ages.” Amy Gershkoff Bolles, Levi’s global head of digital and emerging technology strategy, explained the shift this way:
“We are excited about a world where consumers can see more models on our site, potentially reflecting any combination of body type, age, size, race and ethnicity, enabling us to create a more personal and inclusive shopping experience.”
They aren’t alone; there’s actually quite a market for this. Indeed, software firms like LaLaLand and Deep Agency were started for this very purpose. Michael Musandu, the founder of LaLaLand, says he started the company because he couldn’t find models that looked like him growing up, explaining, “Any good technologist, instead of complaining about a problem, will build a future where you could actually have this representation.” But this is an insane sentiment — representation that exists only in digital form isn’t representation. Using the images of others to create a synthetic Black model rather than hiring a real Black person amounts to cultural appropriation and exploitation. This raises a whole host of practical and ethical questions, like:
- Who will own the rights to the images used to train these systems?
- Who can make money off the use of these images?
- Will we actually accept this as a kind of representation?
- Will we compare ourselves to strangers even if those strangers are synthetic?
- Are we to assume that creating synthetic models is actually easier than finding real-life models who are more representative?
So what to do?
If we want to reform a system, we need to identify its — often invisible — optimization function, and then change it. In ‘Bigger isn’t better; it’s harmful,’ I offered subsidiarity as a substitute objective for scale thinking and efficiency, and transformational justice as a mechanism for getting there. So what might we replace unrealistic beauty standards with? Regardless of how synthetic images shape social inequity and appearance labor, Elise Hu offers three ideas to help us turn the corner.
The first is embodiment. Hu sees our entanglement with unrealistic beauty standards as “a struggle for self-determination and our claim to bodily and spiritual integrity,” and I couldn’t agree more. She writes,
“I’ve come to see that the care must be predicated on a reconnection with ourselves. In considering our bodies as sites of work, I realized we can also be alienated from ownership of their labor, and for similar reasons.”
I would add that the project of unrealistic beauty standards is part of the techno-utopian ideal that seeks to devalue and disembody our humanity. Jaron Lanier calls this ideal an “antihuman approach to computation,” in which “bits are presented as if they were alive, while humans are transient fragments.” Reclaiming our humanity starts by embodying it.
The second idea Hu writes about is mutuality. We need to recognize that individuality doesn’t get us where we want to go. The trick of succeeding in our current system, as Hu writes, is that “winning through self-optimization in a hyper capitalist system is a precarious way of life for those at the top. And it relies on the aspirations of the underprivileged to give it power.” This means the only way to change the terms of ‘success’ (e.g. what it means to be beautiful) is through collective action. If we each stop aspiring to what is sold to us from above, we can begin to dismantle beauty standards. Our individual actions shape community expectations. As Hu writes,
“Everything I do to make myself look individually ‘better’ affects the expectations within my community for how we should look. I could feel better about my appearance by Botox-ing again, maybe, but it compounds the problem for everyone else.”
In other words, the only way to shift societal norms is to be concerned with the collective and act in a way that recognizes the responsibilities we have to others.
The third idea — and for Hu, this is the most important one — is worthiness. As Hu beautifully writes, “If you ask me what my dream is, it’s not for everyone to believe they’re beautiful but instead to believe they are worthy, flaws included.” So we need to abolish these underlying societal standards and, by extension, the idea that our worth is tied to something outside of us.
Solutions that amount to ‘cultivate new social norms’ can often feel unsatisfying. But striving to achieve an unrealistic beauty standard is arguably more unsatisfying than participating in a cultural shift away from those standards. Identifying the norms of our optimization function is the first step in contesting them — and eventually replacing them with something else.