Back to Humanity is about getting people to embrace humanity in a high-tech world. Today, we’re going to talk about the implications of advances in computer-generated content. If you want to read more content like this, you can subscribe for weekly posts on culture, technology, and travel.
Generative artificial intelligence models—like DALL-E and GPT-3—can create text and images from nothing more than a short prompt. For example, if you tell DALL-E to create an image of “an astronaut riding a horse in a photorealistic style,” it will generate renderings from scratch like this one:
Liam Porr recently used these technologies to create a well-trafficked blog that duped tens of thousands of people into believing its computer-generated posts were written by a human. In The New Normal: The Coming Tsunami of Fakery, Grandy recounts many similar examples of AI’s deceptive skills, from releasing computer-generated tracks on Spotify to impersonating people on LinkedIn.
These algorithms are advanced and improving at an exponential rate; we urgently need to think about how humanity should respond to a world in which computers create like humans.
Social Media
In the coming years, social media platforms could experience a flippening where computer-generated content gets more engagement than human-generated content. As I wrote in Quitting the Internet, algorithms will not only choose what you see, but create it:
The second trend is artificial intelligence moving from recommending content to automatically creating it. OpenAI’s GPT-3 and DALL-E are already scarily good at manufacturing text, images, and videos; and companies like TikTok are so good at determining users’ preferences that they have an entire generation completely addicted to their Chinese war machine. In the near future, companies will combine these technologies so that artificial intelligence will create the most addictive content for you, without any human intervention, on the fly.
Simultaneously, a growing percentage of the content that isn’t created by the corporations’ algorithms will be made by someone else’s, and many of those actors will be malevolent. As I wrote in Keep Modernity, Exit the Metaverse, we’ve already seen countries and organizations effectively use social media for nefarious purposes:
The Digital Maginot Line beautifully illustrates many of these examples: Facebook becoming a government’s primary weapon for genocide; Russia’s interference in the 2016 US Election; ISIS’s effectiveness in recruiting terrorists online. Organizations will always take advantage of new tools to gain power; however, we are currently ill-prepared to handle these attacks. Until people, companies, and countries learn how to use these tools, we should take specific precautions: you shouldn’t give a loaded gun to a four-year-old.
Now, they will have the equivalent of a social nuclear weapon that they can use to impose their will on the world.
Given these realities, companies need to institute a know-your-customer (KYC) process so that individuals know who (or what) created the content they’re engaging with. Even if you don’t use social media, you should care about these companies’ policies because the digital realm will continue to massively influence society whether or not you exit the metaverse.
Creators
Long-form content such as novels and films will take longer to transition to being entirely machine-made, but many content creators will soon begin to use algorithms during the creative process: essayists will pass their writing to algorithms for editing; novelists will train GPT-3 on their previous work—so that it masters their voices—and then co-write books with it; animators will have DALL-E create fifty renderings of a character, test them all online, and spend their time refining the most popular one.
These changes may improve content, just as CGI made it possible to create fantastical films; that said, the magic of art is in its constraints—we’re in awe of the David because it was created by a man with a chisel—and we should place as much value on how something is created as on the end result itself.
We should start applying conscious consumerism to the content that we engage with: just as we pay a premium for hand-crafted leather goods, we should pay a premium for media created without algorithmic intervention. And just as there is a process for certifying organic food, we should establish organizations and regulatory bodies to verify that those who claim to create work untainted by algorithms are representing themselves honestly.
While audiences should vote with their dollars and choose humans over computers, creators should develop in ways that are hard for computers to replicate: Substack writers should grow thriving communities; novelists should enhance book-reading experiences; and movie directors should create more compelling ways for the audience to interact with the cast.
Man v. Machine
I’ve always thought that contact with an alien civilization could break down some of the barriers between nations and help unite humanity, but technology is increasingly playing that role. Modernity has tasked us all with the responsibility of choosing between humans and machines: Do you want to pay a writer for his heartfelt work or a corporation for its cold compositions? Do you want to give your attention to a director showcasing his range, sensitivity, and mastery, or to a movie-maker who spends his time editing a machine’s creation?
We need to become more conscious of what we consume, and we need to be ready to boycott the companies that don’t give us the tools to make mindful decisions. We need to choose man over machine, forever.