AI-generated selfies could be the next Snapchat filters
Like any novel technological development, text-to-image AI models are slowly moving from the R&D stage to the phase of “making weird stuff for social media.”
Case in point is the phenomenon of the AI-generated selfie, which splices your likeness with different artistic styles and themes using machine learning. Think of it like the next Snapchat filter: a quick way to make your face look goofier than normal. Or goofy in a different way.
There are a few ways to make your own AI-generated portrait right now, like the free web tool DrawAnyone. Just head to the site and upload some selfies, let the system process your likeness, and then pick an output style for your new portrait. A fair warning: you’ll probably have to wait a few hours for the site to work through your pictures, and you’ll only get a few free image generations before you’re asked to pay for more.
The output, though, is impressive — and in my experience, uncannily realistic. Using AI to generate selfies feels like coming face-to-face with endless doppelgängers. The pictures look like you but not wholly like you. Sometimes they get your chin wrong, or your eyes. Or maybe you do really look like that? Who knows when an algorithm is drawing your likeness.
Below, you can see a few examples generated by DrawAnyone based on my own pictures and using (from top to bottom) the site’s “occult,” “soldier,” “royal,” and “Studio Ghibli” styles. (It’s not clear how exactly these categories are defined, as the “Studio Ghibli” setting looks nothing like Studio Ghibli and more like a generic digital “anime” painting.)
DrawAnyone is the creation of software engineer Bonnie Pham and her boyfriend, and is based on open-source text-to-image AI model Stable Diffusion. “Neither of us have a background in AI but we’ve managed to piece things together as we go,” Pham told The Verge over email.
Pham says she and her partner created the site using a fork of Stable Diffusion that implements DreamBooth (a fine-tuning method originally created by Google researchers for AI art generators) before making “a lot of modifications to make it faster and to generate better results.” She notes that the pair didn’t upload any additional data to train the model’s different styles — all the output was already latent in Stable Diffusion’s system.
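DrawAnyone’s actual code isn’t public, but the DreamBooth technique Pham describes follows a well-documented pattern: fine-tune Stable Diffusion on a handful of photos so the model binds a rare identifier token to your face, then prompt it with that token plus a style description. Below is a minimal inference sketch using Hugging Face’s diffusers library (my choice of tooling, not necessarily DrawAnyone’s stack); the checkpoint path and the “sks” token are illustrative assumptions.

```python
# Minimal sketch of the DreamBooth pattern, not DrawAnyone's actual code.
# Assumes Stable Diffusion has already been fine-tuned on a handful of
# selfies (e.g., with diffusers' DreamBooth training script) so that the
# rare token "sks" now refers to the user's face.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to the fine-tuned weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-selfie-model",
    torch_dtype=torch.float16,
).to("cuda")

# No extra style training data is needed: styles like "royal portrait" are
# already latent in Stable Diffusion, so a plain text prompt is enough to
# steer the learned likeness into a new look.
prompt = "regal oil painting portrait of sks person wearing an ornate crown"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("royal_portrait.png")
```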
The use of Stable Diffusion shows how the project’s open-source nature allows others to quickly build on its capabilities, a dynamic that its creators often tout and that is fueling the fast development of AI art. Indeed, DrawAnyone isn’t the only project using Stable Diffusion in this way. App developer Lightricks, maker of the popular selfie-editing app Facetune, is also using the model to generate AI portraits. As with DrawAnyone, Facetune’s feature requires users to upload a number of selfies for the system to learn their likeness before picking an output style to generate new images.
(A quick aside: both Lightricks and Pham assured The Verge that all selfies uploaded to their systems are only used to train the AI model that generates users’ images; they’re not used for any other data training purpose.)
What’s particularly interesting about Facetune’s implementation, though, is that it shows you the prompt — or text description — used to create the output. This is notable, as it reveals exactly how these tools rely on the work of real artists to generate imagery.
Below, you can see two examples from the Facetune app based on the styles of M.C. Escher and Jean-Michel Basquiat. Compared to DrawAnyone, I’d say the results are much less realistic and more cartoonish. The system picks up on my general features (I’m white, male, with scruffy hair and a mustache), but it doesn’t often create images that actually look like me.
Other examples from Facetune’s preset descriptions include “intricate color portrait of me in the style of Tom Bagshaw, soft smooth skin, 8k Octane beautifully detailed render” and “me as a gorgeous princess, professionally retouched, muted colors, soft lighting, realistic, smooth face, fully body shot, torso, dress, perfect eyes, sharp focus on eyes, 8k, high definition, insanely detailed, intricate, elegant, art by J. Scott Campbell and Artgerm.”
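Facetune hasn’t said how its presets work internally, but surfacing the prompt suggests each “style” is essentially a prompt template with the user’s learned identifier slotted in. Here’s a hypothetical sketch of that idea; the SUBJECT_TOKEN, preset names, and build_prompt helper are all illustrative, with the template text adapted from the prompts quoted above.

```python
# Hypothetical sketch: a preset "style" as a prompt template. None of this
# is Facetune's actual code; the template text is adapted from the prompts
# the app displays.
SUBJECT_TOKEN = "sks person"  # token a DreamBooth-style model learned for the user

PRESETS = {
    "bagshaw": (
        "intricate color portrait of {subject} in the style of Tom Bagshaw, "
        "soft smooth skin, 8k Octane beautifully detailed render"
    ),
    "princess": (
        "{subject} as a gorgeous princess, professionally retouched, muted "
        "colors, soft lighting, realistic, smooth face, full body shot, "
        "perfect eyes, sharp focus on eyes, 8k, high definition, insanely "
        "detailed, intricate, elegant, art by J. Scott Campbell and Artgerm"
    ),
}

def build_prompt(preset: str) -> str:
    """Substitute the learned subject token into a preset's template."""
    return PRESETS[preset].format(subject=SUBJECT_TOKEN)

# The resulting string would then be fed to the image generator as its prompt.
print(build_prompt("bagshaw"))
```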
To be clear: Tom Bagshaw, J. Scott Campbell, and Stanley “Artgerm” Lau are all living artists. Their work was scraped from the web without their consent by the creators of Stable Diffusion in order to train this AI model. Now, Facetune developer Lightricks is explicitly using styles that these artists honed over decades to create selfies that sell app subscriptions. The current consensus is that this is legal, but it’s easy to see why many artists are angry about how their work is used to power these systems.
In the meantime, though, tools like DrawAnyone and Facetune’s AI selfie mode will continue to proliferate. Pham says that she thinks these tools “will become a popular way for people to share avatars of themselves across social media” and a “fun alternative for people to show off another side of their personality.”