Remember when I said I wasn’t going to send out a newsletter today?
I lied.
Well, I am on holiday, but turns out I’m not very good at switching off Newsletter Brain, so here we are. Today is a bonus post all about a process I’ve been using to sketch out my fictional characters.
Last week I updated a sketch of Lola Styles, a young detective in Tales from the Triverse, building on an earlier version from a year ago.
I don’t make a habit of including character sketches, partly for time and mostly because it’s good to leave the reader’s imagination to do the hard work, but it’s occasionally a fun extra for readers. Here’s how I go about it…
Metahuman build
I’ve been trying to get better at drawing, but humans are still a major challenge. I rely heavily on reference material, whether that be photographic or digital.1 I can’t afford to hire models, alas, and stock images tend to be fixed in terms of camera angles and expressions.
That’s where Metahuman comes in.
Epic is the games company behind Unreal Engine, which powers a huge number of video games. A couple of years ago they released Metahuman, a free tool for designing realistic characters for use in games and 3D animation. If you’ve ever played an RPG, it’s a bit like the character creation process at the beginning of the adventure.
Here’s the Metahuman interface, with some of my characters shown on the left:
This is Lola Styles:
Faces are built initially using starting templates, which can be mixed and merged together, then manually adjusted. It’s quick and easy, and because it’s running in the cloud you don’t need a supercomputer to make it work.
You can switch between various animation loops and expressions, and adjust the camera.
There is one slight drawback: as far as I know, there’s no easy way to export a high resolution image. The intent for Metahuman is for the models to be imported into Unreal Engine and used in a game or animation context, after all. To get around this I simply grab a screenshot.
There’s a surprising amount of variety available for lighting, poses and expressions:
The lack of a proper hi-res export doesn’t matter too much for what I’m doing, as I’ll usually be drawing over them anyway. I’ve occasionally used screengrabs from Metahuman without much alteration, but I prefer to convert them to my own sketch when time allows.
Overpaint
I bring the image into Clip Studio Paint and begin to sketch over the top. The screenshot from Metahuman goes in as the base layer and I set the layer colour to a light blue:
It’s then about figuring out which details to emphasise and retain. For Lola, a character in her 20s, I didn’t want to capture too much detail and accidentally make her look older, so I kept it fairly minimal:
I use multiple layers to help with any subsequent changes I might want to make. The hair, for example, is its own piece.
For shading I had a cross-hatch layer for the darkest points — her mouth and around her eyes — and then another layer with some soft airbrushing to reveal the contours of the face:
And that was Lola v1, created back in September 2023.
Lola v2
The story in Tales from the Triverse has taken some twists and turns of late, with a 5-year time gap. I thought it would be fun to revisit the sketch and update it to reflect what Lola’s been through.
Back in Clip Studio Paint, I added a new layer and drew in her new hair style: longer, braided, slightly messier, plus some new piercings:
Another layer was used for the facial scar. Keeping this to a dedicated layer made it easier to adjust its transparency, as I didn’t want it to look too raw or extreme.
The end result was Lola v2.
Tech & human-made
I started tinkering with Metahuman in 2022, the same year Midjourney appeared. I was quite enthused about both for a while, but in early 2023 abandoned genAI due to its many associated problems.
As the debate around genAI continues, I’ve continued to really enjoy using Metahuman. It’s a technological assist that is still based on human ingenuity (on Epic’s part) and human choices (on my part). Taking it that step further into Clip Studio Paint and converting it to a hand-drawn sketch is a satisfying process, and the end result is better than I could have done on my own.
It’s the approach genAI should have taken from the beginning: bringing artists into the process, both in terms of copyright and permission and in how tools are developed. Instead of trying to enhance the creative process, the tech corps instead decided to subvert it, disrupt it and take it for themselves.
I find far more satisfaction in the kind of process I’ve outlined in this post: using technology to assist, rather than replace, the creative experience.
Thanks for reading!
Don’t forget the free Sparkle Summit this Friday. I’m taking part in a panel at the end of the day about fiction and Substack which readers of this newsletter definitely will not want to miss. Full schedule is here. Hope to see you there on the day, where you’ll most likely find me lurking down in the comments.
If I ever get round to doing my own comic, I might have to ‘cast’ people in the roles, so that I can use them as photographic reference. Which could be a fun process all by itself.