Of course they weren't. How could they be, when the AI used wasn't designed for that purpose and couldn't be fed all the necessary input data?
The problem was that I didn't add a clear disclaimer at the beginning of the thread. Something like:
"Here's a half-hour NOT FORENSICALLY CORRECT experiment with MidJourney, to visually dream with the possibilities. No mummies suffered in the process. Come on in and enjoy!"
The truth is that, after a year of non-stop experimenting with AI, I sometimes forget that not everyone who follows me knows my only intention is to experiment with this technology in every possible way and tell you all about it.
I enjoy it, and so do many of those who follow me. Fantasy or not.
Still: could AI be used to generate scientifically correct reconstructions?
Of course. As in so many other fields, AI could be applied here too.
But not with #midjourney: it's not made for this. MJ already works wonders, don't ask that much of it.
We would need a specific AI, trained for this purpose, that could take all the necessary data as input. Among others: examinations of the skeletal remains (CT scan or X-ray tomography), examinations of the preserved soft tissues, anatomical criteria, etc.
Disclaimer (just in case): I'm not an expert in forensic reconstructions, so I can't correctly list all the inputs that would be needed. You have to be careful on Twitter :)
The input data and the results of existing real works would be especially useful for training the model.
To build a sample large enough to feed the model, this could be done with blind tests: tests already carried out today, in which the input data is given to a professional (who does not see the photograph of the person being reconstructed)...
...who then works from the input data alone until producing a 3D render or illustration. Finally, the result is compared with the real photograph of the person, to see whether the process followed was sound.
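The blind-test comparison above can be sketched in code. Everything here is hypothetical: the vectors are dummy stand-ins for real face embeddings, and `blind_test_score` is just cosine similarity between the blind reconstruction and the held-out photograph — a real pipeline would use an actual face-recognition model to produce the embeddings.

```python
import numpy as np

def blind_test_score(reconstruction: np.ndarray, photo: np.ndarray) -> float:
    """Cosine similarity between the feature vector of the blind
    reconstruction and that of the real photograph (1.0 = identical)."""
    num = float(np.dot(reconstruction, photo))
    denom = float(np.linalg.norm(reconstruction) * np.linalg.norm(photo))
    return num / denom

# Dummy embeddings standing in for real face features
recon = np.array([0.9, 0.1, 0.3])
truth = np.array([1.0, 0.0, 0.25])
score = blind_test_score(recon, truth)
print(round(score, 3))  # → 0.992
```

Averaged over many blind tests, a score like this would tell you how well the reconstruction process (human or model) recovers the real face.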
It's not trivial, but a model trained on enough such examples, together perhaps with current diffusion models, could, hypothetically, do a very decent job of face reconstruction.
I don't think the day we see something like this is far off.
And hey, I don't see anyone's work threatened: even with a great leap in automation and time savings, human supervision, touch-ups, adjustments, etc. will still be needed.
Well, let's go!
Now what I promised you: paint and color with @javilop!
Let's rebuild NON-FORENSICALLY reliable mummies!
The tool I used is midjourney.com. Interestingly, it is used from their Discord channel and/or by talking directly to their bot (if you upgrade to one of their paid plans).
For these "reconstructions" it is necessary to use an init image (the mummy). In this other thread I already explained how this could be done.
You will see that they all follow the same scheme.
1️⃣. Ramses II
Younger! Just change the age in the prompt, et voilà!
2️⃣. The Guanche surfer.
You were looking forward to it, I know.
By the way, in the professional reconstructions it seems that he actually had brown hair... although in the English Wikipedia they say: "including brown red hair"... so who knows.
3️⃣. Tollund Man.
Or at least, his head.
4️⃣ Lady Rai.
5️⃣. Menmaatre Seti I.
Tip: You can refine the image by using the image generated in the previous step as the input image. The prompt is the same, but in successive iterations it can acquire new nuances:
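That refinement tip is just a feedback loop: each output becomes the next init image. A minimal sketch of the idea — `generate` here is a placeholder stub, since Midjourney has no public API and you'd do this by hand in Discord (or swap in any image-to-image model call):

```python
# Hypothetical sketch: iterative refinement by feeding each result
# back in as the next init image, keeping the prompt fixed.
def generate(prompt: str, init_image: str) -> str:
    # Placeholder for a real image-to-image call; here we just
    # record the iteration so the loop structure is visible.
    return f"{init_image}+refined"

image = "mummy.png"  # hypothetical starting init image
prompt = "photorealistic facial reconstruction, same pose, same lighting"
for step in range(3):
    image = generate(prompt, init_image=image)
print(image)  # → mummy.png+refined+refined+refined
```

Each pass keeps the overall composition but lets the model add new nuances — stop when an iteration drifts instead of improving.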
I used the same technique for the second image of Lady Rai. My favorite of the whole thread.
Thanks for reading!
Can't wait to see your mummies come to life! Show them to me!
And if you want more curious things, sometimes reliable, sometimes not, sign up to my newsletter. An easy RT to the 1st tweet of the thread would also motivate me 🙏 Thx!
• No more AI plastic skins!
• Enhance EVERYTHING in your image, not only the skin!
• 3 different flavours + easy presets: improve light, level or reality, color grading, etc.
Let's dive in + tutorials + tips 🧵👇
First of all, if you can't wait, here you have the link! AVAILABLE NOW on Magnific & rolling out to Freepik users today!
I'll also randomly grant access to some of you who reply with an interesting message 😘
There's no way Hollywood won't be affected by this.
I created this whole scene in less than 2h using Veo 3 (AI video), Magnific (upscaling), Suno (music, except the first 3s 😉) and CapCut (editing).
The Cambrian Explosion of content has already started!
Full tutorial 👇
1. Idea
I've had this idea (a mood) of mixing a 7-eleven at night and a 🐲 for over 2y now.
The concept came to me then, but it wasn't until now that I've been able to bring it to life visually.
Veo 3 feels like being back in Apr 2022, when DALL·E 2 hit my brain like a truck.
2. Video generation using Veo 3 inside Freepik (not yet available but soon)
I used ChatGPT to craft all the prompts and then did all the video generation inside Freepik using Veo 3.
Something I've learned is that Veo 3 can handle really long and complex prompts, so don't hesitate to use very detailed descriptions to express the vision you want to create.
Example:
"Close-up shot of a pair of hands reaching toward a dusty black tome resting on a low shelf inside a dimly lit 7-Eleven. The book has a worn leather cover with a flaming dragon etched in glowing, fiery lines across the front. Above the image, an unreadable title is inscribed in ancient golden runes. The hands pick up the book slowly and carefully, as if sensing its weight and age. At the edges of the frame, part of a red puffy vest is visible over a faded denim jacket and a plaid shirt sleeve, revealing just enough of the young man’s layered clothing to hint at his presence."