Of course they weren't. How could they be if the AI used was not designed for that purpose and could not be fed with all the necessary input data?
The problem was that I didn't add a clear disclaimer at the beginning of the thread. Something like:
"Here's a half-hour NOT FORENSICALLY CORRECT experiment with MidJourney, to visually dream with the possibilities. No mummies suffered in the process. Come on in and enjoy!"
The fact is that, after a year of non-stop experimenting with AI, I sometimes forget that not everyone who follows me knows that my only intention is to experiment with this technology in every possible way and tell you all about it.
I enjoy it, and so do many of those who follow me. Fantasy or not.
Still: could AI be used to generate scientifically correct reconstructions?
Of course it could. As in so many other fields, AI could be applied here too.
But not with #midjourney: it's not made for this. MJ already works wonders; don't ask that much of it.
We would need a specific AI, trained for this purpose, that could receive all the necessary input data. Among others: examinations of the skeletal remains (CT scan or X-ray tomography), examinations of the preserved soft tissues, anatomical criteria, etc.
Disclaimer (just in case): I'm not an expert in forensic reconstructions, so I can't correctly list all the inputs that would be needed. You have to be careful on Twitter :)
The input data and the results of existing real works would be especially useful for training the model.
This could be done with blind tests, so that there would be a large enough sample to feed the model: tests that are already carried out today, in which input data is given to a professional (who never sees the photograph of the person they are trying to reconstruct)...
...and who works with only that input data until they produce a 3D render or illustration. Finally, the result is compared with the real photograph of the person, to check whether the process followed was correct.
It's not trivial, but a model trained on enough such examples, together perhaps with current diffusion models, could, hypothetically, do a very decent job of face reconstruction.
I don't think the day we see something like this is far away.
And hey, I don't see anyone's work threatened: even with a great leap in automation and time savings, human supervision, touch-ups, adjustments, etc. will still be needed.
Well, let's go!
Now what I promised you: paint and color with @javilop!
Let's rebuild NON-FORENSICALLY reliable mummies!
The tool I used is midjourney.com. Interestingly, it is used from their Discord channel and/or by talking directly to their bot (if you upgrade to one of their paid plans).
For these "reconstructions" it is necessary to use an init image (the mummy). In this other thread I already explained how this could be done.
You will see that they all follow the same scheme.
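To give you an idea of that scheme (the URL here is just a placeholder and the wording is only an illustrative example, not necessarily the exact prompt used for each mummy), a generation looks roughly like this:
/imagine prompt: <URL of the mummy photo> photorealistic portrait of an ancient Egyptian pharaoh, 90 years old, detailed face, studio lighting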
1️⃣. Ramses II
Younger! Just change the age in the prompt, et voila!
2️⃣. The surfer Guanche.
You were looking forward to it, I know.
By the way, in the professional reconstructions it seems he actually had brown hair... although the English Wikipedia says "including brown red hair"... so who knows.
3️⃣. Tollund Man.
Or at least, his head.
4️⃣. Lady Rai.
5️⃣. Menmaatre Seti I.
Tip: You can refine the image by using the image generated in the previous step as the input image. The prompt is the same, but in successive iterations it can acquire new nuances:
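For example (placeholder URLs, purely illustrative): generate a first image with /imagine prompt: <URL of the mummy photo> photorealistic portrait of pharaoh Seti I, then grab the URL of the image MidJourney just produced and run /imagine prompt: <URL of the generated image> photorealistic portrait of pharaoh Seti I again. Same text, new init image, extra nuance.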
I used the same technique for the second image of Lady Rai. My favorite of the whole thread.
Thanks for reading!
Can't wait to see your mummies come to life! Show them to me!
And if you want more curious things, sometimes reliable, sometimes not, sign up for my newsletter. An easy RT of the 1st tweet of the thread would also motivate me 🙏 Thx!
I've been rendering in MidJourney for over 200 hours but I didn't find out about this trick until yesterday!
MidJourney can receive not just one but SEVERAL input images, and it will come up with a new one mixing their "high-level concepts" 🤯🤯🤯
Here's how!👇
If you already know how to use init images in MidJourney, know that you can put multiple init URLs at the beginning of the prompt.
If you don't, follow these steps:
1. Drag & drop the images into your MidJourney bot's Discord chat to upload them.
2. Click on each uploaded image > "Open in Browser" > copy the image's web address.
3. Use these URLs at the beginning of your prompt (separated by a space), as many as images you want to mix.
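For example, a hypothetical mixing prompt (placeholder URLs) would look like this:
/imagine prompt: <URL of image 1> <URL of image 2> your text description here
MidJourney will blend the high-level concepts of both images together with the text.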
I have used Artificial Intelligence to reconstruct 5 historical figures from photos of their mummified remains 📜
1️⃣. Ramses II (1303-1213 BC). Approximately 90 years old. His successors and later Egyptians called him the "Great Ancestor".
We can also guess what Ramses II looked like when he was younger!
2️⃣. Guanche mummy of "Barranco de Herques" (Tenerife, Spain). The Guanches were of Berber origin (which explains why many of them were blond with blue eyes). The term Guanche comes from the word wanshen, which could be translated as "the men of Ashenshen", their name for Tenerife.