I wanted it to come up with a photo-realistic image in the body, using all sorts of tricks, like text conditioning with words such as "photo" and "natural", but it was hard.
If I used the prompt "DSLR photo" it simply gave me a photograph of the cat operating a digital camera :)
From Monday, the UK government has said, people who have graduated in the last five years from one of the eligible universities listed on its website will be able to apply for the UK's "high potential individual" visa.
In a decade, most of the creative content we see will be at least partially created using tools that incorporate machine learning models, whether we like it or not, simply due to the efficiency with which content can be created.
This is similar to how most illustrators, designers, and artists, professional or amateur, now use software tools for most of their creations, and how most photos are taken on smartphone digital cameras, creating the abundance of content that we have today.
But unlike previous trends, machine learning models are constantly updated with new data, data produced by our collective intelligence and reflecting the current state of our culture. If most of this new creative content is made using ML, it will lead to a weird feedback loop.
“The newest GPT-3 version (May 2022) actually did the worst at this task—they kept presenting me with real donuts that they’d seen during their training, and not even particularly weird donuts… The original early-2020 GPT-3 models were more willing to deliver the weirdness.”
There’s definitely a tradeoff (and also some “efficient Pareto frontier”) between the realism/accuracy axis and the creativity/weirdness axis. A bit similar to what I discussed in this thread:
A good hack might be to use an older-generation language model that can come up with “weird text” that’s deliberately not so realistic, and feed that weird text into a text-to-image model:
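As a rough sketch of this two-stage idea, here is what the pipeline could look like. Both model calls below are placeholders, not real APIs: in practice `generate_weird_text` would be an older, less polished language model sampled at high temperature, and `text_to_image` would be a modern text-to-image model.

```python
# Sketch of the "weird text -> image" pipeline. The two model functions
# are stand-ins (assumptions), not calls to any real library.

def generate_weird_text(seed_prompt: str) -> str:
    # Placeholder for an older-generation language model that embellishes
    # a plain prompt with deliberately unrealistic details.
    return seed_prompt + ", shaped like a Mobius strip, glowing faintly"

def text_to_image(prompt: str) -> str:
    # Placeholder for a text-to-image model; returns a description here.
    return f"<image rendered from: {prompt}>"

def weird_image(seed_prompt: str) -> str:
    # Stage 1: let the weaker model invent the weirdness.
    weird_prompt = generate_weird_text(seed_prompt)
    # Stage 2: let the stronger model handle the realistic rendering.
    return text_to_image(weird_prompt)

print(weird_image("a donut"))
```

The point of the split is that each model does what it is best at: the older model supplies the weirdness, and the newer model supplies the realism.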