New paper on #GPT4 & "Labor Market Impact" is out.
It's an early evaluation, but creatives should be paying attention to this ... 👇
🧵 1/5
All the jobs that show 0 exposure to being automated by LLMs are defined by
- blue-collar, manual labor
- outdoor or non-office
- lower educational requirements
- use of specialized tools or equipment
As we'll see: jobs w/ higher educational requirements ➡️ high exposure 🚨 2/5
☝️ In fact, if your job is defined by
- creative or intellectual labor
- office or indoor work
- high educational requirements
it's much more exposed to being replaced by #GPT alone (alpha) or by #GPT-based apps (beta & zeta): 3/5
➡️ ~50% of workers could have 50%+ of their tasks exposed to LLM-based automation.
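For context, my reading of the paper's exposure measures (worth verifying against the paper itself): each task gets a direct-exposure label E1 (the LLM alone saves significant time) and a label E2 (exposure once extra software is built on the LLM), which are combined roughly as:

```latex
\alpha = E_1, \qquad \beta = E_1 + 0.5\,E_2, \qquad \zeta = E_1 + E_2
```

So alpha measures exposure to the bare model, while beta & zeta fold in the apps built on top of it.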
❓ Time to be afraid of AI taking our jobs?
🤔 I don't think so. Be afraid instead of stagnating in a comfort zone where you expect human creativity to keep being expressed the same way it is today.
4/5
💡 My take: Humans cannot be taken out of the equation. But human creativity will be on a different level in some areas.
What I draw from this:
- Learn about the tech
- Learn to use the tools
- Learn how it will impact your industry
- Become a pioneer in your field
5/5
💪 For creatives, it's time to hone their skills, gather knowledge, and prepare for change.
Here's a start: weekly updates on AI-powered creative storytelling with "Tales Of Tomorrow":
The model is simply called "text-to-video synthesis". A brief summary:
- 1.7 billion parameters 🔥
- training data includes public datasets like LAION-5B (5.85 billion image-text pairs), ImageNet (14 million images) & WebVid (10 million video-caption pairs) 📊
- open source 💪
Text-to-video synthesis consists of three sub-networks that work together to produce short MP4 video clips (quick usage sketch after the list):
- a text feature extraction model,
- a text-feature-to-video diffusion model,
- and a video-to-video diffusion model.
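For anyone wanting to try it, here's a minimal sketch using the Hugging Face diffusers port of the model (checkpoint name as listed on its model card; assumes a CUDA GPU and recent diffusers/accelerate, so verify against current docs):

```python
# Minimal text-to-video sketch via the diffusers port of the 1.7B model.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # ModelScope text-to-video checkpoint
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keeps VRAM usage modest on consumer GPUs

# The pipeline runs the three stages above end to end and returns raw frames.
frames = pipe("A panda eating bamboo on a rock", num_inference_steps=25).frames
video_path = export_to_video(frames)  # writes a short MP4, returns its path
print(video_path)
```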
However, 3.6:1 (and higher ratios) seems to work better if you drop the cinematic prefixes (cinematic shot, film still, etc.) 🤷‍♂️🤔
Here it's just scene & style description. 3.6:1, no letterboxing.
despite the letterboxing, exploring 4:1 is fun ... 😁😉 #MidjourneyV5
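If you want to replicate this: Midjourney expects whole-number aspect ratios via the --ar parameter, so 3.6:1 is written as 18:5 (the prompt below is my own illustration, not one from these posts):

```
/imagine prompt: rain-soaked harbor at dusk, muted teal palette, wide vista --ar 18:5 --v 5
```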
I generated the images for the game with #midjourney
Prompted for basic 1:1 images, using the tried and tested combination of "white background" and "--no background" to prep transparent PNGs
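Midjourney itself only returns opaque images, so the transparency comes from a post-processing step. A minimal sketch with Pillow, assuming the trick above leaves a near-white backdrop (file names and threshold are mine):

```python
# Map near-white background pixels to full transparency for sprite use.
from PIL import Image

def white_to_alpha(path_in: str, path_out: str, threshold: int = 245) -> None:
    img = Image.open(path_in).convert("RGBA")
    cleaned = [
        (r, g, b, 0) if min(r, g, b) > threshold else (r, g, b, a)
        for (r, g, b, a) in img.getdata()
    ]
    img.putdata(cleaned)
    img.save(path_out)  # PNG keeps the alpha channel

white_to_alpha("squid_raw.png", "squid_sprite.png")  # hypothetical file names
```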
Then I used #GPT4 via #ChatGPT & gave it the basic story for the game: Squid-Spaceship having to collect little baby squid astronauts, etc...
Asked the AI to come up with HTML/JS code & CSS, then helped it debug & adjust the mechanics (smooth flight paths, etc.)
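The code-generation step could look roughly like this with the 2023-era openai-python client (prompt wording and file name are mine; the real back-and-forth happened in #ChatGPT):

```python
# Ask GPT-4 for a first draft of the game, then iterate on it manually.
import openai  # assumes OPENAI_API_KEY is set in the environment

brief = (
    "Write a small single-file HTML5 canvas game (HTML + CSS + JS): "
    "a squid spaceship flies around collecting baby squid astronauts. "
    "Use smooth, eased flight paths for all movement."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful game developer."},
        {"role": "user", "content": brief},
    ],
)

with open("squid_game.html", "w") as f:
    f.write(response.choices[0].message.content)  # then debug & tweak by hand
```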
#AIArt is about to revolutionize #screenwriting! 🤯 As AI tools become more intuitive and easy to use, some think production companies might prioritize profit over storytelling & we'll be binging stunningly beautiful visuals paired with incredibly boring plots... 🤔
A thread 🧵
1) We could be in for a wild ride here:
As with all technology, there is a risk of it being used in a way that strictly prioritizes profit: AI-generated material might be used to create visually appealing stories that lack depth and originality.
2) Yet AI will likely revolutionize screenwriting in a similar way it's currently enhancing visual expression and #AIArt!
Training models with dramaturgical data and narrative finesse is key. And it's already underway 💪 E.g. Dramatron, check @korymath's
Here's a technique for movie concept creation I'm currently experimenting with:
Using #screenwriting devices to prompt #GPT3 for basic story ideas & plots, and then using those to generate #midjourney #aiart images depicting key scenes, characters & settings...
Not only is this a huge time saver for #writers when brainstorming or trying out different dramatic scenarios, but you also get images that further spark your imagination...
... and this inspiration goes both ways: images create new approaches to dramaturgical twists or character development, and storytelling devices literally create new perspectives on the story 🤯
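To make the loop concrete, here's a hedged sketch of the text half of this technique with the 2023-era openai-python client (device choice, prompt wording and model name are my own illustration):

```python
# Screenwriting device -> GPT-3 logline -> image prompt for Midjourney.
import openai  # assumes OPENAI_API_KEY is set in the environment

def complete(prompt: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.9,
    )
    return resp.choices[0].text.strip()

device = "Chekhov's gun"
logline = complete(
    f"Using the screenwriting device '{device}', write a one-paragraph "
    "movie logline with a clear protagonist, goal, and obstacle."
)
image_prompt = complete(
    "Turn this logline into a single vivid image prompt describing the "
    f"key scene, characters and setting:\n\n{logline}"
)
print(image_prompt)  # paste into Midjourney to get the key-scene images
```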