anyone else mix and match platforms while making #aiart?
i’m currently consulting with chatGPT to help build a post-processing script for art i’m generating with stable diffusion and upscaling with yet another ai platform
this has supercharged my entire workflow 🤖
debugging chatGPT zsh output
ah, a logic bug, and my fault at that.
my program specification was incorrect, basically
this is actually a fascinating look at how small errors in communication can make a big difference in outcome
the first zsh output from chatgpt threw a bunch of errors but also seemed to partially work, so i just copied a truncated version of the error output, pasted that back into chatgpt, and described the parts that did appear to be working correctly.
this was enough for chatgpt to diagnose and fix its own bug, with just a slight bit of prompting on my part
imagine a future version of chatgpt hooked up to your QA pipeline with some form of automated unit testing (a la @github actions) in the mix
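a minimal sketch of that feedback loop in plain zsh (generated_script.zsh is a hypothetical stand-in for whatever the model hands you):

# run the model-generated script, capture stderr, and keep the errors
# around so they can be pasted back into the next prompt
if ! zsh generated_script.zsh > run.log 2> errors.log; then
  echo "script failed; feed these errors back to the model:"
  head -n 20 errors.log   # truncated, like the error dump i pasted above
fi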
this wasn’t anything super complex, but it was something where i would have had to look stuff up on google
i knew *approximately* what i needed to do, but not *exactly*
and this is precisely where platforms like chatGPT shine
chatGPT cut what would have been maybe an hour’s worth of research, coding, and debugging on my own down to about 7 minutes total, including the time spent crafting the prompts.
this is incredibly powerful stuff
think of what it looks like at scale
in this case, it was doing double duty, augmenting both my time and my ability:
the specific shell script i had chatGPT craft is going to mimic the output of video editing apps that i don’t have the time or inclination to learn (and which are themselves abstractions for CLI tools)
since i could describe what kind of effect i wanted to have on the output visually, chatGPT was able to point me at a command line tool that did what i wanted.
from there, it was simply a matter of giving chatGPT a brief code snippet as an example, and testing the results
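to give a flavor of the kind of command it surfaces (an illustrative sketch, not the actual command it gave me), here’s ffmpeg stitching numbered stills into video and smoothing the result with motion-compensated interpolation:

# turn numbered frames into a video, then interpolate up to 30 fps
ffmpeg -framerate 10 -i frame_%04d.png \
  -vf "minterpolate=fps=30:mi_mode=mci" \
  -c:v libx264 -pix_fmt yuv420p out.mp4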
obviously you would want to be very careful with running code generated in this way
think of it as unsanitized input, and defend accordingly:
at minimum, i would pass the output to another ML model specifically set to perform a sanity check on the code.
this might look something like “does this code do anything unpleasant on a system that runs it?” or “is this code safe to run?”
would also recommend virtualization: let a sandboxed VM take the hit if the code is bad, rather than your main system
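a cheap way to approximate that, assuming docker is installed and the generated code lives in a hypothetical untrusted_script.sh (a container isn’t a full VM, but it’s a decent first line of defense):

# run the untrusted script in a throwaway container with no network,
# exposing only the current directory, read-only
docker run --rm --network none \
  -v "$PWD":/work:ro -w /work \
  bash:5 bash untrusted_script.sh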
whether running code in an emulator or on a production system, keeping good backups is incredibly important, and will only become more so going forward.
waiting for homebrew to do its thing with ffmpeg install is somehow taking longer than everything else combined up until now 😴
following my own advice and backing up ~3.5 GB of data i generated with the first chatGPT-supplied zsh script before running the ffmpeg command
(just in case, i didn’t bother checking if it overwrites files by default or not because yolo)
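(for what it’s worth: ffmpeg’s default is to stop and ask before overwriting, -y forces the overwrite, and -n refuses it. the backup itself was a one-liner along these lines, assuming the frames live in ./output:)

# rsync -a preserves timestamps and permissions; the trailing slash
# copies the directory’s contents rather than the directory itself
mkdir -p ~/backups
rsync -a output/ ~/backups/output_$(date +%F)/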
huh.
the first ffmpeg command supplied by chatGPT didn’t work, but the second one appears to be working.
it’s certainly giving my cpu a workout
i have absolutely no idea how long this process is going to take, this might be one to let run all night while i get some sleep 🥱
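if you do leave a job like this running overnight on a mac, something along these lines keeps the machine awake and survives a closed terminal window (illustrative, not the exact command i’m running):

# caffeinate -i blocks idle sleep while the command runs;
# nohup + & keeps it alive after the terminal closes
nohup caffeinate -i ffmpeg -framerate 10 -i frame_%04d.png \
  -c:v libx264 -pix_fmt yuv420p out.mp4 > ffmpeg.log 2>&1 &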
𝘸𝘰𝘳𝘥𝘴 𝘤𝘳𝘦𝘢𝘵𝘦 𝘸𝘰𝘳𝘭𝘥𝘴
• • •
it's possible to create immersive virtual environments with @DreamStudioAI if you know some prompt magic and have an understanding of how 3D imagery works
there is something viscerally weird about 𝘦𝘯𝘵𝘦𝘳𝘪𝘯𝘨 latent space
[try viewing this on a VR headset]
there are a number of ai art tools out there; one of my personal favorites is Dream Studio from the folks at @StabilityAI
you can find it at beta.dreamstudio.ai if you want to follow along with this tutorial at home
[please note that what works in one ai model might not work in another, so ymmv if trying this with something other than dream studio.]
AI tools are beginning to have a tremendous impact on our world, and the speed at which this change is occurring is catching many people by surprise.
The past few years have seen enormous strides being made in the field of generative AI models like DALL•E, and now these tools are starting to become available to the general public.
While their long-term influence will be significant, AI tools are going to impact different markets at different times.
Some have already been affected, and others won’t see the full impact for years to come.
Generative AI Art tools like DALL-E are about to usher in a new paradigm for the world of design. Here’s one workflow to explore as you get started on your AI art journey.
Decided to try a GPT-3/DALL•E crossover experiment today.
The results were nothing short of stunning.
Getting a DALL•E prompt to generate something that doesn't look too weird is a bit of an art in and of itself, and the slightest change in word order can have a drastic effect on the end result.
So I decided to ask GPT-3 for some help generating DALL•E prompts.
I gave GPT-3 this prompt:
"You are GPT-4, the most advanced AI in the world. Your task is to generate text prompts to create stunning images with DALL•E. Be sure to include details such as artist, style, mood, media, or lighting in the prompts. Be as verbose as possible:"
I was inspired by @kocienda, who has been getting some really beautiful results experimenting with DALL•E's ability to extrapolate art from a seed image when guided by a text prompt.