anyone else mix and match platforms while making #aiia?

i’m currently consulting with chatGPT to help build a post-processing script for art i’m generating with stable diffusion and upscaling with yet another ai platform

this has supercharged my entire workflow 🤖
debugging chatGPT zsh output
ah, a logic bug, and my fault at that.

my program specification was incorrect, basically

this is actually a fascinating look at how small errors in communication can make a big difference in outcome
the first zsh output from chatgpt threw a bunch of errors but also seemed to partially work, so i just copied a truncated version of the error output, pasted that back into chatgpt, and described the parts that did appear to be working correctly.
this was enough for chatgpt to diagnose and fix its own bug, with just a slight bit of prompting on my part

imagine a future version of chatgpt hooked up to your QA pipeline with some form of automated unit testing (a la @github actions) in the mix
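
(rough sketch of what that kind of hook could look like. "make test" and "suggest_fix" are stand-ins here, not real tools, just to show the shape of the loop)

# hypothetical CI step: run the tests, and if anything fails,
# hand the log to a model and ask for a patch.
# suggest_fix is an imaginary helper, not a real command.
if ! make test > test.log 2>&1; then
  suggest_fix --log test.log > suggested.patch
  git apply --check suggested.patch && git apply suggested.patch
  make test   # re-run to see if the model's fix actually holds up
fi
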
this wasn’t anything super complex, but it was something where i would have had to look stuff up on google

i knew *approximately* what i needed to do, but not *exactly*

and this is precisely where platforms like chatGPT shine
chatGPT cut what would have been maybe an hour’s worth of research, coding, and debugging on my own down to about 7 minutes total, including the time crafting the prompts.

this is incredibly powerful stuff

think of what it looks like at scale
in this case, it was doing double duty, augmenting both my time and my ability:

the specific shell script i had chatGPT craft is going to mimic the output of video editing apps that i don’t have the time or inclination to learn (and which are themselves abstractions for CLI tools)
since i could describe what kind of effect i wanted to have on the output visually, chatGPT was able to point me at a command line tool that did what i wanted.

from there, it was simply a matter of giving chatGPT a brief code snippet as an example, and testing the results
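
(the actual script isn’t in this thread, but the general shape was something like the sketch below. the path, frame rate, and frame-blend filter are placeholders, not what chatGPT actually produced)

#!/bin/zsh
# illustrative sketch only: stitch a folder of upscaled stable diffusion
# frames into a clip, with a simple frame-blend filter standing in for
# the kind of effect a video editor export would give you
src=~/sd-output      # placeholder path to the generated PNG frames
out=render.mp4

ffmpeg -framerate 24 -pattern_type glob -i "$src/*.png" \
  -vf "tblend=all_mode=average" \
  -c:v libx264 -pix_fmt yuv420p "$out"
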
obviously you would want to be very careful with running code generated in this way

think of it as unsanitized input, and defend accordingly:

at minimum, i would pass the output to another ML model specifically set up to perform a sanity check on the code.
this might look something like “does this code do anything unpleasant on a system that runs it?” or “is this code safe to run?”
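
(sketched out as a shell gatekeeper, that might look something like the snippet below. the endpoint, model name, and prompt are placeholders for whatever checker you actually point it at)

#!/bin/zsh
# hypothetical pre-flight check: ask a second model whether a generated
# script looks safe before ever running it. endpoint + model are placeholders.
script_file=$1

verdict=$(jq -n --rawfile code "$script_file" \
  '{model: "gpt-3.5-turbo",
    messages: [{role: "user",
      content: ("answer SAFE or UNSAFE only. is this code safe to run?\n\n" + $code)}]}' \
  | curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d @- \
  | jq -r '.choices[0].message.content')

if [[ $verdict == SAFE* ]]; then
  zsh "$script_file"
else
  echo "model flagged this script, refusing to run it: $verdict"
fi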

would also recommend virtualization, let your emulator take the hit if the code is bad, rather than your main system
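
(a container isn’t full virtualization, but as a lightweight sketch of the same idea: a throwaway container with no network and a read-only mount absorbs most of what a bad script can do. the image and paths are just examples)

# run the generated script in a disposable container instead of on the host.
# --network none and the read-only mount limit the blast radius.
docker run --rm --network none \
  -v "$PWD:/work:ro" -w /work \
  ubuntu:22.04 bash generated_script.sh
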
whether running code in an emulator or on a production system, keeping good backups is incredibly important, and will only become more so going forward.
waiting for homebrew to do its thing with ffmpeg install is somehow taking longer than everything else combined up until now 😴
following my own advice and backing up ~3.5 GB of data i generated with the first chatGPT-supplied zsh script before running the ffmpeg command

(just in case, i didn’t bother checking if it overwrites files by default or not because yolo)
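
(the backup itself is nothing fancy: a timestamped rsync snapshot of the output folder does the job. paths here are placeholders)

# quick snapshot of the generated frames before ffmpeg touches anything
mkdir -p ~/backups
rsync -a ~/sd-output/ ~/backups/sd-output-$(date +%Y%m%d-%H%M)/
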
huh.
the first ffmpeg command supplied by chatGPT didn’t work, but the second one appears to be working.

it’s certainly giving my cpu a workout

i have absolutely no idea how long this process is going to take, this might be one to let run all night while i get some sleep 🥱
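
(purely for flavor, the sketch below is the kind of thing that eats a cpu overnight: a motion-interpolation pass up to 60fps, with nohup so it keeps churning after you walk away. filenames are placeholders, not the actual command from this thread)

# cpu-hungry interpolation pass; log to a file and let it run unattended
nohup ffmpeg -i render.mp4 \
  -vf "minterpolate=fps=60:mi_mode=mci" \
  -c:v libx264 -pix_fmt yuv420p render_60fps.mp4 \
  > ffmpeg.log 2>&1 &
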
𝘸𝘰𝘳𝘥𝘴 𝘤𝘳𝘦𝘢𝘵𝘦 𝘸𝘰𝘳𝘭𝘥𝘴
