The two common approaches are CLIP guided diffusion and VQGAN+CLIP. There are many script variations on these, where people have written their own bespoke functions and methods. These are written in Python, and you will find most of them hosted on GitHub.
There are a few different Discords, but one of the best public ones is the #art channel on the EleutherAI Discord. Do try to work things out for yourself, but don't be afraid to ask questions; it's a very friendly community.
Two good Twitter accounts to follow are @riversHaveWings (author of CLIP guided diffusion) and @nshepperd1 (who has their own JAX guided diffusion variation).
These scripts have two parts to learn: prompts and settings. Prompts consist of three parts: context (what to draw), style (influence the output by naming an artist as an example) and quality (keywords that influence the output, like "unreal engine" for a more 3D effect).
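As a rough illustration of how those three parts fit together (the wording here is made up, not from any particular script):

```python
# Illustrative only - the phrases below are example values, not canon.
context = "a castle on a cliff at sunset"            # what to draw
style = "in the style of Greg Rutkowski"             # artist influence
quality = "trending on artstation, unreal engine"    # quality keywords

prompt = f"{context}, {style}, {quality}"
print(prompt)
# a castle on a cliff at sunset, in the style of Greg Rutkowski, trending on artstation, unreal engine
```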
@remi_durant has a great site showing examples of artist names and keywords and how they influence the output: linktr.ee/remi_durant. It also offers some insight into settings, predominantly around CLIP guided diffusion scripts.
Settings depend on the script and the author's (or your own) customisations. They may include the model (produced by the image set it was trained on), the perceptors used to score how closely the AI thinks each iteration matches your prompt, or values like cutn or clip_guidance.
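As an example, a settings block in one of these notebooks might look something like the sketch below. The exact names and defaults vary per script, so treat these as assumptions modelled on common CLIP guided diffusion Colabs, not a canonical list:

```python
# Illustrative settings - names and values are assumptions, check your script.
settings = {
    "clip_model": "ViT-B/32",          # perceptor used to score each iteration
    "diffusion_model": "512x512_diffusion_uncond_finetune_008100",  # trained checkpoint
    "clip_guidance_scale": 1000,       # how strongly CLIP steers the diffusion
    "cutn": 16,                        # number of image cutouts scored per step
    "seed": None,                      # None = a random seed each run
}
```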
A good way to learn is to look at an example produced by someone else who has included their prompt. Try to reproduce it; you won't get it exactly right without identical settings and seed number. Once you're close, play with the settings to understand what each one does.
One additional setting to note is the seed number. It is generated at random on each run, so if you wish to repeat a run with different quality settings in the future, you must make a note of the seed and set it explicitly.
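A minimal sketch of pinning the seed in a Python script, assuming it uses the standard random, NumPy and PyTorch generators (which of these actually matter depends on the script):

```python
import random
import numpy as np
import torch

seed = 1234567          # noted down from a previous run
random.seed(seed)       # Python's built-in RNG
np.random.seed(seed)    # NumPy RNG
torch.manual_seed(seed) # PyTorch RNG (CPU and CUDA)
```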
It's a good idea to keep copies of your scripts and settings. I like to keep them per project: if I'm working on dragons, I'll keep a copy of everything for that run (I use GitHub to manage it). It makes it easier to iterate, return to projects and carry learnings into new runs.
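One simple way to snapshot the settings for each run; the folder layout and values here are just a suggestion, not from any particular script:

```python
import json
from pathlib import Path

# Example values - one folder per project, one subfolder per run.
settings = {"prompt": "a dragon perched on a ruined tower",
            "clip_guidance_scale": 1000, "cutn": 16, "seed": 1234567}

run_dir = Path("projects/dragons/run_003")
run_dir.mkdir(parents=True, exist_ok=True)
(run_dir / "settings.json").write_text(json.dumps(settings, indent=2))
```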
Each script has strengths and weaknesses. Try out different variations, find what works best for your subject matter, and note it for later.
To run these scripts you will need a GPU instance; at least 16GB of VRAM is recommended. A good starting point is the Google Colab service; most of the scripts mentioned above will have a Google Colab version ready to go.
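Once a Colab runtime is assigned, it's worth checking which GPU you got before starting a long run. A quick check, assuming PyTorch is installed (it is by default on Colab):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    vram = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"{vram:.1f} GB VRAM")
else:
    print("No GPU assigned - enable one via Runtime > Change runtime type")
```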
When you get more experienced and need larger resolutions or more resource-hungry runs, you can look at paid services that offer A100 instances with 80GB+ of VRAM for x $ per hour.
You may have to go through many runs to get a single usable image. It's best to work out a rough approximation of your prompt at lower settings (quicker runs), then re-run at high quality settings.
This can still lead to slight variations, so one way to produce a near copy at higher resolution is to use your lower-quality image as the init image for a higher-resolution run.
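A minimal sketch of preparing that init image with PIL; the file names are hypothetical, and the setting the script exposes for it varies (often called something like init_image):

```python
from PIL import Image

# Upscale a low-resolution output for use as the init image of a larger run.
low_res = Image.open("projects/dragons/run_003/output_256.png")
init = low_res.resize((512, 512), Image.LANCZOS)
init.save("projects/dragons/run_003/init_512.png")
# Point the script's init image setting at init_512.png and keep the same
# prompt and seed to get a near copy at the higher resolution.
```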
Another approach is to use image upscalers like Topaz Gigapixel to increase resolution, but these do have trade-offs. Experiment and find what works for you.