2/ This model is unique as it was fine-tuned from the Stable Diffusion 2 base with an extra channel for depth.
Using MiDaS (a model that predicts depth from a single image), it can create new images whose depth maps match your "init image"
3/ I set the denoising strength to 1.0 so that none of the original RGB image was used
Even with widely different prompts it was able to generate consistent objects
Using simple, recognizable shapes such as wooden doll-house furniture worked great for this
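For anyone who wants to try this, here is a rough sketch using the 🤗 diffusers depth2img pipeline (the file name and prompt are placeholders, not my exact setup):

```python
# Rough sketch: Stable Diffusion 2 depth2img via diffusers.
# strength=1.0 discards the original RGB entirely; only the MiDaS depth map
# derived from the init image constrains the new generation.
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("dollhouse_photo.jpg")  # placeholder filename
prompt = "A beautiful rustic Balinese villa, modern bedroom, design minimalism"

image = pipe(prompt=prompt, image=init_image, strength=1.0).images[0]
image.save("rendered_room.png")
```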
4/ Regular photos ended up having an unavoidable “doll-house” feel to them (even with heavy prompt tweaking) due to the extreme perspective.
I found that changing to a longer focal length (3x on an iPhone) and capturing from further away resolved this.
5/ Here are a few of the prompts used:
"A beautiful rustic Balinese villa, architecture magazine, modern bedroom, infinity pool outside, design minimalism, stone surfaces"
6/ "Luxurious modern studio bedroom, trending architecture magazine photo, colorful framed art hanging over bed, design minimalism, furry white rugs, trendy, industrial, pop art, boho chic"
8/ There is some “creativity” in how the model matches the depth map to the prompt.
Here are a few outtakes where the model tried to match the plant to antlers, toys, candles, statues, a double-necked guitar and even a kid with Mickey ears 🤯
Follow for more creative experiments 👨‍🎨
I “jailbroke” a Google Nest Mini so that you can run your own LLMs, agents and voice models.
Here’s a demo using it to manage all my messages (with help from @onbeeper)
🔊 on, and wait for the surprise guest!
I thought hard about how to best tackle this and why, see 🧵
After looking into jailbreaking options, I opted to completely replace the PCB.
This lets you use a cheap ($2) but powerful & developer-friendly WiFi chip with a highly capable audio framework.
This allows a paradigm of multiple cheap edge devices for audio & voice detection…
& offloading large models to a more powerful local device (whether your M2 Mac, PC server w/ GPU or even "tinybox"!)
In most cases this device is already trusted with your credentials and data, so you don’t have to hand these off to some cloud and your data need never leave your home
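A minimal sketch of that edge-device + local-brain pattern (purely illustrative: it assumes the edge device POSTs a short WAV clip after wake-word detection, with whisper and an Ollama-served model standing in for whatever you actually run locally):

```python
# Minimal sketch of the "cheap edge device + local brain" pattern:
# the Nest-sized device only captures audio and POSTs it here;
# transcription and the LLM run on a beefier local machine.
# Assumes openai-whisper and a local Ollama server; swap in your own models.
import tempfile
import requests
import whisper
from flask import Flask, request, jsonify

app = Flask(__name__)
stt = whisper.load_model("base")  # small speech-to-text model, runs locally

@app.post("/voice")
def voice():
    # Edge device sends a short WAV clip recorded after wake-word detection
    with tempfile.NamedTemporaryFile(suffix=".wav") as f:
        f.write(request.data)
        f.flush()
        text = stt.transcribe(f.name)["text"]

    # Hand the transcript to a locally served LLM (Ollama here, purely illustrative)
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": text, "stream": False},
        timeout=60,
    ).json()
    return jsonify({"transcript": text, "reply": resp["response"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```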
I wanted to imagine how we’d better use #stablediffusion for video content / AR.
A major obstacle (and the reason most videos are so flickery) is the lack of temporal & viewing-angle consistency, so I experimented with an approach to fix this
See 🧵 for process & examples
Ideally you want to learn a single representation of an object across time or different viewing directions, so you can perform a *single* #img2img generation on it.
This learns an "atlas" to represent an object and its background across the video.
Regularization losses during training help preserve the original shape, with a result that resembles a usable, slightly "unwrapped" version of the object
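A sketch of the "edit once, propagate everywhere" idea, assuming you already have a per-frame UV map from a layered-neural-atlas style training (file names, prompt and strength are placeholders, not my exact settings):

```python
# Edit the learned atlas ONCE with img2img, then re-sample the edited atlas
# into every frame via its UV map -> temporally consistent by construction.
# Assumes uv_maps.pt holds per-frame (H, W, 2) grids in [-1, 1], float32.
import numpy as np
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# 1) Single img2img pass over the atlas image
atlas = Image.open("atlas.png").convert("RGB")
edited = pipe(prompt="a bronze statue of the plant", image=atlas, strength=0.6).images[0]
edited_t = torch.from_numpy(np.array(edited)).permute(2, 0, 1)[None].float() / 255.0

# 2) Warp the edited atlas back into each frame
for i, uv in enumerate(torch.load("uv_maps.pt")):
    frame = F.grid_sample(edited_t, uv[None], align_corners=True)
    out = (frame[0].permute(1, 2, 0) * 255).byte().numpy()
    Image.fromarray(out).save(f"frame_{i:04d}.png")
```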
We are getting closer to “Her” where conversation is the new interface.
Siri couldn’t do it, so I built an e-mail summarizing feature using #GPT3 and life-like #AI generated voice on iOS.
(🔈 Audio on to be 🤯 with the voice realism!)
How did I do this? 👇
I used the Gmail API to feed recent unread e-mails into a prompt and send it to the @OpenAI #GPT3 Completion API. Calling out details, such as not “just reading them out”, and other prompt tweaks gave good results
Here are the settings I used; you can see how #GPT3 does a great job of conversationally summarizing. (For the sake of privacy I made up the e-mails shown in the demo)
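A rough sketch of the pipeline (Gmail credentials, the prompt wording and the completion settings here are illustrative, not the exact demo values):

```python
# Pull unread Gmail snippets, ask GPT-3 to summarize them conversationally.
# Assumes you already have Gmail API credentials and openai.api_key set;
# uses the GPT-3-era Completions endpoint (text-davinci-003).
import openai
from googleapiclient.discovery import build

def unread_snippets(creds, n=5):
    gmail = build("gmail", "v1", credentials=creds)
    msgs = gmail.users().messages().list(
        userId="me", q="is:unread", maxResults=n
    ).execute()
    out = []
    for m in msgs.get("messages", []):
        full = gmail.users().messages().get(userId="me", id=m["id"]).execute()
        out.append(full.get("snippet", ""))
    return out

def summarize(snippets):
    prompt = (
        "Summarize these unread e-mails conversationally, as a helpful assistant. "
        "Do not just read them out verbatim; group related ones and keep it brief.\n\n"
        + "\n---\n".join(snippets)
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # illustrative settings, not the demo's exact ones
        prompt=prompt,
        temperature=0.7,
        max_tokens=256,
    )
    return resp.choices[0].text.strip()
```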
I used AI to create a (comedic) guided meditation for the New Year!
(audio on, no meditation pose necessary!)
Used ChatGPT for an initial draft, and TorToiSe trained on only 30s of audio of Sam Harris
See 🧵 for implementation details
ChatGPT came up with some creative ideas, but the delivery was still fairly vanilla, so I iterated on it heavily and added a few Sam-isms from my experience with the @wakingup app (Jokes aside - highly recommended)
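The voice side, sketched with tortoise-tts (clip names, count and preset are placeholders; roughly 30s of reference audio in total):

```python
# Minimal voice-cloning sketch with tortoise-tts: condition generation on a
# few short reference clips of the target voice stored in ./voice/.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()

# A handful of short WAV clips of the reference voice, loaded at 22.05 kHz
clips = [load_audio(f"voice/clip_{i}.wav", 22050) for i in range(3)]

script = "Welcome to this guided meditation for the New Year..."
audio = tts.tts_with_preset(script, voice_samples=clips, preset="fast")

# Tortoise outputs 24 kHz audio
torchaudio.save("meditation.wav", audio.squeeze(0).cpu(), 24000)
```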
Diffusion models & autoregressive transformers are coming for audio!