Today's release of macOS Ventura 13.1 Beta 4 and iOS and iPadOS 16.2 Beta 4 includes optimizations that let Stable Diffusion run more efficiently on the Apple Neural Engine as well as on the Apple Silicon GPU.
We share sample code for converting models from PyTorch to Core ML, along with example Python pipelines for text-to-image generation that run the Core ML models using coremltools and diffusers.
As a highlight, the baseline configuration of an M2 MacBook Air with 8GB RAM runs huggingface.co/stabilityai/st… for 50 iterations in 18 seconds.
For distilled #StableDiffusion2, which requires 1 to 4 iterations instead of 50, the same M2 device should generate an image in under 1 second.
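The timing claims above can be sanity-checked with simple arithmetic. This is a rough linear extrapolation from the quoted 18 seconds for 50 iterations; it assumes a constant per-iteration cost and ignores fixed per-image overhead such as text encoding and VAE decoding, so it is an estimate, not a benchmark:

```python
# Quoted benchmark: 50 denoising iterations in 18 s on an M2 MacBook Air.
total_seconds = 18.0
iterations = 50

# Approximate cost of a single denoising iteration.
per_iteration = total_seconds / iterations  # 0.36 s per iteration

# A distilled model needing only 1 to 4 iterations would then take roughly:
low_estimate = 1 * per_iteration   # ~0.36 s
high_estimate = 4 * per_iteration  # ~1.44 s

print(f"{per_iteration:.2f} s/iteration; "
      f"distilled estimate: {low_estimate:.2f}-{high_estimate:.2f} s")
```

At the low end of the iteration count this lands well under 1 second, consistent with the claim; at 4 iterations the linear estimate slightly exceeds it, which is where the omitted fixed costs and any further distillation-specific optimizations would matter.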
If you are excited about this field and would like to work on applied R&D in generative models, send me a note or come to the Apple booth at #NeurIPS22 to chat with us!