curious/confused about all the async autonomous coding agents out there? it can be tricky at first to figure out how to use em. we wrote up a guide
the five biggest things you should know imo 🧵
1/ as a human your time is the most valuable thing. so stop getting stuck trying to figure out which of 10 approaches is the best one. just make agents try all of them for u
it's often hard/impossible to know in advance which approach is best, e.g. when weighing competing designs
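here's a minimal sketch of the fan-out idea in python. run_agent() is a made-up placeholder, not a real agent API; swap in whatever your agent tool actually exposes:

```python
import asyncio

# run_agent() is a hypothetical stand-in for whatever async coding agent
# you use; the real API will look different.
async def run_agent(approach: str) -> str:
    await asyncio.sleep(0.1)              # pretend the agent is off working
    return f"PR opened for: {approach}"

async def main() -> None:
    approaches = [
        "refactor to a queue-based worker",
        "inline the logic with a cache",
        "split the service into two endpoints",
    ]
    # fan out: one agent run per candidate design, all in flight at once
    results = await asyncio.gather(*(run_agent(a) for a in approaches))
    for r in results:
        print(r)                          # review the resulting PRs, keep the winner

asyncio.run(main())
```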
2/ cannot emphasize enough how helpful it is to set up preview deployments, e.g. hook up your github to @vercel
good async agents can look at these to check their work, and it's also way more convenient to test a preview link than to check out the commit locally and run everything to spin up localhost yourself
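minimal sketch of the kind of check this unlocks. the preview URL below is made up, but once the GitHub-to-Vercel hookup is live every PR gets one like it, and a script (or the agent itself) can hit it:

```python
import urllib.request

# made-up preview URL: once the GitHub -> Vercel integration is connected,
# every PR gets a unique URL of roughly this shape posted on the PR
PREVIEW_URL = "https://my-app-git-feature-branch-myteam.vercel.app"

# minimal smoke test: confirm the preview deploy responds before reviewing it;
# an agent (or a CI step) can run the same kind of check against its own work
with urllib.request.urlopen(PREVIEW_URL, timeout=10) as resp:
    assert resp.status == 200, f"preview returned {resp.status}"
    print("preview is live:", PREVIEW_URL)
```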
We embedded 250,000 works of art 🎨 from The Met using @nomic_ai's new SOTA #multimodal embeddings model!
It's the *first ever* semantic search tool of its kind 👩‍🎨 🔎
Search with smart queries like "oil painting with flowers & dogs".
How we did it & how to use it 👇
Nomic just released their newest model, Nomic Embed Vision.
It transforms the way we can interact with images. Instead of using CLIP to get text captions for models and using those captions for semantic search, you can work directly with image embeddings!👇
"A picture is worth a thousand words", so any caption is going to be super lossy. By dealing directly with the original image embeddings, you retain all the important aspects of the image, from style to subject.
We realized Nomic Embed Vision would be perfect for art discovery 👇
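rough sketch of the search loop, assuming the `nomic` python client's embed.text()/embed.image() helpers and the v1.5 model names (check their docs for exact signatures). the precomputed artwork embeddings file is hypothetical:

```python
import numpy as np
from nomic import embed   # assumes the `nomic` python client is installed

# embed the text query into the same latent space as the image embeddings;
# the embed.text()/embed.image() helpers, model names, and task_type reflect
# my assumption of the Nomic client API, so verify against their docs
query = "oil painting with flowers & dogs"
resp = embed.text(texts=[query],
                  model="nomic-embed-text-v1.5",
                  task_type="search_query")
q = np.array(resp["embeddings"][0])

# hypothetical file of precomputed nomic-embed-vision embeddings for the
# artworks, e.g. built offline with embed.image(images=[...]); shape (N, d)
image_vecs = np.load("met_image_embeddings.npy")

# cosine similarity between the query and every artwork, then take the top hits
sims = image_vecs @ q / (np.linalg.norm(image_vecs, axis=1) * np.linalg.norm(q))
top = np.argsort(-sims)[:5]
print("best matching artworks (row indices):", top)
```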
🔔 the guy who invented the LSTM just dropped a new LLM architecture! (Sepp Hochreiter)
A major component is a new parallelizable LSTM.
⚠️ one of the major weaknesses of prior LSTMs was their sequential nature (steps can't all be computed at once)
Everything we know about the XLSTM: 👇👇🧵
1/ Three major weaknesses of LSTMs that make Transformers better:
"Inability to revise storage decisions"
"Limited storage capacities"
"Lack of parallelizability due to memory mixing".
SEE THE GIF if you don't get it: LSTMs are sequential, which basically means you have to go through the green boxes (simplified) one after the other. You need the result from the prior box before you can move on.
Transformers don't do this. They parallelize operations across tokens, which is a really really big deal.
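toy numpy sketch of the difference (a simplified recurrence, not a real LSTM cell): the loop has to wait for h[t-1] at every step, while the attention-style mixing is just batched matmuls over all tokens at once:

```python
import numpy as np

T, d = 6, 4                              # toy sequence length and hidden size
x = np.random.randn(T, d)                # token inputs
W, U = np.random.randn(d, d), np.random.randn(d, d)

# sequential recurrence (LSTM-style, heavily simplified):
# step t can't start until h[t-1] exists, i.e. the green-box chain from the GIF
h = np.zeros((T, d))
for t in range(T):
    prev = h[t - 1] if t > 0 else np.zeros(d)
    h[t] = np.tanh(x[t] @ W + prev @ U)

# attention-style mixing: every token looks at every other token in one shot,
# so the whole thing is a couple of batched matmuls with no step-by-step waiting
scores = (x @ x.T) / np.sqrt(d)                      # all pairwise scores at once
scores -= scores.max(axis=1, keepdims=True)          # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ x
print(h.shape, out.shape)                            # both (T, d)
```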
1/ first of all, @sama posted this cryptic tweet a few days ago.
that tweet contains the name of one of the two new GPT2 models.
can I confirm that it is from OpenAI? no. However, model creators need to work with @lmsysorg to add a model, and it seems strange for the LMSYS team to let someone pretend to be OpenAI