curious/confused about all the async autonomous coding agents out there? it can be tricky at first to figure out how to use 'em. we wrote up a guide
the five biggest things you should know imo 🧵
1/ as a human your time is the most valuable thing. so stop getting stuck trying to figure out which of 10 approaches is the best one. just make agents try all of them for u
it's often hard/impossible to know in advance which approach is best, e.g. with designs
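a rough sketch of the fan-out pattern in python: run_agent() is a made-up stand-in for whatever agent CLI/API you actually use, the point is just launching one agent per candidate approach and comparing the results:

```python
import asyncio

async def run_agent(approach: str) -> str:
    # hypothetical stand-in: kick off one async coding agent on its own branch.
    # swap in your real agent CLI / API call here.
    await asyncio.sleep(0)  # pretend the agent is working
    return f"agent/{approach}: opened a PR"

async def main():
    approaches = ["rewrite-in-sql", "add-cache-layer", "denormalize-schema"]
    # fan out: one agent per candidate approach, all running at once
    results = await asyncio.gather(*(run_agent(a) for a in approaches))
    for r in results:
        print(r)

asyncio.run(main())
```

then you review the N resulting PRs and keep whichever design actually turned out best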
2/ cannot emphasize enough how helpful it is to set up preview deployments, e.g. hook up ur github to @vercel
good async agents can look at these to check their work, and it's also super convenient to just test a preview link instead of checking out the commit locally and running all the commands to spin up localhost
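(aside: "check their work" can be as simple as a smoke test against the preview link. tiny hypothetical sketch, the URL is made up:)

```python
import requests

# hypothetical preview URL: vercel creates one of these per PR/commit
preview_url = "https://my-app-git-agent-branch-myteam.vercel.app"

# the same check works for a human or an agent: hit the preview, not localhost
resp = requests.get(preview_url, timeout=10)
assert resp.status_code == 200, f"preview is down: {resp.status_code}"
print("preview deployment looks healthy")
```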
Nomic just released their newest model, Nomic-Embed-Vision.
It transforms the way we can interact with images. Instead of generating text captions for your images and running semantic search over the captions, you can work directly with image embeddings!
"A picture is worth a thousand words", so any caption is going to be super lossy. By working directly with the original image embeddings, you retain all the important aspects of the image, from style to subject.
We realized Nomic Embed Vision would be perfect for art discovery.
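a minimal sketch of what search over image embeddings looks like: embed_image() is a placeholder (in practice you'd call an image-embedding model like Nomic Embed Vision here), and the random vectors are only there to keep the snippet runnable:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 768  # assumed embedding dimension

def embed_image(path: str) -> np.ndarray:
    # placeholder: in a real pipeline, return the embedding an
    # image-embedding model produces for this image
    return rng.standard_normal(DIM)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# embed the whole art collection once, up front
paths = ["starry_night.jpg", "water_lilies.jpg", "guernica.jpg"]
index = np.stack([normalize(embed_image(p)) for p in paths])

# query with another image; cosine similarity = dot product of unit vectors
query = normalize(embed_image("my_sketch.jpg"))
scores = index @ query
for p, s in sorted(zip(paths, scores), key=lambda t: -t[1]):
    print(f"{s:+.3f}  {p}")
```

no lossy caption in the middle: the query image and the collection live in the same embedding space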
the guy who invented the LSTM, Sepp Hochreiter, just dropped a new LLM architecture!
The major component is a new parallelizable LSTM.
⚠️ one of the major weaknesses of prior LSTMs was their sequential nature (the computation can't be done all at once)
Everything we know about the xLSTM: 👇🧵
1/ Three major weaknesses of LSTMs that make Transformers better:
"Inability to revise storage decisions"
"Limited storage capacities"
"Lack of parallelizability due to memory mixing".
SEE THE GIF if you don't get it. LSTMs are sequential, which basically means you have to go through the green boxes (simplified) one after the other: you need the results from the prior box before you can move on.
Transformers don't do this. They parallelize operations across tokens, which is a really really big deal.
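toy numpy sketch of the difference (heavily simplified, not the real LSTM/xLSTM or attention math):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # sequence length, hidden size
x = rng.standard_normal((T, d))  # token inputs
Wh = rng.standard_normal((d, d)) * 0.1
Wx = rng.standard_normal((d, d)) * 0.1

# LSTM-style recurrence: step t needs h from step t-1, so the loop is serial
h = np.zeros(d)
for t in range(T):               # the "green boxes", one after the other
    h = np.tanh(h @ Wh + x[t] @ Wx)

# attention-style mixing: every token looks at every (earlier) token at once,
# so all T positions can be computed in parallel
scores = x @ x.T / np.sqrt(d)
mask = np.tril(np.ones((T, T)))                # causal mask
scores = np.where(mask == 1, scores, -np.inf)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ x                # all positions in one shot, no serial dependency
```

the for-loop can't be vectorized over t because of the h dependency; the attention block has no such dependency, which is exactly what GPUs love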
1/ first of all, @sama posted this cryptic tweet a few days ago.
that tweet contains the name of one of the two new GPT2 models.
can I confirm that it is from OpenAI? no. However, model creators need to work with @lmsysorg to add a model, and it seems strange for the LMSYS team to let someone pretend to be OpenAI.
how good are the mystery models? 👇🧵
🧵 megathread of speculations on "gpt2-chatbot": tuned for agentic capabilities?
some of my thoughts, some from reddit, some from other tweeters
my early impression is 👇
1/
there's a limit of 8 messages per day, so i didn't get to try it much, but it feels around GPT-4 level. i don't know yet if i'd say it's better... (could be the placebo effect; i think it's too easy to delude yourself)
its voice sounds similar to GPT-4's, but not identical
as for agentic abilities...
2/ look at the screenshots i attached: it seems to be better than GPT-4 at planning out what needs to be done.
for instance, it comes up with specific sites to look at and concrete search queries, while GPT-4 gives a much vaguer answer (go to the top tweet)