Lior⚡
Jun 18 · 2 tweets · 1 min read
GPT-Engineer just hit 12,000 stars on GitHub.

It's an AI agent that writes an entire codebase from a single prompt and learns how you want your code to look.

▸ Asks clarifying questions
▸ Generates technical spec
▸ Writes all necessary code
▸ Easy to add your own reasoning… twitter.com/i/web/status/1…
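The flow above is easy to sketch. Below is an illustrative Python outline of a clarify → spec → code loop, not GPT-Engineer's actual implementation; `chat` is a placeholder for any chat-completion API and the prompt strings are invented for the example.

```python
# Illustrative clarify -> spec -> code loop; not GPT-Engineer's actual source.
from typing import Dict, List

Message = Dict[str, str]

def chat(messages: List[Message]) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

def build_codebase(user_prompt: str) -> str:
    history: List[Message] = [{"role": "user", "content": user_prompt}]

    # 1) Ask clarifying questions until the model has nothing left to ask.
    while True:
        q = chat(history + [{"role": "system",
                             "content": "Ask one clarifying question, or reply DONE."}])
        if q.strip() == "DONE":
            break
        history.append({"role": "assistant", "content": q})
        history.append({"role": "user", "content": input(q + "\n> ")})

    # 2) Turn the clarified requirements into a technical spec.
    spec = chat(history + [{"role": "system",
                            "content": "Write a short technical spec for this project."}])
    history.append({"role": "assistant", "content": spec})

    # 3) Write all necessary code from the spec, file by file.
    return chat(history + [{"role": "system",
                            "content": "Implement the spec. Output every file in full."}])
```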


More from @AlphaSignalAI

Apr 17
Here are the people everyone should follow to keep up with and understand AI:

🧵/8
@DrJimFan - Jim is an AI Scientist and author of the NeurIPS Best Paper: MineDojo.

He has amazing insights on the latest progress in the field.

@karpathy - The single most important person to follow in AI. Karpathy is a founding member of OpenAI and was the Director of AI at Tesla.

He regularly shares YouTube tutorials, weekend projects, and repos that you can interact with.

Apr 5
Big News! Meta just released Segment Anything, a new AI model that can "cut out" any object, in any image/video, with a single click.

The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks.

segment-anything.com
Meta also released SA-1B, the largest segmentation mask dataset to date, with 11M images and over 1B masks.

It is designed for training general-purpose object segmentation models on open-world images.

Demo: segment-anything.com/dataset/index.…

github.com/facebookresear…
The model was decoupled into:

1) a one-time image encoder
2) a lightweight mask decoder that can run in a web browser in just a few milliseconds per prompt.

Paper: ai.facebook.com/research/publi…
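The split shows up directly in how the released code is used: the heavy image encoder runs once per image, then each prompt only touches the small decoder. A minimal usage sketch with the `segment-anything` package (the checkpoint path, image file, and click coordinates are placeholders):

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load SAM with a ViT-H checkpoint downloaded from the repo (path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Expensive step: run the image encoder once and cache the embedding.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Cheap step: the lightweight mask decoder runs per prompt (here, a single click).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) pixel of the click
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,                # return several candidate masks
)
```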
Mar 22
JUST IN: Microsoft integrates GPT-4 into GitHub Copilot, announcing Copilot X

Copilot is evolving to bring chat and voice interfaces, support pull requests, answer questions on docs, and adopt OpenAI’s GPT-4 for a personalized experience.

github.blog/2023-03-22-git…

1/🧵
Copilot X gives you a ChatGPT-like experience in your editor that natively integrates with VS Code and Visual Studio.

A developer can get in-depth analysis and explanations of what code blocks are intended to do, generate unit tests, and even get proposed fixes to bugs.
Copilot for Pull Requests adds AI-powered tags to pull request descriptions through a GitHub app that organization admins and individual repository owners can install.
Mar 16
JUST IN: Microsoft introduces 365 Copilot: a new LLM-based AI copilot for the Microsoft 365 suite: Word, Excel, PowerPoint, Outlook, and Teams.

🧵Here's a summary:
Copilot in Word writes, edits, summarizes, and creates right alongside you. With only a brief prompt, Copilot in Word will create a first draft for you, add content to existing documents, summarize text, and rewrite sections or the entire document to make it more concise.
Copilot in Excel helps analyze and explore data.

You can ask Copilot questions about your data set in natural language: It will reveal correlations, propose what-if scenarios, and suggest new formulas based on your questions.
Mar 15
I just went over the GPT-4 paper to understand more about how it can use images as inputs and was quickly blown away.

GPT-4 can understand physics, charts, diagrams, math, text, pictures, jokes, satire, and memes.

🧵Here are some incredible examples:
Examples shown in the thread: detecting what's unusual in an image, and understanding physics and math diagrams (example images in the original tweets).
Feb 28
Microsoft's new Kosmos-1 is incredible.

It's a new Multimodal Large Language Model (MLLM).

The model can understand images, text, and images with text, and can perform OCR, image captioning, and visual QA.

It can even solve IQ tests.

Paper: arxiv.org/abs/2302.14045
Code: github.com/microsoft/unilm
The team also introduced a Raven IQ test dataset, which diagnoses the nonverbal reasoning capability of MLLMs.

This is an example of KOSMOS-1 solving a visual IQ test (image in the original tweet).
Multimodal chain-of-thought prompting enables KOSMOS-1 to tackle complex question-answering and reasoning tasks.
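Multimodal chain-of-thought prompting means eliciting an intermediate, image-grounded rationale before committing to an answer. A rough two-stage sketch; `generate` is a stand-in for a generic image+text model call, not KOSMOS-1's actual API:

```python
def generate(image, prompt: str) -> str:
    """Placeholder for a multimodal LLM call (image + text in, text out)."""
    raise NotImplementedError

def multimodal_cot(image, question: str) -> str:
    # Stage 1: ask for a free-form rationale grounded in the image.
    rationale = generate(
        image, f"{question}\nLet's think step by step about what the image shows."
    )
    # Stage 2: condition the final answer on that rationale.
    return generate(
        image, f"{question}\nReasoning: {rationale}\nTherefore, the answer is"
    )
```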
