It is a rudimentary version of what will be possible, but it really works and will get better fast. Today we can correctly complete a function from our evaluation set about 37% of the time. It's also just really fun to use, and brings back the early joy of programming for me.
I think Codex gets close to what most of us really want from computers—we say what we want, and they do it.
Programming languages are an artifact of computers not being able to understand us directly; they are the lingua franca humans and computers rely on to understand each other.
I’m not even sure I understand what the word “understand” really means anymore, but AI is on its way to a good enough facsimile of it to allow for a new type of computer interface—we say what we want, and the computer does it, no intermediate steps required.
It will take a long time for the technology to get good enough, but eventually we may forget that we ever used computers any other way. As more things become API-ified, AI systems that write code can easily make a lot of things happen in the world.
This is an example of an important and somewhat counterintuitive trend in AI—cognitive labor is going to change sooner than physical labor.
This will also, I think, be an example of AI rapidly augmenting jobs: programmers in the future will be much more productive than programmers today and able to do things we can barely imagine.
• • •
Almost everyone starts off extrinsically motivated to some degree.
Basic version: for most people, the levels of the video game go money, power (little power, as in managing other people, etc.), status (and proving yourself), impact (real power), and finally ‘self-actualization’, e.g. seeing how good you can be and expressing your curiosity.
All the levels always overlap (most people who do great work were never entirely driven by money, at least not for long, and people on the last level still want more status/impact), but the mix changes a lot over time. The last level is like infinite Tetris: it never stops.
If you want to have the biggest possible impact in tech, I think you should still move to the Bay Area.
The people here, and the network effects caused by that, are worth it.
It's hard to overstate the magic of lots of competent, optimistic people in one place.
The future will certainly be more distributed, but I think that a large fraction of the most important US companies started in the next decade will continue to be within 50 miles of SF.
It's easy to not be in the Bay Area right now, because there's not much to miss out on. As soon as stuff restarts, and the most interesting meetings, dinners, events, and parties are here, I predict FOMO brings a lot of people back fast :)
The expected value of your impact on the world is like a vector.
It is defined by two things: direction and magnitude. That’s it.
Direction is what you choose to work on. Almost no one spends enough time thinking about this. A useful framework for this is to think on a long-but-not-too-long timescale (10-20 years seems to work).
Giving capital to promising people “too early” in their career is a great idea with much further to go, and the power law provides an interesting way to finance it.
YC is a great example. You can imagine taking that further—giving $25k to the smartest and most determined 100,000 people you can find each year to work on whatever they want, in exchange for the right to invest in their next startup. A country could make the economics work.
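A rough back-of-the-envelope sketch of that idea, using only the figures from the tweet above ($25k per person, 100,000 people per year); everything else here is illustrative:

```python
# Back-of-the-envelope cost of funding 100,000 people at $25k each per year.
grant_per_person = 25_000    # dollars, figure from the tweet
people_per_year = 100_000    # people funded each year, figure from the tweet

annual_cost = grant_per_person * people_per_year
print(f"${annual_cost / 1e9:.1f}B per year")  # -> $2.5B per year
```

At roughly $2.5B a year, the program is small relative to national R&D budgets, which is presumably what "a country could make the economics work" is pointing at.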
Giving 10 years of “tenure” to a group of 20 super promising 22-year-old researchers finishing up undergrad is not that expensive relative to the value it would likely create, and there seem to be a bunch of ways to capture a part of it.
Hi Jerome! It's great to get feedback from someone with so much experience deploying AI at scale.
We share your concern about bias and safety in language models, and it's a big part of why we're starting off with a beta and have safety review before apps can go live.
We think it's important that we can do things like turn off applications that are misusing the API, experiment with new toxicity filters (we just introduced a new one that is on by default), etc.
We don't think we could do this if we just open-sourced the model.
We do not (yet) have a service in production for billions of users, and we want to learn from our own and others' experiences before we do. We totally agree with you on the need to be very thoughtful about the potential negative impact companies like ours can have on the world.