Jeremy Howard
Jul 28, 2022
Our biggest launch in years: nbdev2, now boosted with the power of @quarto_pub!

Use @ProjectJupyter to build reliable and delightful software fast. A single notebook creates a python module, tests, @github Actions CI, @pypi/@anacondainc packages, & more
fast.ai/2022/07/28/nbd…
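To make that concrete, here's a sketch of the kind of notebook cells nbdev exports. The `#|` directive syntax is nbdev v2's quarto-style comments; the function itself is a made-up example:

```python
# First code cell of the notebook: names the module this notebook
# exports to, so nbdev writes the code to your_package/core.py
#| default_exp core

# Any cell marked with #| export is copied into that module
#| export
def say_hello(to):
    "Say hello to somebody"
    return f"Hello {to}!"

# Plain (unexported) cells stay in the notebook as docs and tests --
# nbdev runs them in CI, so this line doubles as a unit test
assert say_hello("Jeremy") == "Hello Jeremy!"
```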
What can you create with #nbdev with @quarto_pub? Well, for starters, check out our beautiful new website.

Created with nbdev, of course!

nbdev.fast.ai
That website is generated automatically from the notebooks in this repo.

Take a look around -- all the things you'd hope to see in a high-quality project are there. It's all done for you by nbdev. For instance, see the nice README? Built from a notebook!
github.com/fastai/nbdev/
nbdev v1 is already recommended by experts, and v2 is a big step up again.

"From my point of view it is close to a Pareto improvement over traditional Python library development." Thank you @erikgaas 😀
Here's an example of the beautiful and useful docs that are auto-generated by nbdev+Quarto.
nbdev.fast.ai/merge.html
Here's an example of an exported function in a notebook cell. This is automatically added to the python module, and the documentation you see on the right is auto-generated.
nbdev.fast.ai/merge.html#nbd…
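nbdev can also render parameter tables in the docs from "docments"-style inline comments on the arguments. A hypothetical exported function might look like this (the function and names are illustrative, not from the nbdev source):

```python
#| export
def clean_text(
    s: str,             # The raw input string
    lower: bool = True  # Whether to lowercase the result
) -> str:               # The cleaned string
    "Strip surrounding whitespace from `s`, optionally lowercasing it."
    s = s.strip()
    return s.lower() if lower else s
```

Each inline comment becomes the description of that parameter in the auto-generated documentation page.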
Every time we update a notebook to change the docs, library, or tests, everything is checked by @github Actions automatically.
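The CI setup itself is a small generated workflow file, roughly this shape (a sketch based on nbdev's reusable workflow; the exact action name and trigger list may vary by version):

```yaml
# .github/workflows/test.yaml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: fastai/workflows/nbdev-ci@master
```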
Here's the @pypi pip installer that's auto-generated. See the description? That's created for you from the notebook you use for your documentation homepage (just like the README, and the description for your conda package)
pypi.org/project/nbdev/
I've barely scratched the surface in this brief tweet thread! For much more information, take a look at the blog post authored with @HamelHusain
fast.ai/2022/07/28/nbd…
This launch wouldn't have been possible without some amazing people. I'd especially like to highlight Hamel & @wasimlorgat, who made nbdev2 a far better product than it would have been without them, JJ Allaire @fly_upside_down & the @quarto_pub team, & the @fastdotai community


More from @jeremyphoward

Sep 3
Today @answerdotai is proposing `/llms.txt`. This is a file you can use to tell models where to find LLM-friendly content for your website.

It provides background information, along with links to markdown files providing more detailed information.
answer.ai/posts/2024-09-…
We're providing a website with details of the proposal, & javascript and python parsers. There's also an example of how to incorporate llms.txt into an editor--rather than wade into the emacs vs vim vs vscode wars, we picked ed, the standard text editor
llmstxt.org
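The format itself is just markdown with a fixed skeleton: an H1 with the site name, a blockquote summary, then sections of link lists. A minimal sketch (the project and URLs are placeholders, not a real site):

```markdown
# Example Project

> One-paragraph summary giving an LLM the key context about this site.

## Docs

- [Quick start](https://example.com/quickstart.md): install and first steps
- [API reference](https://example.com/api.md): full function documentation

## Optional

- [Changelog](https://example.com/changelog.md): skip if context is tight
```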
Today websites are not just used to provide information to people, but they are also used to provide information to large language models.

For instance, language models are often used to enhance development environments used by coders.
Aug 27
Something that nearly everyone is sleeping on is the importance of prompt caching.

We've just added support for it to Claudette, so @AnthropicAI caching is now *very* easy to use -- cached tokens are 90% cheaper, and faster!

Docs here: claudette.answer.ai/#prompt-caching
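Under the hood, caching works by marking a prompt block as cacheable. Here's a sketch of the request-body shape for the raw @AnthropicAI API (no live call is made here; `cache_control` is the caching field from Anthropic's docs, and the long context string is a stand-in):

```python
long_context = "... many thousands of tokens of reference material ..."

# Request-body shape for Anthropic prompt caching: the system block is
# marked cacheable, so repeated calls sharing this prefix reuse the
# cache -- cached input tokens are billed at ~10% of the normal price.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": long_context,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [
        {"role": "user", "content": "Summarise the reference material."}
    ],
}
```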
Here's the official API docs with details on pricing:
docs.anthropic.com/en/docs/build-…
BTW Deepseek also provides prompt caching -- and theirs is fully automated, which is pretty cool!
platform.deepseek.com/api-docs/news/…
Jul 29
Announcing FastHTML. A new way to create modern interactive web apps.

Scales down to a 6-line python file; scales up to complex production apps.

Auth, DBs, caching, styling, etc built-in & replaceable and extensible. 1-click deploy to @Railway, @vercel, @huggingface, & more.
To get started, head over to the home page: fastht.ml

The whole site, designed by the @tinloof gang, is itself a running FastHTML app, and includes live code examples running inside that page.
I started FastHTML because during 25+ years of web development, I realized that web programming could be easier & more powerful. I feel that recent trends move away from the power of the web’s foundations, resulting in a fractured ecosystem of over-complex frameworks and tools.
Jul 4
This is disgraceful. And ironic.

@stripe canceled an account used to collect money for a course. The cancellation was due to an AI/ML model failure.

The course was about how to use AI/ML correctly.
In this case @HamelHusain has enough reach on twitter that he got someone to notice and fix the mistake. But that’s not a solution for most people.
As we’ve been harping on about for many many years:

ALL ALGORITHMIC DECISION MAKING MUST HAVE MEANINGFUL HUMAN RECOURSE.
Jun 29
For those that hope (or worry) that LLMs will do breakthrough scientific research, I've got good (or bad) news:

LLMs are particularly, exceedingly, marvellously ill-suited to this task. (if you're a researcher, you'll have noticed this already)

Here's why🧵
Breakthrough research requires either:

1. Going in a totally new and unexpected direction that everyone decided long ago was stupid, or
2. Finding some extraordinary new experimental data that means we have to change our theories

LLMs can't run experiments, so we'll focus on 1
It's not helpful to just say "LLMs can't reason", since clearly they do some things which humans would use reasoning for.

But LLMs are, first and foremost, fuzzy subgraph matching machines (at *massive* scale).
Jun 21
Today @AnthropicAI launched Claude 3.5 Sonnet, the most powerful language model in the world.

And today, we're making it even better, launching Claudette--Claude's BFF!

Claudette makes Claude's awesome features easier & more powerful for Pythonistas.🧵
claudette.answer.ai
With Claudette, you can chat through the API just as easily as you can chat through the web app at claude.ai

But you can also do stuff you can't do in the web app at all, like "prefill" -- forcing Claude to start its response with whatever you want.
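Prefill is easy to picture at the API level: the request simply ends with a partial assistant turn, and the model must continue from exactly that text. A sketch of the message list (plain request data, no API call):

```python
# The final message has role "assistant" but is incomplete: Claude's
# reply is forced to begin with this prefix. Starting with "[" is a
# common trick to coax out a bare JSON list with no preamble.
messages = [
    {"role": "user", "content": "Name three planets, as a JSON list."},
    {"role": "assistant", "content": "["},
]
```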