Alex Tamkin
Dec 7, 2021 · 7 tweets · 3 min read
Love the "data science maturity levels" in @Patterns_CP

Interesting way to contextualize research at a glance (reminds me a bit of @justsaysinmice)

Full list in thread:
1) Concept

Basic principles of a new data science output observed and reported (e.g., statement of principles, dataset, new algorithm, new theoretical concept, theoretical system infrastructure)
2) Proof-of-concept

Data science output has been formulated, implemented, and tested for one domain/problem (e.g., dataset with rich domain-specific metadata, algorithm coded up as software, principles with expanded guidance on how to implement them)
3) Development/pre-production

Data science output has been rolled out/validated across multiple domains/problems
4) Production

Data science output is validated, understood, and regularly used for multiple domains/problems (e.g., operational data-sharing service across institutes/countries, ML algorithm to tag images, shared data infrastructure to manage access to compute/archive resources)
5) Mainstream

Data science output is well understood and (nearly) universally adopted (e.g., the Internet, citation of articles using DOIs)
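If you want to tag outputs with these levels programmatically, here's a minimal sketch as a Python enum. Only the level names and descriptions come from the taxonomy; the class itself is my illustration.

```python
# A minimal sketch of the five maturity levels as a Python enum,
# for tagging research outputs at a glance. Illustrative only.
from enum import IntEnum

class DataScienceMaturityLevel(IntEnum):
    CONCEPT = 1           # basic principles observed and reported
    PROOF_OF_CONCEPT = 2  # implemented and tested for one domain/problem
    DEVELOPMENT = 3       # rolled out/validated across multiple domains
    PRODUCTION = 4        # validated, understood, and regularly used
    MAINSTREAM = 5        # well understood, (nearly) universally adopted

paper_level = DataScienceMaturityLevel.PROOF_OF_CONCEPT
print(paper_level.name, paper_level.value)  # PROOF_OF_CONCEPT 2
```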
More info about the levels + rationale here!
cell.com/patterns/dsml

More from @AlexTamkin

Dec 12
How are AI Assistants being used in the real world?

Our new research shows how to answer this question in a privacy-preserving way, automatically identifying trends in Claude usage across the world.

1/
For example, here are the most common use cases on Claude.ai

2/
And some insights into how Claude use varies across different languages

3/
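For intuition, here's a hedged sketch of one privacy-preserving aggregation pattern in this spirit (not the actual system from the research): reduce conversations to coarse use-case labels, then only report a label once enough distinct users share it. The helper names and the MIN_USERS threshold are my illustrative assumptions.

```python
# Illustrative sketch: aggregate usage into use-case labels and apply a
# k-anonymity-style reporting threshold so no small group is exposed.
from collections import defaultdict

MIN_USERS = 50  # minimum distinct users before a trend is reported (assumed)

def report_use_case_trends(labeled_conversations):
    """labeled_conversations: iterable of (user_id, use_case_label) pairs."""
    users_by_label = defaultdict(set)
    for user_id, label in labeled_conversations:
        users_by_label[label].add(user_id)
    # Surface only use cases common enough to be non-identifying.
    return {label: len(users)
            for label, users in users_by_label.items()
            if len(users) >= MIN_USERS}
```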
Apr 20, 2022
How can we choose examples for a model that induce the intended behavior?

We show how *active learning* can help pretrained models choose good examples—clarifying a user's intended behavior, breaking spurious correlations, and improving robustness!

arxiv.org/abs/2204.08491

1/
A fundamental challenge in ML is *task ambiguity*: when the training data doesn't specify the user's intended behavior for all possible inputs

For example, imagine you have a dataset of red squares and blue circles. How should the model classify blue squares?

2/
Task ambiguity can be hard to anticipate, and it has major implications for a model's safety and reliability when deployed

This is especially relevant for pretrained models that can be trained "few-shot" with only a handful of examples

3/
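As a toy illustration of how active learning can surface ambiguity, here's a sketch of uncertainty sampling on the red-square/blue-circle example. The features, model, and scoring rule are my illustrative assumptions, not the paper's setup (see arxiv.org/abs/2204.08491).

```python
# Toy sketch: color and shape are spuriously correlated in the training
# data, so the model is most uncertain on the inputs that break the
# correlation -- exactly the ones worth asking a user to label.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [is_red, is_square]. Training data has only
# red squares (label 1) and blue circles (label 0).
X_train = np.array([[1, 1], [1, 1], [0, 0], [0, 0]], dtype=float)
y_train = np.array([1, 1, 0, 0])

# Unlabeled pool, including the ambiguous blue square and red circle.
pool = np.array([[1, 1], [0, 0], [0, 1], [1, 0]], dtype=float)

model = LogisticRegression().fit(X_train, y_train)

# Uncertainty sampling: query the pool point whose predicted
# probability is closest to 0.5.
probs = model.predict_proba(pool)[:, 1]
query_idx = int(np.argmin(np.abs(probs - 0.5)))
print("Most ambiguous point to label next:", pool[query_idx])
```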
Feb 19, 2022
One of the reasons I think GPT-J is so cool is that its pretraining data is publicly available

This lets us ask questions that were impossible to answer for LLMs like GPT-3

For example: "did our model actually learn the task or was this example in the training data?"

1/
Case in point, a recent paper looks at few-shot performance on numerical tasks like arithmetic

arxiv.org/abs/2202.07206
by @yasaman_razeghi @rloganiv @nlpmattg @sameer_

2/
The question they ask is simple:

How does the frequency of a term in the training data (e.g. "23") impact performance on problems involving that term (e.g. "What is 23 times 18?")?

3/
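Here's a hedged sketch of the kind of analysis a public pretraining corpus makes possible: count how often each number appears in the corpus, then line that up against few-shot accuracy on problems involving it. The corpus snippet and accuracy figures below are made up for illustration; the real analysis is in arxiv.org/abs/2202.07206.

```python
# Illustrative sketch: term frequency in the pretraining data vs.
# few-shot accuracy on problems involving that term.
import re
from collections import Counter

corpus = "... 23 times 18 is ... the 23 bus ... 7 days, 7 nights ..."
term_counts = Counter(re.findall(r"\d+", corpus))

# Hypothetical few-shot accuracies on arithmetic problems, keyed by operand.
few_shot_accuracy = {"23": 0.61, "18": 0.47, "7": 0.72}

for term, acc in sorted(few_shot_accuracy.items(),
                        key=lambda kv: term_counts[kv[0]], reverse=True):
    print(f"term {term}: corpus count {term_counts[term]}, accuracy {acc:.2f}")
```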
Dec 8, 2021
DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning

SSL is a promising technology, but current methods are field-specific. Can we find general algorithms that can be applied to any domain?

🌐: dabs.stanford.edu
📄: arxiv.org/abs/2111.12062

🧵👇 #NeurIPS2021

1/
Self-supervised learning (SSL) algorithms can drastically reduce the need for labeling by pretraining on unlabeled data

But designing SSL methods is hard and can require lots of domain-specific intuition and trial and error

2/
We designed DABS to drive progress in domain-agnostic SSL

Our benchmark addresses three core modeling components in SSL algorithms:

(1) architectures
(2) pretraining objectives
(3) transfer methods

3/
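For a feel of what a domain-agnostic objective can look like, here's a minimal masked-prediction sketch: embed any input as a generic sequence of vectors, mask some positions, and train a shared transformer to reconstruct them. The architecture and loss are my illustrative assumptions, not the benchmark's exact recipe (see arxiv.org/abs/2111.12062).

```python
# Minimal sketch of a domain-agnostic masked-prediction objective:
# the same encoder works for any domain once inputs are embedded
# as a sequence of vectors.
import torch
import torch.nn as nn

class MaskedPredictor(nn.Module):
    def __init__(self, dim=64, layers=2):
        super().__init__()
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, dim)  # reconstruct masked embeddings

    def forward(self, x, mask):
        # x: (batch, seq, dim) domain-agnostic token/patch embeddings
        # mask: (batch, seq) boolean, True = position was masked out
        h = self.encoder(x.masked_fill(mask.unsqueeze(-1), 0.0))
        return self.head(h)

x = torch.randn(8, 16, 64)               # any domain, pre-embedded
mask = torch.rand(8, 16) < 0.15          # mask ~15% of positions
model = MaskedPredictor()
pred = model(x, mask)
loss = ((pred - x)[mask]).pow(2).mean()  # reconstruct only masked positions
loss.backward()
```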
Feb 25, 2021
A quick thread for PhD admits thinking about potential advisors:

I see a lot of discussion about "hands-on" vs "hands-off" advisors

But I think there are at least 3 underlying dimensions here, each of which is worth considering in its own right:

👇 [THREAD]

1/
1) Directiveness—how much your advisor directs your research, in terms of the problems you work on or day-to-day activities

2/
Low directiveness can mean lots of freedom and the space to think big and chart your own path. However, it can also leave some feeling adrift or unproductive.

3/
Jan 11, 2021
Some takeaways from @openai's impressive recent progress, including GPT-3, CLIP, and DALL·E:

[THREAD]

👇1/
1) The raw power of dataset design.

These models aren't radically new in their architecture or training algorithm

Instead, their impressive quality is largely due to careful training at scale of existing models on large, diverse datasets that OpenAI designed and collected.

2/
Why does diverse data matter? Robustness.

Can't generalize out-of-domain? You might be able to make most things in-domain by training on the internet

But this power comes w/ a price: the internet has some extremely dark corners (and these datasets have been kept private)

3/
