AI research is surprisingly accessible to people from different backgrounds compared to other fields.

Anyone (w/ relevant experience) can contribute to impactful research.

Here are 5 research orgs you can join to contribute to real, open research in deep learning ↓
1. #EleutherAI

EleutherAI may be the most famous AI open research collective. Lots of great work has been released by EleutherAI, such as the Pile dataset, GPT-J, GPT-NeoX-20B, and VQGAN-CLIP.

Link → discord.gg/zBGx3azzUn
2. @LAION

This server focuses on developing new multimodal models, replicating existing ones, and creating datasets to support these efforts. They have released the LAION-400M and LAION-5B datasets and trained their own CLIP models.

Link → discord.com/invite/eq3cAMZ…
3. @openbioml

This server focuses on BioML research projects. It launched just recently and its projects are only getting started, which means it's the perfect time to get involved!

Link → discord.gg/GgDBFP8ZEt
4. @CarperAI

This is a new research organization focused on preference learning and contrastive learning, with various applications from story generation to code generation and even architecture design generation.

Link → discord.com/invite/KgfkCVY…
5. @ml_collective

This is a great community known for its DLCT weekly talk series along with lots of events that are great for sharing research ideas & collaboration.

Members have published in top-tier conferences such as NeurIPS and CVPR.

Link → discord.gg/U2MzJYGS9P
If you like this thread, please like and retweet! 🙏

Hopefully you get the opportunity to participate in open AI research!

Follow me to stay tuned for some of the research I am working on through these research collectives 🙂 → @iScienceLuvr
A correction, the correct handle is actually @laion_ai...
It's at times like these I wish Twitter had an edit button 😅

Keep Current with Tanishq Mathew Abraham


More from @iScienceLuvr

Jun 29
Applying deep learning to pathology is quite challenging due to the sheer size of the slide images (gigapixels!).

A common approach is to divide images into smaller patches, for which deep learning features can be extracted & aggregated to provide a slide-level diagnosis (1/9)
Unfortunately, dividing into small patches limits the context to cellular features, missing out on the various levels of relevant features, like larger-scale tissue organization. (2/9)
Additionally, it is difficult for Transformers to capture long-range dependencies across a slide, because the high number of patches makes the attention computation prohibitively expensive. (3/9)
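The patch-based pipeline described above can be sketched roughly like this (a hypothetical illustration; `extract_features` is a stand-in for a real pretrained encoder, and mean pooling is just one possible aggregation):

```python
def patch_grid(slide_w, slide_h, patch=256):
    """Top-left coordinates of non-overlapping patches covering the slide."""
    return [(x, y)
            for y in range(0, slide_h - patch + 1, patch)
            for x in range(0, slide_w - patch + 1, patch)]

def extract_features(coord):
    # Placeholder: a real pipeline would crop the patch at this coordinate
    # and run it through a pretrained encoder to get an embedding vector.
    x, y = coord
    return [float(x), float(y)]

def slide_embedding(slide_w, slide_h, patch=256):
    """Mean-pool per-patch features into one slide-level representation."""
    feats = [extract_features(c) for c in patch_grid(slide_w, slide_h, patch)]
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]
```

This also shows why attention over patches blows up: a 100,000×100,000-pixel slide cut into 256×256 patches already yields over 150,000 patches.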
Jun 13
You may have seen surreal and absurd AI-generated images like these ones...

These are all generated with an AI tool known as DALL·E mini

Let's talk about the history of #dallemini, and also *how* it works! ↓↓↓🧵
First, let's clarify the different AI tools that people often confuse:

- DALL·E was an @OpenAI-developed AI project from Jan 2021
- DALL·E mini is a community-created project inspired by DALL·E
- DALL·E 2 is another @OpenAI-developed tool released in April (2/16)
DALL·E mini was actually originally developed about a year ago, back in July 2021.

During a programming competition organized by @huggingface (an AI company), @borisdayma & some community folks (including myself!) developed a neural network inspired by DALL·E & studied it (3/16)
May 31
Have you seen #dalle2 and #Imagen and wondered how they work?

Both models utilize diffusion models, a new class of generative models that have overtaken GANs in terms of visual quality.

Here are 10 resources to help you learn about diffusion models ⬇ ⬇ ⬇
1. "What are Diffusion Models?" by @ari_seff
Link →

This 3blue1brown-esque YouTube video is a great introduction to diffusion models!
2. "Introduction to Diffusion Models for Machine Learning" by @r_o_connor
Link → assemblyai.com/blog/diffusion…

This article provides a great deep dive into the theoretical foundations of diffusion models.
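As a minimal sketch of the core idea (my own illustration, not code from any of these resources): the forward process gradually mixes data with Gaussian noise, and in closed form x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, where ᾱ_t is the running product of (1 − β_s). A model is then trained to reverse this noising.

```python
import math
import random

def q_sample(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = 1.0
    for s in range(t + 1):
        alpha_bar *= 1.0 - betas[s]
    eps = [rng.gauss(0.0, 1.0) for _ in x0]
    return [math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * e
            for x, e in zip(x0, eps)]

# A linear noise schedule over T steps (the 1e-4 → 0.02 range follows the
# common DDPM setup). By the last step the signal is almost all noise.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
```

At small t the sample stays close to the data; by t = T−1 the cumulative ᾱ_t is tiny and the sample is essentially pure noise, which is exactly what the reverse (denoising) model learns to undo.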
Apr 27
Awesome and surprising things you can do with Jupyter Notebooks ⬇
1. Write a full-fledged Python library!

You can write all of your code, documentation, & tests with Jupyter Notebooks & nbdev.fast.ai, all while following software best practices and implementing CI/CD!

The fastai deep learning library is written entirely in notebooks!
2. Create a blog!

Platforms like fastpages.fast.ai easily allow you to create blog posts from your Jupyter Notebooks, with the code cells and outputs in your post, and can even be made interactive.

I have my own such blog at tmabraham.github.io/blog
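As a toy illustration of the nbdev workflow (the `#| export` directive syntax is from newer nbdev versions; treat the details as approximate): a notebook cell marked for export becomes library source, and plain asserts in later cells double as the test suite that runs in CI.

```python
#| export
import math

def circle_area(radius: float) -> float:
    "Area of a circle; nbdev turns docstrings like this into docs pages."
    return math.pi * radius ** 2

# A later notebook cell: plain asserts act as unit tests that nbdev
# executes when building and testing the library.
assert abs(circle_area(1.0) - math.pi) < 1e-9
```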
Mar 28
Yet another state-of-the-art method for text-to-image generation, this time from researchers at @MetaAI!

Link: arxiv.org/abs/2203.13131

How does it work? A short thread on this paper ⬇
The model is based on an autoregressive transformer (like DALL·E) combined with a VQGAN, but it utilizes several key tricks to improve both the quality and the controllability of the generations. 2/10
One trick is the use of a segmentation map (referred to as a scene) and a VQGAN for the scene.

As you can see here, this provides more controllability to the generation process. 3/10
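A hypothetical sketch of the token layout this implies (the special-token IDs here are made up): text, scene, and image tokens are concatenated into one sequence, so the transformer predicts image tokens conditioned on both the text and the scene segmentation.

```python
def build_sequence(text_tokens, scene_tokens, image_tokens, bos=0, sep=1):
    """Concatenate modalities into one autoregressive training sequence:
    [BOS] text... [SEP] scene... [SEP] image..."""
    return [bos] + text_tokens + [sep] + scene_tokens + [sep] + image_tokens
```

Because the scene tokens come before the image tokens, a user can supply (or edit) the segmentation map at inference time to steer where objects appear.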
Mar 25
Given that this @kaggle ML competition recently started, I thought it would be a good opportunity to share my approach to competing on Kaggle

A quick thread (1/7) 👇
The first step is exploratory data analysis (EDA). Getting a feel for the data is important to be able to derive insights that can help you. 👨‍💻

Here is my EDA notebook for this competition:
kaggle.com/code/tanlikesm…

(2/7)
After exploring the data, the next step is to make a baseline solution. In this case, I put together a quick pretrained baseline based on @Nils_Reimers's SentenceTransformers:

kaggle.com/code/tanlikesm…

(3/7)
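The core of such a pretrained baseline is simply embedding texts and comparing them. A minimal stand-in sketch: with SentenceTransformers the vectors would come from `model.encode(...)`; the toy vectors below are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy stand-ins for two sentence embeddings; similar sentences should
# map to nearby vectors, giving a zero-training similarity baseline.
a = [0.1, 0.9, 0.2]
b = [0.2, 0.8, 0.1]
score = cosine(a, b)
```

A baseline like this requires no training at all, which makes it a fast sanity check before investing in fine-tuning.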
