Bojan Tunguz
Oct 10 · 4 tweets · 3 min read
Decision-tree-based machine learning models are among the best-performing algorithms in terms of predictive capability, especially on small and heterogeneous datasets.

1/4
They also provide an unparalleled level of interpretability compared to all other non-linear algorithms. However, they are very hard to optimize on von Neumann-architecture machines due to their non-uniform memory access patterns.

2/4
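To see where the non-uniform access comes from, note that each input takes a data-dependent path through the tree, so successive node reads land at unpredictable memory locations. A minimal sketch in Python (the node layout and values here are illustrative, not from the paper):

```python
# Minimal decision-tree inference: each sample takes a data-dependent
# path through the nodes, so memory accesses are hard to predict or prefetch.

# Nodes stored as (feature_index, threshold, left_child, right_child);
# a negative feature_index marks a leaf whose value is in the threshold slot.
NODES = [
    (0, 5.0, 1, 2),      # root: x[0] < 5.0 ?
    (1, 2.5, 3, 4),      # internal: x[1] < 2.5 ?
    (-1, 1.0, -1, -1),   # leaf -> 1.0
    (-1, 0.0, -1, -1),   # leaf -> 0.0
    (-1, 1.0, -1, -1),   # leaf -> 1.0
]

def predict(x):
    i = 0
    while True:
        feat, thr, left, right = NODES[i]
        if feat < 0:            # reached a leaf
            return thr
        # Which node is touched next depends on the data itself:
        i = left if x[feat] < thr else right

print(predict([3.0, 1.0]))  # root -> node 1 -> leaf 3 -> 0.0
print(predict([7.0, 0.0]))  # root -> leaf 2 -> 1.0
```

An analog CAM can instead match an input against all branch conditions in parallel inside the memory array, sidestepping this pointer-chasing entirely.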
In groundbreaking work published in Nature Communications, a team of researchers has shown that analog content-addressable memory (CAM) devices with in-memory computation can dramatically accelerate tree-based model inference, by as much as 10³ over conventional approaches. 3/4
The Nature Communications article can be found here: nature.com/articles/s4146…

#MachineLearning #ML #AI #DataScience #DS #Hardware #Computation #DecisionTrees

4/4


More from @tunguz

Oct 8
This past week I came across another paper that purports to achieve SOTA for neural networks on tabular data. Given the extreme penchant for exaggeration in this community, I have given up on checking most of these claims, but decided to take a look at this particular work.

1/6
I decided to check how XGBoost *really* performs on the datasets used in the paper, and the results were not pretty.

2/6
The main takeaway: for all three datasets used in the paper, the reported performance of XGBoost was wildly inaccurate, and the real performance was much better than their best results.

3/6
Oct 1
This week @NVIDIA open-sourced GET3D, a 3D object generation AI model. GET3D is a generative model of high-quality 3D textured shapes learned from images.

1/4
Trained using only 2D images, GET3D generates 3D shapes with high-fidelity textures and complex geometric details.

2/4
These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing.

3/4
Sep 29
I have just done something really cool - I've managed to *train* XGBoost in the browser, completely within an HTML file! This has been possible thanks to the PyScript project, which allows running Python inside of HTML, similar to how JavaScript works.

trainxgb.com

1/5
The example below is very simple - the script loads the small Iris dataset from sklearn. With a slider you are able to adjust the number of XGBoost trees, and the script will train different XGBoost models accordingly and print out accuracy.

2/5
PyScript is still in a very early stage of development. Getting all the relevant components to work together is still tricky, and there are not many detailed tutorials. Hence, this example is *very* rudimentary. I'll try to make it more powerful and snazzy down the road.

3/5
Sep 20
All right, here is one trick for using XGBoost for *data analysis*.

1/5
First, you create a simple model with XGBoost. It doesn't have to be fancy, or even very accurate; it's just for reference purposes. Use that model to calculate the Shapley values for your training set. Here is an example:

kaggle.com/code/tunguz/tp…

2/5
Next, use those Shapley values for some simple clustering, dimensionality reduction and visualization:

kaggle.com/code/tunguz/tp…

3/5
Sep 19
NVIDIA GTC starts today! There are tons of exciting topics and webinars on offer. This year, again, the whole conference is online and free, so go and register if you have not done so already.

Here are a few special highlight sessions:

1/4
GTC 2022 Keynote - September: lnkd.in/gYNqxsnr

How CUDA Programming Works: lnkd.in/gKmdjZub

Building the Future of Work with AI-powered Digital Humans: lnkd.in/gXJWk6vz

Building Future-Ready Intelligence for Cars: lnkd.in/gJ9BJMGM

2/4
A Deep Dive into RAPIDS for Accelerated Data Science and Data Engineering: lnkd.in/gM7mquwc

A Deep Dive into the Latest HPC Software: lnkd.in/ghXxGmar

Cross-Framework Model Evaluation and Accelerated Training with NVIDIA Merlin: lnkd.in/gXUEdajH

3/4
Aug 5
A very good paper I came across this morning by the @DeepMind researchers. For the past five years Transformers have been one of the most dominant approaches to Deep Learning problems, especially in the #NLP domain.

1/5
However, despite many interesting papers on the topic, and lots of good open code, there has been a noticeable lack of a *formal* definition of what Transformers are, especially at the level of pseudocode.

2/5
This paper aims to rectify that. It provides pseudocode for almost all major Transformer architectures, including training algorithms.

3/5
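For a flavor of the kind of computation such pseudocode pins down, here is single-head scaled dot-product attention in plain numpy (variable names are mine, not the paper's):

```python
# Scaled dot-product attention, the core operation most Transformer
# pseudocode builds on. Q, K, V each have shape (sequence_length, d).
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
out = attention(Q, Q, Q)   # self-attention over a 4-token sequence
print(out.shape)           # (4, 8)
```

The value of the paper is that it spells out exactly this level of detail, consistently, for the full family of architectures rather than one toy operation.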
