Bojan Tunguz
Oct 1 · 4 tweets · 3 min read
This week @NVIDIA open-sourced GET3D, a 3D object generation AI model. GET3D is a generative model of high-quality 3D textured shapes learned from images.

1/4
Trained using only 2D images, GET3D generates 3D shapes with high-fidelity textures and complex geometric details.

2/4
These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing.

3/4

More from @tunguz

Sep 29
I have just done something really cool: I've managed to *train* XGBoost in the browser, completely within an HTML file! This has been possible thanks to the PyScript project, which allows running Python inside HTML, similar to how JavaScript works.

trainxgb.com

1/5
The example below is very simple: the script loads the small Iris dataset from sklearn. With a slider you can adjust the number of XGBoost trees, and the script will train a new XGBoost model accordingly and print out its accuracy.

2/5
PyScript is still in very early stages of development. Getting all the relevant components to work together is still tricky, and there are not many detailed tutorials. Hence, this example is *very* rudimentary. I'll try to make it more powerful and snazzy down the road.

3/5
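
The thread doesn't include the actual markup of trainxgb.com, so the following is only a minimal sketch of the core Python such a page would run. The function name, train/test split, and model parameters are illustrative assumptions, the slider wiring via PyScript tags is omitted, and xgboost/scikit-learn are assumed to be available in the browser's Python runtime (Pyodide ships builds of both).

```python
# Sketch of the core logic; in the actual page this would live inside a
# PyScript tag, with `n_trees` bound to an HTML range slider (wiring omitted).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def train_and_score(n_trees: int) -> float:
    """Train an XGBoost classifier on Iris with `n_trees` trees and return accuracy."""
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )
    model = XGBClassifier(n_estimators=n_trees, max_depth=3, eval_metric="mlogloss")
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)

# In the page, the slider callback would call this and display the result:
print(f"Accuracy with 50 trees: {train_and_score(50):.3f}")
```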
Read 5 tweets
Sep 20
All right, here is one trick for using XGBoost for *data analysis*.

1/5
First, you create a simple model with XGBoost. It doesn't have to be fancy, or even particularly accurate; it's just for reference purposes. Use that model to calculate the Shapley values for your training set. Here is an example:

kaggle.com/code/tunguz/tp…

2/5
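
The linked notebook is truncated above, so here is only a generic sketch of the step described: fit a quick reference model and compute per-row Shapley values with the shap library's TreeExplainer. The dataset and parameters are placeholders, not the notebook's actual choices.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer  # placeholder dataset

# Fit a quick reference model; it doesn't need to be tuned.
X, y = load_breast_cancer(return_X_y=True)
model = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# Compute Shapley values for every row of the training set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # roughly (n_samples, n_features)
print(np.asarray(shap_values).shape)
```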
Next, use those Shapley values for some simple clustering, dimensionality reduction and visualization:

kaggle.com/code/tunguz/tp…

3/5
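
Again as a rough sketch rather than the notebook's actual code: the per-row Shapley matrix can be clustered and projected to 2D for plotting. KMeans and PCA are used here as stand-ins for whatever clustering and dimensionality-reduction methods the notebook uses (t-SNE or UMAP would slot in the same way).

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Stand-in for the (n_samples, n_features) Shapley matrix from the previous step.
shap_values = np.random.default_rng(0).normal(size=(500, 30))

# Cluster the training rows in Shapley space...
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shap_values)

# ...and project to 2D for visualization.
embedding = PCA(n_components=2).fit_transform(shap_values)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=10, cmap="tab10")
plt.title("Training rows clustered by their Shapley values")
plt.show()
```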
Read 5 tweets
Sep 19
NVIDIA GTC starts today! It covers tons of exciting topics and webinars. This year the whole conference is again online and free, so go and register if you have not done so already.

Here are a few special highlight sessions:

1/4
GTC 2022 Keynote - September: lnkd.in/gYNqxsnr

How CUDA Programming Works: lnkd.in/gKmdjZub

Building the Future of Work with AI-powered Digital Humans: lnkd.in/gXJWk6vz

Building Future-Ready Intelligence for Cars: lnkd.in/gJ9BJMGM

2/4
A Deep Dive into RAPIDS for Accelerated Data Science and Data Engineering: lnkd.in/gM7mquwc

A Deep Dive into the Latest HPC Software: lnkd.in/ghXxGmar

Cross-Framework Model Evaluation and Accelerated Training with NVIDIA Merlin: lnkd.in/gXUEdajH

3/4
Read 4 tweets
Aug 5
A very good paper by @DeepMind researchers that I came across this morning. For the past five years, Transformers have been one of the most dominant approaches to Deep Learning problems, especially in the #NLP domain.

1/5
However, despite many interesting papers on the topic, and lots of good open code, there has been a noticeable lack of a *formal* definition of what Transformers are, especially at the level of pseudocode.

2/5
This paper aims to rectify that. It provides pseudocode for almost all major Transformer architectures, including training algorithms.

3/5
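
For a flavor of what that pseudocode formalizes, here is a minimal NumPy sketch of single-head scaled dot-product attention, the core operation those algorithms build on. This is a generic illustration, not the paper's own notation or algorithm listing.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Single-head scaled dot-product attention.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)
    Returns: (n_queries, d_v)
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)  # attention weights sum to 1 per query
    return weights @ V

# Tiny usage example with random data.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 16))
print(attention(Q, K, V).shape)  # (4, 16)
```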
Read 5 tweets
Jul 22
The longer you work with ML algorithms, the more you appreciate what an outsize effect your *data* has on the quality of your models. I've seen that shift on Kaggle over the years, where more and more time is spent on some kind of dataset augmentation.

1/5
There is still only so much you can do there, and unless you are "enterprising" and decide to scrape the competition host's website for their data (yes, this has happened), your legitimate options are rather limited.

2/5
Outside of the Kaggle world, however, things are different. Large computational resources and advanced algorithms still dominate the ML discourse, but those who are paying attention know that neither would be worth much without the huge datasets that are being used.

3/5
Read 5 tweets
Jul 22
It's actually scary how ignorant of tabular data the academics who try to do research on NNs for tabular data are. I think that part of the problem is that almost all of the interesting and relevant tabular data problems are in industry, 1/3
and academics tend to be completely insulated from any kind of practical application of ML/DS.

If you are an academic who is interested in doing research on tabular data,

2/3
I would BEG YOU, FOR THE LOVE OF EVERYTHING THAT IS DECENT, PLEASE, PLEASE PLEASE GET OUT OF YOUR IVORY TOWER AND TRY TO LEARN WHAT KINDS OF PROBLEMS ACTUAL DATA SCIENTISTS DEAL WITH IN THEIR PROFESSIONAL LIVES!!!

3/3
Read 4 tweets
