Encouraging! #AppleM1 Silicon (MBA) smokes my 2017 MBP15" i7 on the #rstats #tidyverse tidymodels hotel example, random forests (last fit, 100 trees). Experimental ARM R build = extra speedup. Thanks @fxcoudert for the gfortran build & @juliasilge @topepos + team for the nice API + docs.
And it’s wonderful to see that essential R packages are working on the M1 platform.
Another implication might be that 4 cores are a good default for parallel processing with this configuration. The original tidymodels example would select 8 cores here. tidymodels.org/start/case-stu…
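To make the core-count point concrete, a small Python sketch (a hypothetical worker cap, not the gist's code): `os.cpu_count()` reports logical CPUs, which is 8 on an M1 (4 performance + 4 efficiency cores), so capping parallel workers at 4 matches the suggestion above.

```python
import os

# os.cpu_count() counts logical CPUs; on an M1 that is 8
# (4 performance + 4 efficiency cores).
n_logical = os.cpu_count() or 8  # fall back to 8 if undetectable

# Cap workers at 4, per the observation above that 4 cores
# are a good default on this configuration.
n_workers = min(4, n_logical)
print(f"logical CPUs: {n_logical}, workers used: {n_workers}")
```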
If you are interested in the benchmark, here is the code I used. Computing: gist.github.com/dengemann/a9e4…
Plotting:
gist.github.com/dengemann/4759…
Update including i9 MBP 16" results; x-axis jitter removed for clarity
Update: using #Python I find comparable results with the random forests from scikit-learn on the same dataset. #AppleM1 is systematically faster + the native ARM build makes a difference. Interestingly, the Intel Mac still seems faster at basic linear algebra (next tweet).
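For flavour, a minimal sketch of such a timing run with scikit-learn's random forest. The thread benchmarks the hotel bookings data; a synthetic dataset stands in here, and the sizes and `n_jobs=4` are illustrative assumptions.

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the hotel bookings data (an assumption).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# 100 trees as in the thread; 4 workers per the core-count note above.
clf = RandomForestClassifier(n_estimators=100, n_jobs=4, random_state=0)

t0 = time.perf_counter()
clf.fit(X, y)
elapsed = time.perf_counter() - t0
print(f"fit time: {elapsed:.2f} s")
```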
Benchmarking matrix multiplication, SVD, and eigen decomposition with #NumPy, the i5 Intel from 2017 was fastest (gist.github.com/dengemann/03a0…). Obviously this is not what matters for random forests. Note that M1 native (yellow) plays in the same league as the i5, and these are only the first builds.
Correction: it’s of course an i7, not an i5 – my mistake. Still the same machine as in the previous benchmarks.
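A rough sketch of the kind of linear-algebra micro-benchmark described above; the matrix size and repeat count are illustrative assumptions, not the gist's values.

```python
import time

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))  # size chosen for illustration

def bench(fn, n_rep=3):
    # Report the best of n_rep runs to reduce timer noise.
    times = []
    for _ in range(n_rep):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times)

# The three operations benchmarked in the thread.
results = {
    "matmul": bench(lambda: A @ A),
    "svd": bench(lambda: np.linalg.svd(A)),
    "eig": bench(lambda: np.linalg.eig(A)),
}
for name, t in results.items():
    print(f"{name}: {t:.3f} s")
```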
As anticipated, the story with #NumPy on #AppleM1 is not quite over. With NumPy optimised for ARM via #Apple #TensorFlow M1 can beat the i7 on matrix multiplication and SVD! Excited to see what's yet to come. cc @numpy_team @PyData
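The BLAS/LAPACK backend is what these ARM-optimised NumPy builds change; one way to inspect which backend a given build links against:

```python
import numpy as np

# Print the BLAS/LAPACK configuration this NumPy build was compiled
# against; on ARM-optimised builds the accelerated backend shows up here.
np.show_config()
```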

