Drum-roll, please: @OllyPerksHPC and I just announced the winners of the @awscloud/@arm Summer Cloud HPC #ahugHackathon hosted by @ArmHPCUserGroup. 1/17
The judges were really impressed by the incredible effort from across the community: participants got 31 codes working on Graviton2-based #HPC clusters — codes that previously only built and ran on other, mostly x86-based, lifeforms. 2/17
This lifts the water level for the whole Arm-HPC community. 3/17
The teams contributed loads of updates to @spackpm recipes, and built @reframehpc scripts so we can keep running these codes in CI/CD to make sure they only get better over time. 4/17
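For the curious, a ReFrame-style regression check mostly boils down to: build and run a code, then pattern-match its output for a marker that only a correct run prints. A toy Python sketch of that sanity-check idea (the command, marker and function names here are made up for illustration — not taken from ReFrame's API or from any of the 31 codes):

```python
import re
import subprocess

def passes_sanity(output: str, pattern: str) -> bool:
    # The core of a regression "sanity check": did the run
    # print the marker a correct run always prints?
    return re.search(pattern, output) is not None

# Stand-in for launching a real solver; here we just echo
# the kind of final-status line a CI check would grep for.
run = subprocess.run(
    ["echo", "Solver converged: residual=1.2e-09"],
    capture_output=True, text=True,
)
print(passes_sanity(run.stdout, r"Solver converged"))  # prints: True
```

In the real setup, a tool like @reframehpc wraps this pattern with the build step, the Slurm launch, and per-system configuration, so every commit gets the same pass/fail signal automatically.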
Automation will help us build a map of the performance space so we don't need to navigate by trial and error. That's a pretty new approach for a novel architecture in HPC. 5/17
The teams near the top of the table were all insanely good. But since A-HUG said there would be a prize (Apple M1 MacBook Pros hpc.news/macbook for the winning team), _there can be only one_ ... 6/17
The first prize in the AWS/Arm Summer Cloud #HPC #ahugHackathon goes to team "DogeCoinToTheMoon" - a group of MSc students from @EdinburghUni. The judges were super impressed not only by their depth of analysis, but by their breadth of coverage, too. 7/17
Check out just *one* of DogeCoin's many performance reports: hpc.news/kripke 8/17
However: the sheer strength of the SECOND PLACE team gave the panel pause. Team Wolfpack from @NCState in Raleigh dove deep in their optimization studies, digging into the codes themselves to produce some huge performance improvements. 9/17
Check this out: hpc.news/hpcdock 10/17
Finally, Team Iman - a single individual from @nyuniversity, working all hours of the day and night from his apartment in Brooklyn - quite incredibly beat ALL the other teams to take THIRD PLACE. It's a pretty heroic achievement. 11/17
Not only that, but he contributed thoroughly to check-in discussions, *set the bar* for others with his Laghos study (hpc.news/laghos) and helped other people solve problems. He also ate the best pizza, because it was real NYC pizza. 12/17
As sponsors, @arm and @awscloud dug into our piggy banks so @ArmHPCUserGroup could award some extra prizes for the 2nd- and 3rd-place teams, too. 13/17
They'll all receive some Apple M1-based iPad Pros (apple.com/ipad-pro/), with our thanks, and the appreciation of the whole community. 14/17
This event contributed a lot of new working #HPC codes to the whole community. We're so thankful that we were able to be part of it, and to support it. 15/17
This would not have been possible without our army of volunteers who formed the mentor team for the #ahughackathon - they helped, and inspired, all the teams to learn more, go further and try harder. 16/17
There's loads more to come from this event, so stay tuned over the coming weeks for more results, outcomes and outputs from this incredible community effort. We all had a barrel of fun, too. 17/17

Thread by Brendan Bouffler☁️ 🏳️‍🌈 (@boofla)


