👩‍💻 Paige Bailey
✨ AI should be about empowering humans, building understanding, and making dreams realities. 👩‍💻 GenAI Developer Experience @Google ex-@GoogleDeepMind @GitHub
Dec 19 6 tweets 5 min read
📏🏆 Overjoyed to announce the winners of the Gemini Long Context Competition on @Kaggle!

We were blown away by the creativity, ingenuity, and technical skill demonstrated in folks' submissions.

It's clear that the Kaggle community is pushing the boundaries of what's possible with Gemini’s long context capabilities – especially for video understanding and processing, analysis of large codebases, and large-scale text extraction and summarization.

🧵 Winners below:

FrameCut: A Natural Language Video Editor

This project truly impressed us with its comprehensive approach to video editing through natural language, using Gemini’s multimodal longer context capabilities.

FrameCut.ai not only demonstrated a strong technical foundation, but also presented a compelling vision for a complete product in just a single month – something I’d definitely use to edit videos!

youtube.com/watch?v=D2i8Au…
kaggle.com/code/kyle1373/…
Aug 3 7 tweets 4 min read
👋 Inspired by recent conversations with friends, and based on a long history of automating away every job I've ever had (from data processing to PM work):

Am sharing a few ways that I'm using Gemini 1.5 and 2M+ tokens of context in @GoogleAIStudio to automate the boring parts of DevRel and UXR!

Reminder that you can stuff quite a bit into 2M+ tokens (hours of video, years of emails, full codebases, etc.) and that, over time, we expect 2M tokens ➡️ infinity, cost ➡️ $0, latency ➡️ near instant.

(1) Uploading a dated codebase (in this case, Flax 0.7.5), and a newer version of the codebase (Flax 0.8.5), then analyzing changes.

You can generate documentation changes based on the differences in code; blog posts or release notes describing the code changes; and - a favorite - update old tutorials based on the new versions of the APIs.
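To make that concrete, here's a minimal sketch of the same workflow via the google-generativeai Python SDK (the AI Studio flow itself is point-and-click); the local checkout paths, bundling helper, and prompt wording are my own illustrative assumptions, not the exact setup from the thread:

```python
import pathlib
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def bundle(repo_dir: str) -> str:
    """Concatenate a repo's Python sources into one labeled text blob."""
    parts = []
    for path in sorted(pathlib.Path(repo_dir).rglob("*.py")):
        parts.append(f"### FILE: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

# Hypothetical local checkouts of the two Flax versions.
old_code = bundle("flax-0.7.5")
new_code = bundle("flax-0.8.5")

model = genai.GenerativeModel("gemini-1.5-pro")  # long-context model
response = model.generate_content(
    "Here are two versions of the same codebase.\n\n"
    f"=== OLD (0.7.5) ===\n{old_code}\n\n"
    f"=== NEW (0.8.5) ===\n{new_code}\n\n"
    "List the public API changes, then draft release notes and the "
    "documentation updates those changes require."
)
print(response.text)
```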
Apr 29, 2023 4 tweets 2 min read
✨🤔 Wondering how far a person can get with "make this code faster", "make this code more readable and reusable", "refactor this code to be more concise" in the prompt.

👇🏻Am also impressed Bard deduced that I was attempting to implement a multiplication table!

✨👩‍💻 Jazzed to imagine a future where we all have friendly, competent technical assistants that cheerfully answer n00b questions about chemistry, physics, math, and programming.

📝 Citing sources would be a strong next step, just as we cite potentially recited code in snippets!
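For a sense of what those prompts can do, here's a hypothetical before/after on the multiplication-table example (the "before" is a deliberately verbose strawman I wrote for illustration, not the actual code from the screenshot):

```python
# Before: manual nested loops with string concatenation.
def times_table_verbose(n):
    rows = []
    for i in range(1, n + 1):
        row = ""
        for j in range(1, n + 1):
            row = row + str(i * j)
            if j < n:
                row = row + "\t"
        rows.append(row)
    return "\n".join(rows)

# After a "make this more concise" pass: same table via comprehensions.
def times_table(n: int) -> str:
    return "\n".join(
        "\t".join(str(i * j) for j in range(1, n + 1))
        for i in range(1, n + 1)
    )

assert times_table_verbose(4) == times_table(4)  # behavior preserved
print(times_table(4))
```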
Dec 1, 2022 34 tweets 8 min read
👩‍💻 If this is what ChatGPT is like (a variant of InstructGPT), then GPT-4 is going to be *bonkers*.

👇🏻A thread of my favorite examples of ChatGPT, for source code-related tasks:

Nov 14, 2022 7 tweets 3 min read
📊 Is anyone else *super* dissatisfied with the tech industry's preferred/tracked open-source metrics?

@github stars; pip install or download counts; @-mentions or tags on social media: all of these stats can, and will, be gamed. We can do much better!

👇🏻Here are some ideas:

(1) Projects listing a particular repo as a dependency.

This can be easily tracked via GitHub's dependency graph, or by scraping which Dockerfiles, conda environment YAMLs, etc. reference a library or framework.
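A rough sketch of the scraping half of that idea, assuming you already have a local corpus of Dockerfiles and conda YAMLs to search (the library name and directory are placeholders):

```python
import pathlib
import re

LIBRARY = "flax"  # placeholder: the library whose dependents we want
# Match "pip install ... flax" in Dockerfiles, or a "- flax" entry
# in conda environment YAMLs.
PATTERN = re.compile(
    rf"(pip install[^\n]*\b{LIBRARY}\b|^\s*-\s*{LIBRARY}\b)",
    re.MULTILINE,
)

def dependents(corpus_dir: str) -> list[pathlib.Path]:
    """Return every Dockerfile or conda YAML that references LIBRARY."""
    hits = []
    for path in pathlib.Path(corpus_dir).rglob("*"):
        if not path.is_file():
            continue
        if path.name == "Dockerfile" or path.suffix in {".yml", ".yaml"}:
            if PATTERN.search(path.read_text(errors="ignore")):
                hits.append(path)
    return hits

print(len(dependents("scraped-repos/")))  # dependent-project count
```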
May 26, 2022 9 tweets 4 min read
🤖 Reinforcement learning in production is a very nascent space, but a fast-growing and multi-faceted one (everything from game dev to operations research)!

👇To showcase this, am compiling a list of projects that are using @raydistributed and RLlib to enable their experiments:

(1) 👾 Game development

Everything from multi-agent reinforcement learning; to game balancing and boss optimization; and (even sometimes outside of the realm of RL, but still powered by Ray): in-app game recommendations.
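For a taste of what RLlib usage looks like, here's a minimal training-loop sketch on the Ray 2.x API (exact method names vary across Ray versions, and CartPole stands in for a real game environment):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Build a PPO trainer; Ray parallelizes experience collection
# across rollout workers automatically.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .rollouts(num_rollout_workers=2)
)
algo = config.build()

for i in range(5):
    result = algo.train()
    print(f"iter {i}: mean reward = {result['episode_reward_mean']:.1f}")
```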

Mar 23, 2022 9 tweets 2 min read
The longer I work on open-source ML tools, the more convinced I become of the value of decoupling libraries.

Crafting simple, delightful, and composable user-facing APIs is *endlessly difficult*; you shouldn't also need a PhD in distributed systems to make those APIs scale. Library authors should be able to focus on building concise, extensible features for their users that help domain experts go from having an idea to realizing it, as quickly as possible.

Asking those authors to worry about hardware, or data / model parallelism, is unreasonable.
Mar 22, 2022 4 tweets 4 min read
Would ❤️ to see @Gradio support for @RayDistributed Serve!

🙌 Serve enables model composition (stitching multiple models together, in an inference pipeline); auto-scaling; and the integration of arbitrary Python business logic, based on inputs / outputs:

github.com/ray-project/ra…

What might models + business logic stitched together in a pipeline look like? (A code sketch follows the list below.)

🛰Cycle through 10K+ satellite images
☁️Remove all images w/ cloud cover
🌊If a satellite image shows a sediment plume:
📐Estimate the size/volume of the plume
🌧Log volume, lat/long, and recent rainfall
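Here's the promised sketch: one hedged guess, using Ray Serve's 2.x deployment-composition API, at how that pipeline might be wired up. The model classes are stubs; only the composition + business-logic pattern is the point.

```python
from ray import serve

@serve.deployment
class CloudFilter:
    def __call__(self, image: bytes) -> bool:
        return False  # stub: a real model would score cloud cover

@serve.deployment
class PlumeEstimator:
    def __call__(self, image: bytes) -> float:
        return 0.0  # stub: a real model would estimate plume volume

@serve.deployment
class Pipeline:
    def __init__(self, cloud_filter, plume_estimator):
        self.cloud_filter = cloud_filter
        self.plume_estimator = plume_estimator

    async def __call__(self, image: bytes):
        # Arbitrary Python business logic between the model calls:
        if await self.cloud_filter.remote(image):
            return None  # drop images with cloud cover
        volume = await self.plume_estimator.remote(image)
        return {"plume_volume": volume}  # would also log lat/long, rainfall

# Compose the deployments into one graph and serve it; each stage
# can auto-scale independently.
app = Pipeline.bind(CloudFilter.bind(), PlumeEstimator.bind())
handle = serve.run(app)
```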
Feb 26, 2022 8 tweets 4 min read
🧵 Are you interested in recreating a similar experience in VS @Code, using open-source extensions and tools?

👇 Thread below:

📝 For docstring & source code generation: check out @HuggingFace's open-source alternative to Copilot, called Code Clippy, which is powered by GPT-J:

github.com/CodedotAl/gpt-…

(And for additional features like source code explanation, check out Copilot!)

Feb 11, 2021 5 tweets 2 min read
Your company and your projects can get considerable value out of traditional machine learning methods, *without* giant language models or massively-scaled architectures —

just like you can still exercise, or hike, or rock climb, even if you aren't prepping to conquer Everest.

Also: traditional models are generally cheaper to implement; more straightforward to explain and understand; and easier to maintain.

Giant models are exciting, from a research perspective; & can unlock capabilities. But you don't *have* to use them, if a smaller hammer works. 🤷‍♀️
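To put numbers on "smaller hammer": a classic scikit-learn model is a few CPU-only lines, and its coefficients map directly to input features. (Purely illustrative, not from the original thread.)

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A traditional, interpretable model: trains in seconds on a laptop.
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```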
Nov 5, 2020 5 tweets 2 min read
"Software experts are—of necessity— comfortable with high cognitive friction. They pride themselves on their ability to work in spite of its adversity.

Normal humans, who are new users of these products, lack the expertise
to judge whether the cognitive friction is avoidable." "High cognitive friction polarizes people: it either makes them
feel frustrated & stupid for failing, or giddy with power at overcoming the extreme difficulty.

These people either adopt cognitive friction as a lifestyle, or
they go underground and accept it as a necessary evil."
Oct 31, 2020 6 tweets 3 min read
TIL that Claude Shannon's revelation that 0s and 1s could convey information was a *master's thesis*, and that he was only twenty-one. 😲

"He gave engineers the conceptual tools to digitize information and send it flawlessly (or, to be precise, with an arbitrarily small amount of error) -- a result considered hopelessly utopian up until the moment Shannon proved it was not."
Sep 4, 2020 5 tweets 3 min read
"I get taught life lessons all the time. Probably the most important one is that people will tell you [that] you can’t do something, and you have to ignore them because you can."

- Alan Cooper, @ComputerHistory Oral History (2017)

archive.computerhistory.org/resources/acce…

"Are you working to become a good ancestor -- to make something truly interesting -- or are you just working to make money?

Because you can make money by burning villages down and rebuilding them; that’s a good way to make money, but it’s not a good way to be a good ancestor."
Jul 10, 2020 6 tweets 1 min read
A thread:

College had always been very grad-school-focused, for me: do the NSF REUs, do the industry internships, take the GRE, get a PhD, be lady Carl Sagan.

But halfway through senior year, my mom got very sick (heart issues); and I needed to get a big-girl-job to support us.

Role: developing geophysics plug-ins & geoprocessing scripts at Chevron.

That job isn't something that I would have selected, by a long mile—but it let me experiment with machine learning, with data science, and with tools like Spark, on massive amounts of accelerators/compute.
Sep 2, 2019 10 tweets 18 min read
Hey, internet: just in case you're not one of those engineers who hears "Hey, compilers!" and comes running delightedly, here's a thread on why I think #MLIR is going to transform the machine learning industry:

(1/n)

for more technical details, ref: ai.google/research/pubs/…

We're seeing an explosion of domain-specific accelerated hardware (yay!).

@GoogleAI's TPUs are an example of this, sure; but you also have @Apple's specialized chips for iPhones, @BaiduResearch's Kunlun, and literally *thousands* of others.

The way that you program for
(2/n)
Dec 31, 2018 5 tweets 3 min read
had a really odd dream that I co-founded a startup

pitch: applying transfer learning concepts to education

(ex: if you're an engineer who is adept at natural language processing, what additional machine learning concepts would you need to master to solve computer vision tasks?)

🧠 we used @KhanAcademy data to train the models (zero idea how imaginary-startup got access to it, but 🤷‍♀️)

the dream ended with our acquisition by @Coursera, who eventually seized complete control of the undergraduate (101- and 201-level) and enterprise education markets 💪
Nov 30, 2018 32 tweets 42 min read
✨💡 This is an ace idea from @sarah_edo! 💕

👩‍💻 Be on the lookout for a @TensorFlow Advent Calendar tomorrow, as well, highlighting meaningful, high-impact projects and papers from our community. If you'd like for yours to be considered, please shoot me an @-mention!

#TFadvent begins today! 😄

For our first project, I'd like to highlight this accessibility example from @shekitup that uses @TensorFlowJS to interpret sign language—and then translates those signs into input that can be used by a home assistant! 🗣️✨

medium.com/tensorflow/get…
Nov 26, 2018 7 tweets 5 min read
Noogler training doesn't start until tomorrow (😭), so have spent the day derping around @Stanford!

(I think that this was the first building I ever wandered into, back in July 2010 - was presenting a poster at the Lunar Science Forum, and had 30min to dash on/off campus. 🤗💕)

👩‍🎓HOW DO YOU DO, FELLOW KIDS #ClassCrashing
Nov 22, 2018 40 tweets 39 min read
✨🧠 The ecosystem that has grown up around @TensorFlow in the last few years blows my mind. There's just so much functionality, compared to some of the other, newer frameworks.

👉Consider this an ever-expanding thread for me to take notes + wrap my brain around products. Ready?

1) @TensorFlow Extended (TFX)

It's no secret that I 💕 #TFX and all of its tooling for deploying machine learning models into production. If you care about keeping your models up-to-date and monitoring them, you should check out the product + its paper.

tensorflow.org/tfx/?hl=zh-cn
Nov 18, 2018 7 tweets 4 min read
a friend asked me to play with @meitu_kr (app) because she wants me to explain what filters are applied / if any are deep learning related

this retouching is CRAZY 😳

📃: Facial skin beautification via sparse representation over learned layer dictionary ieeexplore.ieee.org/document/77275…

✨💄The company has an entire research team dead-focused on "beautifying" images/video (MTLab: Meitu Imaging and Vision Lab), and a slew of academic publications: mtlab.meitu.com/en/article

👉 It also looks like they'll be releasing an API / SDK soon: mtlab.meitu.com/en/apply
Nov 7, 2018 4 tweets 5 min read
👋 Hi, friends! A couple of folks DM-ed asking about the difference between @GoogleColab and @GoogleCloud #DataLab.

📒 Both services are built with @ProjectJupyter, and look kinda similar; both are useful for data exploration! But that is the extent of the overlap. (1/n)

✨📒 @GoogleColab doesn’t require any setup or other @Google products to be used; though notebooks are stored on @GoogleDrive.

It’s intended primarily for interactive use, which means some long-running background processes may be stopped. It currently only supports Python. (2/n)