What’s new in Android Studio? A lot of things, some of which make use of generative AI to improve your workflow. Here’s what got announced at #GoogleIO 🧵
Android Studio Hedgehog (the latest Canary version, and no, its logo isn’t Sonic) adds a new AI-powered conversational feature called Studio Bot, better tools to develop for multiple form factors, and new insights, debugging, and testing solutions.
Studio Bot is an AI-powered conversational experience that leverages Codey, Google’s foundation model for coding, to help you generate code for your app. You can also ask it questions about Android development or to help you fix errors in your code.
Google cautions that Studio Bot is in its very early stages, so expect it to make mistakes. That being said, it’ll hopefully continue to improve through your use and feedback. Studio Bot is currently only available to Android developers in the U.S.
If you’re worried about your IP, Google says only the chat dialogue between yourself and Studio Bot is shared with the company — not your source code. You can learn more about Studio Bot here: developer.android.com/studio/preview…
You can start preparing your app to take advantage of the expanded screen sizes and functionality of new foldables and tablets launching later this year (Pixel Fold and Pixel Tablet) by creating AVDs using these new profiles in Android Studio Hedgehog.
If you’re looking to make an app for the new version of Wear OS — Wear OS 4 based on Android 13 — you can test the new platform features through the new Wear OS 4 emulator. developer.android.com/training/weara…
If you want to test your app on a physical Pixel device and don’t have one, you can use Android Device Streaming to stream a remote physical Pixel device directly within Android Studio. An early access program will launch later this year. developer.android.com/studio/preview…
The Espresso Device API will help you write tests that perform synchronous configuration changes when testing on AVDs running API level 24 and higher. This will help you test your app across configuration changes like rotating or folding a device. developer.android.com/studio/preview…
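To give a feel for the API, here's a minimal sketch of an instrumentation test using the Espresso Device API. It assumes the `androidx.test.espresso.device` artifact is on your `androidTestImplementation` classpath, and it has to run on a device or AVD (API 24+), so treat the class and assertion placeholders as illustrative only.

```kotlin
import androidx.test.espresso.device.EspressoDevice.Companion.onDevice
import androidx.test.espresso.device.action.ScreenOrientation
import org.junit.Test

class ConfigurationChangeTest {

    @Test
    fun rotateToLandscape_uiSurvivesConfigChange() {
        // Synchronously rotates the test device and waits for the
        // configuration change to settle before the test continues.
        onDevice().setScreenOrientation(ScreenOrientation.LANDSCAPE)
        // ...assert on your UI state here...
    }

    @Test
    fun foldToTabletop_uiAdapts() {
        // On a foldable AVD, posture changes are synchronous actions too.
        onDevice().setTabletopMode()
        // ...assert on your UI state here...
    }
}
```

Because the actions are synchronous, you don't need manual waits or idling hacks around rotation/fold transitions the way you would with raw `uiautomator` shell commands.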
Android Studio Hedgehog will let you view important crash reports from Android vitals, building on the App Quality Insights feature first introduced in Android Studio Electric Eel.
A new Power Profiler feature will show power consumption data segmented by each sub-system, such as Camera, GPS, etc. This data is available when recording a System Trace via the Profiler so you can visually correlate power consumption with what’s happening in your app.
The Device Explorer (previously Device File Explorer) now includes info about debuggable processes running on connected devices, accessed through the new Processes tab. Here you can select a process and kill it, force stop it, or attach the debugger.
Gradle Managed Devices can now target real, physical devices running in Firebase Test Lab, so you can more easily scale automated tests in your CI infrastructure across the large number of devices available through FTL. This is available with Android Gradle Plugin 8.2.
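As a rough sketch of the wiring, here's what the module-level `build.gradle.kts` looks like with the Firebase Test Lab Gradle plugin that backs this feature. The device name, API level, and credentials path below are placeholders — check the FTL device catalog for real values.

```kotlin
// Module-level build.gradle.kts — requires AGP 8.2+ and the
// com.google.firebase.testlab plugin (sketch; names are placeholders).
plugins {
    id("com.google.firebase.testlab")
}

firebaseTestLab {
    // Service account used to authenticate against Firebase Test Lab.
    serviceAccountCredentials.set(file("ftl-credentials.json"))

    managedDevices {
        // Defines an FTL-backed managed device named "ftlPixel2".
        create("ftlPixel2") {
            device = "Pixel2"   // FTL device model ID (placeholder)
            apiLevel = 30       // OS version to test against
        }
    }
}
```

You'd then run tests with something like `./gradlew ftlPixel2DebugAndroidTest`, and Gradle dispatches them to the physical device in Test Lab just as it would to a local managed AVD.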
Android Studio Hedgehog includes IntelliJ 2023.1, the release notes of which are available here: jetbrains.com/idea/whatsnew/… The New UI has been improved with a new Compact Mode.
There are other changes, many of which were already announced/are already available, such as Live Edit for Compose UI, default Kotlin DSL in Gradle build scripts, automatic per-app language support, Android SDK Upgrade Assistant, etc. See Google’s blog post for the full details!
• • •
"I think Android 17 is moving from an operating system to an intelligent system"
@ssamat recently talked to @AndroidAuth about many things, but what he said about Android 17 was really interesting!
I know some will read this and go, "oh great, more AI!" but hear me out 🧵
Time is our most precious commodity. Time spent on repetitive, manual tasks (what Sameer calls 'digital laundry') is time you aren't spending with friends or family, or on other stuff you enjoy.
So, what can we do? We're making Android more helpful by making it more agentic!
Gemini can already take actions within many apps like Keep, YouTube, Maps, etc.
Here's a recent example of how I leveraged Gemini's Keep integration:
Before moving to the Bay Area, I wanted to learn some of my mom's best recipes, so I recorded us cooking together and then asked Gemini to create individual Keep notes detailing the ingredients and steps to make each dish! What would've taken me hours (watching the videos, noting each ingredient/step, and jotting them down) took just minutes thanks to AI.
Android finally has a pop-up dialog for the Bluetooth Quick Setting tile! I previously spotted this in the QPR2 Beta 2 release, but it's now enabled by default in QPR2 Beta 3!
(Image credits: @Nail_Sadykov.)
Android 14 QPR2 Beta 3 is preparing to add a new "make all apps dark" toggle under Settings > Accessibility > Color and motion. The option inverts the colors of apps that don't support a dark theme — the same thing the existing developer option does, just surfaced outside of developer options.
In the "Internet" panel, there's now a "Share Wi-Fi" button in the bottom left. Tapping this button opens the existing Share Wi-Fi page where a QR code is shown that others can scan to join your network, or you can tap "Nearby" to share your Wi-Fi credentials via Nearby Share.
Android 14 is rolling out for the Pixel 4a 5G and later starting TODAY! The update brings more customization, control, and accessibility features. If you haven’t been on the beta and want to know what new features are in Android 14, here’s a summary of the highlights:
Google says that several of the large-screen features originally planned for Android 14 were made available as updates to Android 13. This is because the Pixel Tablet and Pixel Fold shipped with Android 13 QPR3, so these features had to be ready for their launch.
Speaking of the Pixel Fold, Google Translate will soon be adding a Dual Screen Interpreter Mode, which will show your own words on one screen and your translated words on the other screen.