OK, the #googlecloudnext announcements are coming out, so let's dig into one in particular.
Cloud Workstations are in public preview! This isn't your parents' virtual desktop. It's a fast, flexible dev environment built with security in mind. Let's explore together in a 🧵 ...
Browser-based IDEs are a thing now. They're how Googlers do a lot of their engineering work, and they're ready for prime time.
To start with @googlecloud Cloud Workstations, you create a cluster. The cluster handles workstation lifecycle and networking, and you can put clusters in regions around the world, close to your devs.
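If you prefer the CLI to the Console, a cluster create looks roughly like this. Region, project, and network names are placeholders, and flags may shift while this is in preview, so treat it as a sketch:

# Sketch: create a Workstations cluster in one region (all names are placeholders).
gcloud workstations clusters create my-cluster \
  --region=us-central1 \
  --network=projects/my-project/global/networks/default \
  --subnetwork=projects/my-project/regions/us-central1/subnetworks/default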
Once I have a cluster, I create a Workstation configuration. You can imagine your Platform team or Ops team setting up some default configurations.
One big part of the configuration is the assignment of resources. You can have some MASSIVE Workstations, which could be handy for compute- or memory-intensive app development and testing.
Here's where it gets fun. We've partnered with @jetbrains to offer any of their fantastic IDEs in the environment. Or use vanilla @code. Or bring your own setup. It's just a container.
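A configuration in CLI form might look something like this. The machine type and image are just examples, and I'm assuming the flag names here, so double-check "gcloud workstations configs create --help":

# Sketch: a config that pairs a big machine with a custom IDE image (values illustrative).
gcloud workstations configs create backend-dev \
  --cluster=my-cluster \
  --region=us-central1 \
  --machine-type=e2-standard-32 \
  --container-custom-image=us-docker.pkg.dev/my-project/ide-images/my-jetbrains-ide:latest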
After creating a configuration—and you can create any number of these—it's ready to roll.
I create an individual Workstation instance and then start it up. Once it's running, I have a few options for how to connect to it.
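In CLI terms, that's roughly the following. Names are placeholders, and the tunnel command is my assumption for how you'd attach a local IDE or SSH client:

# Sketch: create and start my own workstation from the shared config.
gcloud workstations create my-workstation \
  --cluster=my-cluster --config=backend-dev --region=us-central1
gcloud workstations start my-workstation \
  --cluster=my-cluster --config=backend-dev --region=us-central1
# Optionally tunnel a local port to the workstation for SSH or a desktop IDE.
gcloud workstations start-tcp-tunnel my-workstation 22 \
  --cluster=my-cluster --config=backend-dev --region=us-central1 \
  --local-host-port=localhost:2222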
Here, I jumped right into an instance from the @googlecloud Console. It loaded so fast I didn't even realize it was already running. And as you can see, I have full @code here, and I can install extensions as I see fit.
This isn't just a code editor. That's no fun. This is a full compute environment. It's loaded with @googlecloud Code, so I can easily create local minikube clusters, and much more. Develop, compile, test, and do whatever you need to do.
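For example, right from the built-in terminal you can do something like this, assuming minikube and your language toolchain are on the image (they may not be on every base image):

# Quick smoke test inside the workstation terminal.
minikube start          # local single-node Kubernetes cluster
kubectl get nodes       # confirm the cluster is up
go test ./...           # run your normal build/test loop (or npm, mvn, etc.)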
Every human on Earth with a Google account can access a free Cloud Shell at shell.cloud.google.com. Cloud Workstations takes that a step further.
Get secure, durable, managed Workstations with premium IDEs and flexible resource allocations. Read the docs and try it out!
It's been fun to watch @googlecloud Anthos find its fit, both for customers primarily in Google Cloud and for those earlier in their journey.
So what's new with this Kubernetes + management plane product that makes it a good choice for anyone running containers? Quick 🧵 ...
First, I like seeing a new dashboard experience! This view is at the "fleet" level. Regardless of where your clusters are—Google Cloud, other clouds, on-prem, edge—you get an all-up view of health and policy compliance. Awesome.
We've also added an Anthos UX (and an easier Terraform experience) for your on-premises clusters. Provision on-prem clusters from the cloud!
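The cloud-side view of those on-prem clusters is exposed via gcloud too. The command group and flags here are my assumption, so verify with --help before relying on it:

# Sketch: list Anthos clusters on VMware from the cloud side (assumed syntax).
gcloud container vmware clusters list --location=us-west1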
You're not going to watch every talk at #GoogleCloudNext. I know it. You know it. Not a big deal, I won't either.
But there are a handful you might want to bookmark to learn what's new, or what the other clouds will be talking about in a year or two. A 🧵 of your best bets ...
One of the fastest ways to deploy an app? Push to @googlecloud Run by running this command against your source code: "gcloud run deploy --source ."
But do that for prod deployments? Unlikely. We just added Cloud Run support to Cloud Deploy, and I wanted to try it out. A 🧵 …
For reference, Cloud Deploy is our continuous deployment service that supports staged rollouts to GKE, and now Cloud Run (in preview). It relies on declarative configurations to define deployment pipelines and targets.
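To make that concrete, here's roughly what a Cloud Run pipeline looks like. Project, region, and names are placeholders, and this is a sketch of the preview schema rather than the exact files from my demo:

# clouddeploy.yaml (sketch): two Cloud Run targets behind one delivery pipeline.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: run-pipeline
serialPipeline:
  stages:
  - targetId: run-staging
  - targetId: run-prod
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: run-staging
run:
  location: projects/my-project/locations/us-central1
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: run-prod
run:
  location: projects/my-project/locations/us-central1

Register it with "gcloud deploy apply --file=clouddeploy.yaml --region=us-central1", then cut releases with "gcloud deploy releases create", pointing at a skaffold.yaml that renders your Cloud Run service manifest.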
Let’s make all this “secure software supply chain” stuff feel a little more real, shall we?
How do I ensure that *only* container images built by a particular build engine get deployed to production? We just made this super easy to do. Demo 🧵 time!
As a dev, I barely want to think about this stuff. Just make it easy to do the right thing! I’m going to show you how we made @googlecloud Cloud Build SLSA level 2 compliant (slsa.dev/spec/v0.1/requ…) by adding attestations automatically. You’ll see in a moment why that matters.
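For the enforcement side, the shape of it is a Binary Authorization policy that only admits images carrying the Cloud Build attestation. Project IDs are placeholders, and this is a sketch, not the exact config from the demo:

# policy.yaml (sketch): require the "built-by-cloud-build" attestation before admission.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/my-project/attestors/built-by-cloud-build

Import it with "gcloud container binauthz policy import policy.yaml", and unattested images get blocked at deploy time.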
We start with the app, of course. This is a basic @golang app I wrote to serve up an API and web page to generate ice-breaker questions in team meetings.
First, it's just PostgreSQL, but operationalized in a way that @googlecloud does so well. It's 100% compatible with PostgreSQL 14.
Performance is silly great. 4x faster than standard PostgreSQL for transactional workloads, and 2x faster than AWS Aurora. And you can use it for analytical queries, where it's up to 100x faster than standard PostgreSQL.
If you ONLY care about using the simplest Kubernetes in a given cloud, use the native managed option (GKE, EKS, AKS, etc).
If you're expanding outward from your anchor cloud, you care about more.
We just shipped a new multicloud Anthos. Here's a 🧵 of how it works. Buckle up.
As a refresher, Anthos is a platform for container-based apps. You get GKE, config mgmt, service mesh, fleet mgmt and more, everywhere. It's GA on @googlecloud, vSphere, bare metal (bring your own OS), AWS, and Azure.
We just made a big improvement to how multicloud works.
In the previous version of Anthos multicloud, you'd use a standalone CLI to provision a management cluster, which would, in turn, provision any user clusters. It's a fine pattern, but it's extra work for you and more stuff to manage.
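With the new version, there's no management cluster to run yourself: you create user clusters directly through @googlecloud, for example on AWS. A heavily abbreviated sketch (a real invocation needs IAM role, KMS key, Fleet, and node-networking flags I'm omitting, and every value below is a placeholder):

# Sketch: create an Anthos user cluster on AWS via the Anthos Multi-Cloud API.
# Abbreviated on purpose; several required flags (IAM roles, KMS keys, node config) are omitted.
gcloud container aws clusters create my-aws-cluster \
  --location=us-west1 \
  --aws-region=us-east-1 \
  --cluster-version=1.24 \
  --vpc-id=vpc-0123456789abcdef0 \
  --subnet-ids=subnet-0123456789abcdef0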