First, it's just PostgreSQL, but operationalized in a way that @googlecloud does so well. It's 100% compatible with PostgreSQL 14.
Performance is silly great. 4x faster than standard PostgreSQL for traditional workloads, and 2x faster than AWS Aurora. And you can use it for analytical queries, where it's 100x faster than standard PostgreSQL.
Other things I like? 99.99% SLA, automatic failover and recovery, automatic backups, and integration with Vertex AI to pull predictions into SQL queries. Oh, and pricing that's easy. Pay only for the storage you use, and you don't get saddled with IO charges.
Let's take a look at provisioning a cluster. First, you're asked whether you want an HA cluster, or an HA cluster with read pools.
After choosing a cluster type, I was asked to pick a region and a private network.
Next came the only infrastructure question I needed to answer: the machine type, which I can change later.
Then I added a read pool with a couple of nodes. Again, this is easy to resize after the fact.
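The console wizard above maps to a few gcloud commands. Here's a hedged sketch of the same flow, assuming hypothetical names (my-cluster, my-primary, my-read-pool), the default network, and the AlloyDB gcloud surface as I recall it:

```shell
# Create the HA cluster (the cluster type, region, and network questions)
gcloud alloydb clusters create my-cluster \
  --region=us-central1 \
  --network=default \
  --password=CHANGE_ME

# Create the primary instance -- this is the "machine type" question
gcloud alloydb instances create my-primary \
  --cluster=my-cluster \
  --region=us-central1 \
  --instance-type=PRIMARY \
  --cpu-count=4

# Add a read pool with a couple of nodes; resizable after the fact
gcloud alloydb instances create my-read-pool \
  --cluster=my-cluster \
  --region=us-central1 \
  --instance-type=READ_POOL \
  --read-pool-node-count=2 \
  --cpu-count=4
```

Same shape as the console: one cluster, one primary, and read pools you can grow or shrink later.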
It took a few minutes to provision everything. I was taken to a view that showed some metrics and such.
Once it was done, I could resize instances, view metrics, and more. And for an instance I created yesterday, I can see a couple of automatic backups it took.
AlloyDB is a feat of engineering, and a terrific database option in @googlecloud.
If you ONLY care about using the simplest Kubernetes in a given cloud, use the native managed option (GKE, EKS, AKS, etc).
If you're expanding outward from your anchor cloud, you care about more.
We just shipped a new multicloud Anthos. Here's a 🧵 of how it works. Buckle up.
As a refresher, Anthos is a platform for container-based apps. You get GKE, config mgmt, service mesh, fleet mgmt and more, everywhere. It's GA on @googlecloud, vSphere, bare metal (bring your own OS), AWS, and Azure.
We just made a big improvement to how multicloud works.
In the previous version of Anthos multicloud, you'd use a standalone CLI to provision a management cluster, which, in turn, would provision any user clusters. It's a fine pattern, but it's extra work for you and more stuff to manage.
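In the new model, you skip the management cluster and talk to a Google-hosted multicloud API that provisions user clusters in your own AWS account. A hedged sketch of that flow, assuming hypothetical names and omitting the required IAM and KMS flags:

```shell
# Provision an Anthos cluster in AWS directly from gcloud -- no management
# cluster to run yourself. (IAM role, instance profile, and KMS key flags
# are required in practice but omitted from this sketch.)
gcloud container aws clusters create my-aws-cluster \
  --location=us-west1 \
  --aws-region=us-east-1 \
  --vpc-id=vpc-0123456789abcdef0 \
  --subnet-ids=subnet-0123456789abcdef0
```

The point isn't the exact flags; it's that cluster lifecycle is now an API call instead of infrastructure you babysit.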
I don't think about which data center I'm using when I upload a pic to Google Photos. Or when I perform a search. Or use Gmail.
Why should the public cloud be so different? Here's a 🧵 with 10 @googlecloud services that are unique because of their global backplane …
First, VPC. Most VPC products in the public cloud take a regional approach. If you want to interconnect a bunch of regional VPCs later on, it's tricky.
Not with @googlecloud. A single VPC is global with automatic communication across regions.
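To make that concrete, here's a sketch with hypothetical names: one auto-mode network gets a subnet in every region, and VMs in different regions talk over internal IPs with no peering or VPN to set up:

```shell
# One global VPC; auto mode creates a subnet in every region
gcloud compute networks create demo-vpc --subnet-mode=auto

# VMs in two different regions, same network
gcloud compute instances create vm-us --zone=us-central1-a --network=demo-vpc
gcloud compute instances create vm-eu --zone=europe-west1-b --network=demo-vpc

# Allow internal traffic across the auto-mode subnet range; after this,
# vm-us can reach vm-eu's internal IP directly -- no peering required
gcloud compute firewall-rules create demo-allow-internal \
  --network=demo-vpc --allow=icmp,tcp,udp --source-ranges=10.128.0.0/9
```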
I just spun up an Amazon EKS cluster because I like to live dangerously. I hoped to access it via AWS CloudShell, but surprisingly there's no kubectl there.
I could complain, or I could just use the dev-friendly, k8s-ready @googlecloud to make my AWS experience better. A 🧵 ...
So I went through the steps to create an EKS cluster, and then separately provisioned the worker nodes. The dashboard experience is a bit light for management, but we can fix that.
From my local machine, I logged into the EKS cluster, and ran a single command to register it with GKE.
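That single command was a fleet registration. A hedged sketch, assuming a hypothetical cluster name, kubeconfig context, and a pre-created service account key for the Connect agent:

```shell
# Register the EKS cluster with GKE Hub; this deploys the Connect agent
# into the cluster so it shows up in the @googlecloud console
gcloud container hub memberships register eks-cluster \
  --context=my-eks-context \
  --kubeconfig=$HOME/.kube/config \
  --service-account-key-file=connect-sa-key.json
```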
Once it's registered, I can see the cluster, view workloads, and even deploy workloads and expose services. I also installed Anthos Config Management on the EKS cluster, which applies OPA Gatekeeper policies and common config. This is great UX, regardless of where your cluster lives.
Virtually every software system has a workflow engine. The only question is whether you build your own, or drop one in. Today, @GCPcloud shipped Cloud Workflows.
Declarative definitions, a rich syntax, and no operational effort? Let's take a look in this 🧵
You use Cloud Workflows to execute a series of steps, typically callouts to HTTP endpoints. A workflow may be long-running or short-running. Here, I just call out to a public API endpoint (Chuck Norris facts!), parse the result, and return it.
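Here's roughly what that definition looks like, sketched from memory of the Workflows YAML syntax (the endpoint is the public chucknorris.io API; step names are my own):

```yaml
main:
  steps:
    - getFact:
        call: http.get
        args:
          url: https://api.chucknorris.io/jokes/random
        result: factResult
    - returnFact:
        return: ${factResult.body.value}
```

Each step either calls something or shuffles data; the `${…}` expressions pull fields out of the parsed JSON response.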
Cloud Workflows also integrates nicely with @GCPcloud services (OF COURSE IT DOES). Here, I'm indicating that I want OpenID Connect authentication when calling one of my Cloud Functions.
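The auth piece is just a small block on the step's args; a sketch assuming a hypothetical project and function name:

```yaml
- callMyFunction:
    call: http.get
    args:
      url: https://us-central1-my-project.cloudfunctions.net/my-function
      auth:
        type: OIDC
    result: fnResult
```

The workflow's service account mints the OIDC token for you, so there are no credentials to manage in the definition itself.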