The case for Cloud Run
I'm sure by now you have heard the tales of the magical land called Kubernetes and all the benefits that come with it. What may have slipped through the cracks is the darker side of Kubernetes: the laborious setup process, plus the ongoing work of maintaining and securing a cluster. What if your app doesn't have enough moving parts to warrant all of that effort? Thanks to Cloud Run, you are no longer stuck with Elastic Beanstalk!
What is Cloud Run?
According to Google's documentation,
"Cloud Run is a managed compute platform that lets you run containers directly on top of Google's scalable infrastructure."
Some of those phrases should sound familiar to anyone who has skimmed a Kubernetes walkthrough or quick-start. Cloud Run essentially gives you the core flexibility of Kubernetes combined with the scale of Google Cloud. Best of all, the service can scale to zero, which is great for dev environments as well as for apps that are used infrequently.
How to get started
Cloud Run makes it super simple to get started. I personally always use a Docker image, and that is the route I will describe here, but there are language defaults (buildpacks for Node.js, for example) that let you steer clear of writing a Dockerfile. Once you have a Dockerfile written, just make sure that your service accepts HTTP requests at 0.0.0.0:$PORT and you're off to the races. Head to https://console.cloud.google.com/run, hit Create, and link your GitHub repository. Did I mention that this method also gives you Continuous Deployment via Cloud Build with no extra effort?
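To make the "accept HTTP requests at 0.0.0.0:$PORT" contract concrete, here is a minimal Python sketch of a Cloud Run-ready server. The handler class and greeting text are illustrative, not part of any Cloud Run API; the only real requirements are reading the PORT environment variable and binding to 0.0.0.0.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    """Illustrative handler: responds 200 to any GET request."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from Cloud Run\n")


def run() -> None:
    # Cloud Run injects the port to listen on via the PORT env var
    # (it defaults to 8080 if unset).
    port = int(os.environ.get("PORT", "8080"))
    # Bind to 0.0.0.0 so the container accepts traffic from outside
    # localhost; this call blocks and serves forever.
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Your Dockerfile's CMD would simply invoke `run()`; no extra configuration is needed for Cloud Run to route requests to it.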
One of the few genuinely scary things about Google Cloud Run is runaway billing. There have been some examples of this in the past (the unlucky guinea pig, meta article), and it is, at least in theory, very preventable: set a reasonable maximum number of instances on your service.
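The instance cap can be set in the console when editing a revision, or from the CLI. A sketch with gcloud, where the service name, region, and the cap of 3 are placeholder values for your own deployment:

```shell
# Cap the service at 3 container instances to bound worst-case spend.
# "my-service" and the region are placeholders.
gcloud run services update my-service \
  --region=us-central1 \
  --max-instances=3
```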