You are supporting a business-critical application in production deployed on Cloud Run. The application is reporting HTTP 500 errors that are affecting the usability of the application. You want to be alerted when the number of errors exceeds 15% of the requests within a specific time window. What should you do?
A. Navigate to the Cloud Run page in the Google Cloud console, and select the service from the services list. Use the Metrics tab to visualize the number of errors for that revision and refresh the page daily.
B. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Composer to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold.
C. Create an alerting policy in Cloud Monitoring that alerts you if the number of errors is above the defined threshold.
D. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Scheduler to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold.
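Note: the built-in alerting approach (option C) evaluates a condition like the one below continuously, rather than on a daily schedule. As a minimal sketch (not the Cloud Monitoring API itself; function and parameter names here are hypothetical), this is the condition such a policy encodes, with 500-class errors compared against 15% of the requests in the window:

```python
def error_ratio_breached(error_count: int, request_count: int,
                         threshold: float = 0.15) -> bool:
    """Return True when errors exceed `threshold` of all requests
    observed in the alerting window."""
    if request_count == 0:
        return False
    return error_count / request_count > threshold

# 20 errors out of 100 requests is a 20% error rate -> alert fires.
print(error_ratio_breached(20, 100))   # True
# 10 errors out of 100 requests is 10% -> below the 15% threshold.
print(error_ratio_breached(10, 100))   # False
```

In Cloud Monitoring this ratio would be expressed as a metric-ratio condition on the service's request count, filtered by response code class, rather than hand-written code.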
Your application is controlled by a managed instance group. You want to share a large read-only data set between all the instances in the managed instance group. You want to ensure that each instance can start quickly and can access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution. What should you do?
A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.
B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup script.
C. Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple Compute Engine virtual machine instances.
D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot, and attach each disk to its own instance.
Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal changes to existing data analytics jobs and existing architecture. How should you proceed with the migration?
A. Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery instead of the on-premises Hadoop environment.
B. Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into the new one in Compute Engine instances.
C. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.
D. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on that data.
You are a SaaS provider deploying dedicated blogging software to customers in your Google Kubernetes Engine (GKE) cluster. You want to configure a secure multi-tenant platform to ensure that each customer has access to only their own blog and can't affect the workloads of other customers. What should you do?
A. Enable Application-layer Secrets on the GKE cluster to protect the cluster.
B. Deploy a namespace per tenant and use Network Policies in each blog deployment.
C. Use GKE Audit Logging to identify malicious containers and delete them on discovery.
D. Build a custom image of the blogging software and use Binary Authorization to prevent untrusted image deployments.
Your company's product team has a new requirement based on customer demand to autoscale your stateless and distributed service running in a Google Kubernetes Engine (GKE) cluster. You want to find a solution that minimizes changes because this feature will go live in two weeks. What should you do?
A. Deploy a Vertical Pod Autoscaler, and scale based on the CPU load.
B. Deploy a Vertical Pod Autoscaler, and scale based on a custom metric.
C. Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load.
D. Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric.
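For reference, the Horizontal Pod Autoscaler (option C) scales the replica count using the rule documented for Kubernetes: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal sketch of that calculation:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Kubernetes HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 replicas averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))   # 6
# 10 replicas averaging 30% CPU against a 60% target -> scale in to 5.
print(desired_replicas(10, 30, 60))  # 5
```

Because CPU utilization is collected by default, scaling on CPU requires no application changes, which is why it minimizes work compared with exporting a custom metric.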
You have an application controlled by a managed instance group. When you deploy a new version of the application, costs should be minimized and the number of instances should not increase. You want to ensure that, when each new instance is created, the deployment only continues if the new instance is healthy. What should you do?
A. Perform a rolling-action with maxSurge set to 1, maxUnavailable set to 0.
B. Perform a rolling-action with maxSurge set to 0, maxUnavailable set to 1.
C. Perform a rolling-action with maxHealthy set to 1, maxUnhealthy set to 0.
D. Perform a rolling-action with maxHealthy set to 0, maxUnhealthy set to 1.
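The trade-off between the two real settings (maxSurge and maxUnavailable; the maxHealthy/maxUnhealthy options do not exist) can be sketched as follows. This is an illustrative simplification, not the Compute Engine updater itself:

```python
def rolling_update_bounds(target_size: int, max_surge: int,
                          max_unavailable: int) -> tuple[int, int]:
    """For a MIG rolling update, return the (peak instance count,
    minimum healthy instance count) implied by the two settings."""
    return target_size + max_surge, target_size - max_unavailable

# maxSurge=0, maxUnavailable=1: the group never exceeds its target size
# (no extra cost); instances are replaced one at a time.
print(rolling_update_bounds(10, 0, 1))  # (10, 9)

# maxSurge=1, maxUnavailable=0: full availability, but the group briefly
# runs one extra instance, which increases cost.
print(rolling_update_bounds(10, 1, 0))  # (11, 10)
```

Since the question requires that the instance count not increase, maxSurge must be 0, which forces maxUnavailable to be at least 1 so the update can proceed.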
You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?
A. 1) Create a managed instance group. Replicate the static content across the virtual machines (VMs). 2) Create an external HTTP(S) load balancer. 3) Enable Cloud CDN, and send traffic to the managed instance group.
B. 1) Create an unmanaged instance group. Replicate the static content across the VMs. 2) Create an external HTTP(S) load balancer. 3) Enable Cloud CDN, and send traffic to the unmanaged instance group.
C. 1) Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket. 2) Reserve an external IP address, and create an external HTTP(S) load balancer. 3) Enable Cloud CDN, and send traffic to your backend bucket.
D. 1) Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket. 2) Reserve an external IP address, and create an external HTTP(S) load balancer. 3) Enable Cloud CDN, and send traffic to your backend bucket.
You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do?
A. Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code's performance.
B. Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.
C. Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.
D. Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.
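Option D describes instrumenting the application so a profiler can attribute CPU time to individual functions (in Go this is done with Cloud Profiler's agent, built on pprof). To illustrate the idea only, here is an analogous sketch in Python using the standard-library cProfile, which produces the same kind of per-function CPU ranking a flame graph visualizes:

```python
import cProfile
import io
import pstats

def hot_function():
    # Deliberately CPU-heavy so it dominates the profile.
    return sum(i * i for i in range(200_000))

def cold_function():
    return sum(range(1_000))

def profile_report() -> str:
    profiler = cProfile.Profile()
    profiler.enable()
    hot_function()
    cold_function()
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
    return out.getvalue()

report = profile_report()
# Sorted by cumulative time, the CPU-heavy function ranks first.
print(report.find("hot_function") < report.find("cold_function"))  # True
```

The other options (logs, dashboards, `top`) only show aggregate CPU usage; none of them attributes time to specific functions in the code.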
You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is running and display this information to users. You want to use the most performant approach. What should you do?
A. Use HTTP requests to query the available metadata server at the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header.
B. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run "Variables and Secrets" tab, and add the desired environment variables in Key:Value format.
C. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration information to Cloud Run's in-memory container filesystem.
D. Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
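For option A, the metadata server exposes the project ID at `computeMetadata/v1/project/project-id` and the region at `computeMetadata/v1/instance/region`, and every request must carry the `Metadata-Flavor: Google` header. A minimal sketch of building such a request (the fetch itself only works from inside Cloud Run, so it is shown in comments):

```python
import urllib.request

METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"

def metadata_request(path: str) -> urllib.request.Request:
    """Build a request for the Cloud Run metadata server. The
    Metadata-Flavor header is required, or the server rejects the call."""
    return urllib.request.Request(
        f"{METADATA_BASE}/{path}",
        headers={"Metadata-Flavor": "Google"},
    )

# Inside a Cloud Run container you would then fetch, e.g.:
#   urllib.request.urlopen(metadata_request("project/project-id")).read()
#   urllib.request.urlopen(metadata_request("instance/region")).read()
req = metadata_request("project/project-id")
print(req.full_url)
# http://metadata.google.internal/computeMetadata/v1/project/project-id
```

Because the metadata server runs locally alongside the container, this lookup avoids the network round trip and authentication overhead of calling an external API such as Cloud Asset Inventory.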
Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a critical vulnerability immediately but must avoid impact to your service. Which setting should you change on the MIG after updating the instance template?
A. Set the Max Surge to 100%.
B. Set the Update mode to Opportunistic.
C. Set the Maximum Unavailable to 100%.
D. Set the Minimum Wait time to 0 seconds.