You use Spinnaker to deploy your application and have created a canary deployment stage in the pipeline. Your application has an in-memory cache that loads objects at start time. You want to automate the comparison of the canary version against the production version. How should you configure the canary analysis?
A. Compare the canary with a new deployment of the current production version.
B. Compare the canary with a new deployment of the previous production version.
C. Compare the canary with the existing deployment of the current production version.
D. Compare the canary with the average performance of a sliding window of previous production versions.
You are running an experiment to see whether your users like a new feature of a web application. Shortly after deploying the feature as a canary release, you notice a spike in the number of 500 errors returned to users, and your monitoring reports show increased latency. You want to quickly minimize the negative impact on users. What should you do first?
A. Roll back the experimental canary release.
B. Start monitoring latency, traffic, errors, and saturation.
C. Record data for the postmortem document of the incident.
D. Trace the origin of 500 errors and the root cause of increased latency.
You support a high-traffic web application that runs on Google Cloud Platform (GCP). You need to measure application reliability from a user perspective without making any engineering changes to it. What should you do? (Choose two.)
A. Review current application metrics and add new ones as needed.
B. Modify the code to capture additional information for user interaction.
C. Analyze only the web proxy logs and capture the response time of each request.
D. Create new synthetic clients to simulate a user journey using the application.
E. Use current and historic Request Logs to trace customer interaction with the application.
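For context on option D, here is a minimal sketch of a synthetic client that exercises a user journey from the outside, measuring availability and latency without any change to the application itself. The base URL and journey steps are hypothetical placeholders.

```python
"""Minimal black-box probe sketch: simulates a short user journey and
records per-step latency and success, without touching application code.
The endpoints and journey steps below are hypothetical placeholders."""
import time
import requests

BASE_URL = "https://app.example.com"  # hypothetical application URL
JOURNEY = ["/", "/login", "/search?q=widgets", "/checkout"]  # hypothetical steps

def run_probe():
    results = []
    with requests.Session() as session:
        for path in JOURNEY:
            start = time.monotonic()
            try:
                resp = session.get(BASE_URL + path, timeout=10)
                ok = resp.status_code < 500
            except requests.RequestException:
                ok = False
            latency_ms = (time.monotonic() - start) * 1000
            results.append({"step": path, "ok": ok, "latency_ms": round(latency_ms, 1)})
    return results

if __name__ == "__main__":
    for r in run_probe():
        # In practice these results would be exported as custom metrics or logs.
        print(r)
```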
Your team of Infrastructure DevOps Engineers is growing, and you are starting to use Terraform to manage infrastructure. You need a way to implement code versioning and to share code with other team members. What should you do?
A. Store the Terraform code in a version-control system. Establish procedures for pushing new versions and merging with the master.
B. Store the Terraform code in a network shared folder with child folders for each version release. Ensure that everyone works on different files.
C. Store the Terraform code in a Cloud Storage bucket using object versioning. Give access to the bucket to every team member so they can download the files.
D. Store the Terraform code in a shared Google Drive folder so it syncs automatically to every team member's computer. Organize files with a naming convention that identifies each new version.
Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to the production environment. A recent security audit alerted your team that the code pushed to production could contain vulnerabilities and that the existing tooling around virtual machine (VM) vulnerabilities no longer applies to the containerized environment. You need to ensure the security and patch level of all code running through the pipeline. What should you do?
A. Set up Container Analysis to scan and report Common Vulnerabilities and Exposures.
B. Configure the containers in the build pipeline to always update themselves before release.
C. Reconfigure the existing operating system vulnerability software to exist inside the container.
D. Implement static code analysis tooling against the Docker files used to create the containers.
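As background for option A, the following is a rough sketch of querying Container Analysis for vulnerability occurrences on a single image, assuming the google-cloud-containeranalysis Python client; the project ID and image digest are hypothetical placeholders.

```python
"""Sketch: list Container Analysis vulnerability occurrences for one image.
Assumes the google-cloud-containeranalysis client library; the project ID
and image URL are hypothetical placeholders."""
from google.cloud.devtools import containeranalysis_v1

PROJECT_ID = "my-project"  # hypothetical
RESOURCE_URL = "https://gcr.io/my-project/my-app@sha256:abc123"  # hypothetical digest

def list_image_vulnerabilities():
    ca_client = containeranalysis_v1.ContainerAnalysisClient()
    grafeas_client = ca_client.get_grafeas_client()
    occurrences = grafeas_client.list_occurrences(
        request={
            "parent": f"projects/{PROJECT_ID}",
            "filter": f'kind = "VULNERABILITY" AND resourceUrl = "{RESOURCE_URL}"',
        }
    )
    for occ in occurrences:
        vuln = occ.vulnerability
        print(occ.note_name, vuln.severity, vuln.short_description)

if __name__ == "__main__":
    list_image_vulnerabilities()
```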
You need to build a CI/CD pipeline for a containerized application in Google Cloud. Your development team uses a central Git repository for trunk-based development. You want to run all your tests in the pipeline for any new versions of the application to improve the quality. What should you do?
A. 1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. 2. Trigger Cloud Build to build the application container. Deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, deploy the application container to your production environment, and run acceptance tests.
B. 1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. If all tests are successful, build a container. 2. Trigger Cloud Build to deploy the application container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, tag the code as production ready. Trigger Cloud Build to build and deploy the application container to the production environment.
C. 1. Trigger Cloud Build to build the application container, and run unit tests with the container. 2. If unit tests are successful, deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, the pipeline deploys the application container to the production environment. After that, run acceptance tests.
D. 1. Trigger Cloud Build to run unit tests when the code is pushed. If all unit tests are successful, build and push the application container to a central registry. 2. Trigger Cloud Build to deploy the container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, the pipeline deploys the application to the production environment and runs smoke tests.
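To make the ordering in option D concrete, here is a rough local driver sketch of the same stages; in a real pipeline each stage would be a Cloud Build step triggered by a push, and the image name, service names, and test commands are hypothetical placeholders.

```python
"""Sketch of the stage ordering in option D as a local driver script.
In a real setup each stage would be a Cloud Build step triggered by a push;
image names, regions, and test commands below are hypothetical."""
import subprocess

IMAGE = "us-central1-docker.pkg.dev/my-project/my-repo/my-app:candidate"  # hypothetical

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort the pipeline on the first failure

def pipeline():
    run(["pytest", "tests/unit"])                      # 1. unit tests on push
    run(["docker", "build", "-t", IMAGE, "."])         #    build the container...
    run(["docker", "push", IMAGE])                     #    ...and push it to a central registry
    run(["gcloud", "run", "deploy", "my-app-test",     # 2. deploy to a testing environment
         "--image", IMAGE, "--region", "us-central1"])
    run(["pytest", "tests/integration"])               #    integration tests
    run(["pytest", "tests/acceptance"])                #    acceptance tests
    run(["gcloud", "run", "deploy", "my-app-prod",     # 3. deploy to production
         "--image", IMAGE, "--region", "us-central1"])
    run(["pytest", "tests/smoke"])                     #    smoke tests against production

if __name__ == "__main__":
    pipeline()
```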
The new version of your containerized application has been tested and is ready to be deployed to production on Google Kubernetes Engine (GKE). You could not fully load-test the new version in your pre-production environment, and you need to ensure that the application does not have performance problems after deployment. Your deployment must be automated. What should you do?
A. Deploy the application through a continuous delivery pipeline by using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as supported by the metrics.
B. Deploy the application through a continuous delivery pipeline by using blue/green deployments. Migrate traffic to the new version of the application and use Cloud Monitoring to look for performance issues.
C. Deploy the application by using kubectl and use Config Connector to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.
D. Deploy the application by using kubectl and set the spec.updateStrategy.type field to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollback command if there are any issues.
Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?
A. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.
B. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
C. Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset.
D. Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
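As an illustration of the setup described in option B, here is a rough sketch using the Cloud Storage and Cloud Logging Python clients: a bucket with a locked seven-year retention policy and an organization-level aggregated sink that includes child projects. The organization ID and bucket name are hypothetical placeholders, and locking the retention policy is irreversible.

```python
"""Sketch of option B: an organization-level aggregated log sink exporting to a
Cloud Storage bucket with a locked seven-year retention policy. Assumes the
google-cloud-storage and google-cloud-logging libraries; the organization ID
and bucket name are hypothetical placeholders."""
from google.cloud import storage
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogSink

ORG_ID = "123456789012"           # hypothetical organization ID
BUCKET_NAME = "org-archive-logs"  # hypothetical bucket

def configure_log_archive():
    # Bucket with a seven-year retention policy; lock_retention_policy() is permanent.
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(BUCKET_NAME)
    bucket.retention_period = 7 * 365 * 24 * 60 * 60
    bucket.patch()
    bucket.lock_retention_policy()  # Bucket Lock: retention can no longer be reduced

    # Aggregated sink at the organization level; include_children captures
    # logs from every folder and project under the organization.
    config_client = ConfigServiceV2Client()
    sink = LogSink(
        name="org-all-logs-to-gcs",
        destination=f"storage.googleapis.com/{BUCKET_NAME}",
        include_children=True,
    )
    config_client.create_sink(
        request={"parent": f"organizations/{ORG_ID}", "sink": sink}
    )

if __name__ == "__main__":
    configure_log_archive()
```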
You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?
A. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
B. Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters.
C. Use Binary Authorization to attest images during your CI/CD pipeline.
D. Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images.
You want to share a Cloud Monitoring custom dashboard with a partner team. What should you do?
A. Provide the partner team with the dashboard URL so that they can create a copy of the dashboard.
B. Export the metrics to BigQuery. Use Looker Studio to create a dashboard, and share the dashboard with the partner team.
C. Copy the Monitoring Query Language (MQL) query from the dashboard, and send the MQL query to the partner team.
D. Download the JSON definition of the dashboard, and send the JSON file to the partner team.
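As an illustration of option D, here is a rough sketch that downloads a dashboard's JSON definition with the Cloud Monitoring dashboards client so the file can be shared; the project and dashboard IDs are hypothetical placeholders.

```python
"""Sketch of option D: download a custom dashboard's JSON definition so it can
be shared and re-created in the partner team's project. Assumes the
google-cloud-monitoring-dashboards library; the project and dashboard IDs
are hypothetical placeholders."""
from google.cloud import monitoring_dashboard_v1

PROJECT_ID = "my-project"             # hypothetical
DASHBOARD_ID = "my-custom-dashboard"  # hypothetical

def export_dashboard_json(path="dashboard.json"):
    client = monitoring_dashboard_v1.DashboardsServiceClient()
    name = f"projects/{PROJECT_ID}/dashboards/{DASHBOARD_ID}"
    dashboard = client.get_dashboard(request={"name": name})
    # Proto-plus messages can be serialized straight to JSON.
    with open(path, "w") as f:
        f.write(monitoring_dashboard_v1.Dashboard.to_json(dashboard))

if __name__ == "__main__":
    export_dashboard_json()
```

The partner team could then re-create the dashboard from the shared file, for example with gcloud monitoring dashboards create --config-from-file.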