GCP : Associate Cloud Engineer Practice Exams

A Quick Review with Cloud Solution on GCP

Pisit J.
7 min read · Nov 19, 2022

1 — You were assigned to set up a budget alert for the total Google Compute Engine cost incurred in all of your GCP projects. All of these projects are using the same billing account.

What should you do?

  • Ensure that you are the Billing Account Administrator.
  • Select the billing account and create a budget.
  • Select all projects and the Compute Engine service as the budget scope.
  • Create the budget alert.

Note — A budget alert might prompt you to take action to control your costs, but it does not automatically prevent the use or billing of your services when the budget amount or threshold rules are met or exceeded.

https://cloud.google.com/billing/docs/how-to/budgets
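The same budget can be sketched from the command line. The billing account ID, display name, and amount below are placeholders, and the service ID shown is the one commonly documented for Compute Engine (verify it against the Cloud Billing service catalog before use):

```shell
# Create a budget scoped to Compute Engine across all projects on the
# billing account, with alerts at 50% and 90% of the budget amount.
gcloud billing budgets create \
  --billing-account=0X0X0X-0X0X0X-0X0X0X \
  --display-name="compute-engine-budget" \
  --budget-amount=1000USD \
  --filter-services=services/6F81-5844-456A \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9
```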

2 — You are tasked to terminate all resources on a GCP project that is no longer used, and you need to do this in the fewest possible steps.

What should you do?

  • Confirm that you have the Project Owner IAM role for this project.
  • Select the project in the GCP console, go to IAM &amp; Admin > Settings, click Shut down, and enter the Project ID to confirm the deletion.

Note — If a project is no longer in use, you can delete its resources to save cost. This action requires the roles/owner role or the resourcemanager.projects.delete permission.

A project that is marked for deletion is not usable. After 30 days, the project is fully deleted.

https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects

https://cloud.google.com/sdk/gcloud/reference/projects/delete
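The same shutdown can be sketched with a single gcloud command (the project ID is a placeholder):

```shell
# Mark the project for deletion; it becomes unusable immediately
# and is fully deleted after the 30-day grace period.
gcloud projects delete my-unused-project

# Within the grace period the deletion can still be reverted:
gcloud projects undelete my-unused-project
```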

3 — You have three different projects for your development, staging, and production environments in your GCP account. You want to generate a list of all Google Compute Engine instances in your account.

What should you do?

  • Create three different configurations using the gcloud config command, one for each environment. Use the gcloud compute instances list command to list the compute resources for each configuration.

https://cloud.google.com/sdk/docs/configurations#multiple_configurations
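A minimal sketch of this workflow, assuming hypothetical project IDs for the three environments:

```shell
# Create one named configuration per environment; each newly created
# configuration becomes active, so "config set project" applies to it.
gcloud config configurations create dev
gcloud config set project my-dev-project

gcloud config configurations create staging
gcloud config set project my-staging-project

gcloud config configurations create prod
gcloud config set project my-prod-project

# Activate each configuration in turn and list its instances.
for env in dev staging prod; do
  gcloud config configurations activate "$env"
  gcloud compute instances list
done
```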

4 — You are assigned to a project that builds a microservice application on a Google Kubernetes Engine (GKE) cluster. You need to ensure that this GKE cluster is patched against vulnerabilities of all severities and always runs a stable version of Kubernetes.

What should you do?

  • Activate the Node Auto-Upgrades configuration for your GKE cluster.

Note — Node auto-upgrades keep the nodes in your cluster up to date with the cluster control plane (master) version when your control plane is updated. When you create a new cluster or node pool with the Google Cloud Console or the gcloud command, node auto-upgrade is enabled by default.

https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
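For an existing node pool where the setting was disabled, it can be re-enabled from the command line; cluster, pool, and zone names below are placeholders:

```shell
# Turn node auto-upgrade back on for one node pool of the cluster.
gcloud container node-pools update default-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --enable-autoupgrade
```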

5 — Your company’s finance team needs to back up data on a Cloud Storage bucket for disaster recovery purposes.

Which storage class would be the best option?

  • Archive Storage with Multi-Regional Setup

Note — There are 3 storage classes for archiving data in Cloud Storage.

  • Archive Storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Archive storage is the best choice for data that you plan to access less than once a year, with 365-day minimum storage duration.
  • Coldline Storage is ideal for data that your business expects to read less than once a quarter, with a 90-day minimum storage duration.
  • Nearline Storage is ideal for data you plan to read or modify on average once per month, with a 30-day minimum storage duration.

Unlike the archive storage services offered by other cloud providers, your data is available within milliseconds, not hours or days.

https://cloud.google.com/storage/docs/storage-classes#archive
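A sketch of creating such a bucket, assuming a placeholder bucket name and the US multi-region location:

```shell
# Multi-region bucket whose default storage class is Archive,
# suitable for disaster-recovery backups read less than once a year.
gcloud storage buckets create gs://my-dr-backups \
  --location=US \
  --default-storage-class=ARCHIVE
```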

6 — You are planning to set up development and production Compute Engine VMs on separate subnets. You need to configure the VMs to communicate using internal IP addresses within a VPC, without additional custom routes.

What should you do?

  • Set up a custom mode VPC configured with 2 subnets in different regions. Configure the subnets to have different CIDR ranges.

Note — Custom mode VPC networks start with no subnets, giving you full control over subnet creation. You can create more than one subnet per region. Resources within a VPC network can communicate with one another by using internal IPv4 addresses, subject to applicable network firewall rules.

Within a VPC network, all primary and secondary IP ranges must be unique, but they do not need to be contiguous. For example, the primary range of a subnet can be 10.0.0.0/24 while the primary range of another subnet in the same network can be 192.168.0.0/16.

https://cloud.google.com/vpc/docs/vpc#subnet-ranges
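The setup can be sketched as follows; network, subnet, region, and CIDR values are placeholders:

```shell
# Custom mode VPC: starts with no subnets, so we create one per
# environment in different regions with non-overlapping ranges.
gcloud compute networks create my-vpc --subnet-mode=custom

gcloud compute networks subnets create dev-subnet \
  --network=my-vpc --region=us-central1 --range=10.0.1.0/24

gcloud compute networks subnets create prod-subnet \
  --network=my-vpc --region=us-east1 --range=10.0.2.0/24
```

Because both subnets live in the same VPC network, VMs in them can reach each other over internal IPs without any custom routes (subject to firewall rules).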

7 — You want to create new VM instances in your existing subnet, but you notice that you can no longer create an instance because there are no available IP addresses in the subnet. Your instances need to communicate with each other without additional routes.

What should you do?

  • Use the gcloud compute networks subnets expand-ip-range command to expand the IP range.

Note — You can expand the primary IP range of an existing subnet by modifying its subnet mask, setting the prefix length to a smaller number.

Expanding the primary IP range of a subnet cannot be undone. You cannot shrink the primary IP range of a subnet. Expand primary IP ranges conservatively; you can always expand them again. Consider IP address space in any networks to which your VPC network is or will be connected before you expand a subnet’s primary IP range.

https://cloud.google.com/vpc/docs/create-modify-vpc-networks#expand-subnet
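For example, widening a /24 subnet to a /20 (subnet name and region are placeholders):

```shell
# Lower the prefix length to enlarge the primary IP range.
# Note: expansion cannot be undone; a subnet can never be shrunk.
gcloud compute networks subnets expand-ip-range my-subnet \
  --region=us-central1 \
  --prefix-length=20
```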

8 — You built an application and deployed it to Google Cloud Platform. This application needs to connect to a custom server that you plan to host on Compute Engine. You want to avoid manually reconfiguring the application whenever the server's IP address changes.

What should you do?

  • Configure the primary internal IP as a static internal IP address.

Note — Static internal IPs provide the ability to reserve internal IP addresses from the IP range configured in the subnet, then assign those reserved internal addresses to resources as needed. Reserving an internal IP address takes that address out of the dynamic allocation pool and prevents it from being used for automatic allocations.

Deleting a resource does not automatically release a static internal IP address. You must manually release static internal IP addresses when you no longer require them.

https://cloud.google.com/vpc/docs/ip-addresses

https://cloud.google.com/compute/docs/ip-addresses/reserve-static-internal-ip-address
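A sketch of reserving an internal address and assigning it to the server VM at creation time; all names, the subnet, and the address are placeholders:

```shell
# Reserve a specific internal IP from the subnet's range, taking it
# out of the dynamic allocation pool.
gcloud compute addresses create custom-server-ip \
  --region=us-central1 \
  --subnet=my-subnet \
  --addresses=10.0.1.10

# Create the server with that reserved internal IP, so the
# application can always reach it at the same address.
gcloud compute instances create custom-server \
  --zone=us-central1-a \
  --subnet=my-subnet \
  --private-network-ip=10.0.1.10
```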

9 — You have been assigned to launch several Compute Engine instances that accept incoming TCP traffic on port 9000. You want to follow Google-recommended best practices when configuring an instance firewall.

What should you do?

  • Create a network tag for the instances. Create an ingress firewall rule that allows TCP traffic on port 9000, then specify the instances' network tag as the target tag.

Note — A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. Tags enable you to make firewall rules and routes applicable to specific VM instances.

You can assign network tags to new VMs at creation time or edit the set of assigned tags at any time later. The network tags of a running VM can be modified without stopping it.

https://cloud.google.com/vpc/docs/add-remove-network-tags
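A sketch of the rule and of tagging an instance; network, rule, tag, and instance names are placeholders:

```shell
# Ingress rule that allows TCP:9000 only to VMs carrying the tag.
gcloud compute firewall-rules create allow-tcp-9000 \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:9000 \
  --target-tags=app-server

# Tag an existing instance so the rule starts applying to it.
gcloud compute instances add-tags my-instance \
  --zone=us-central1-a \
  --tags=app-server
```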

10 — Your company requires a relational database that is highly reliable and supports point-in-time recovery, while minimizing operating costs.

What should you do?

  • Choose Cloud SQL and verify that the enable binary logging option is selected.

If you use Cloud SQL, Google Cloud's fully managed MySQL database, you should enable automated backups and binary logging for your Cloud SQL instances. This allows you to perform a point-in-time recovery, which restores your database from a backup and recovers it to a fresh Cloud SQL instance.

In Cloud SQL, point-in-time recovery (PITR) uses binary logs. These logs update regularly and use storage space. The binary logs are automatically deleted with their associated automatic backup, which generally happens after about 7 days. If the size of your binary logs is causing an issue for your instance, you can increase the instance storage size, but the binary log size increase in disk usage might be temporary. To avoid unexpected storage issues, it is recommended to enable automatic storage increases when using PITR.

https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr
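A sketch of creating a MySQL instance with both PITR prerequisites enabled; the instance name, tier, region, and backup window are placeholders:

```shell
# Automated backups (--backup-start-time) plus binary logging
# (--enable-bin-log) together enable point-in-time recovery.
gcloud sql instances create my-db \
  --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-1 \
  --region=us-central1 \
  --backup-start-time=02:00 \
  --enable-bin-log
```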

11 — You want to execute a query in BigQuery, but you need to find out how much the query would cost before running it.

What should you do?

  • Use Cloud Shell to execute a dry run query to determine the number of bytes read for the query. Utilize the Pricing Calculator to convert that bytes estimate to dollars.

https://cloud.google.com/bigquery/docs/estimate-costs

https://cloud.google.com/bigquery/pricing
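With the bq CLI available in Cloud Shell, the dry run looks like this (the table reference is a placeholder):

```shell
# --dry_run reports the bytes the query would process without
# actually running it or incurring any query cost.
bq query --use_legacy_sql=false --dry_run \
  'SELECT name FROM `my-project.my_dataset.my_table` LIMIT 10'
```

The reported byte count can then be plugged into the Pricing Calculator to get a dollar estimate.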

12 — You have financial reports stored in Google Cloud Storage (GCS) that need to be evaluated by an external auditing firm. The reports contain sensitive information, so you decided to limit access to the objects. The auditing firm does not have a Google account to which you can delegate the necessary privileges. You must implement a secure approach and have it done with the fewest possible steps.

What should you do?

  • Generate a signed URL with an expiration of four hours. Share the signed URL with the auditing firm.

Note — A signed URL is a URL that provides limited permission and time to make a request. Signed URLs contain authentication information in their query string, allowing users without credentials to perform specific actions on a resource.

When you generate a signed URL, you specify a user or service account which must have sufficient permission to make the request that the signed URL will make. After you generate a signed URL, anyone who possesses it can use the signed URL to perform specified actions, such as reading an object within a specified period of time.

https://cloud.google.com/storage/docs/access-control/signed-urls
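A sketch of generating the URL, assuming a placeholder bucket, object, and a service-account key file whose account has read access to the object:

```shell
# V4 signed URL valid for four hours; anyone holding the URL can
# read the object for that period without a Google account.
gcloud storage sign-url gs://finance-reports/report-2022.pdf \
  --duration=4h \
  --private-key-file=sa-key.json
```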
