Printable Associate-Cloud-Engineer exam engine materials and braindumps for the Google certification, with real success guaranteed using updated Associate-Cloud-Engineer PDF and VCE materials. 100% pass the Google Cloud Certified - Associate Cloud Engineer exam today!

Free Associate-Cloud-Engineer Demo Online For Google Certification:

You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?

  • A. Create a new subnet in the same region as the subnet being used.
  • B. Add an alias IP range to the subnet used by the GKE clusters.
  • C. Create a new VPC, and set up VPC peering with the existing VPC.
  • D. Expand the CIDR range of the relevant subnet for the cluster.

Answer: D

Node IP addresses in a VPC-native cluster come from the subnet's primary IP range, so expanding that range lets the existing clusters add nodes. Alias (secondary) ranges serve Pods and Services rather than nodes, and neither a new subnet nor a peered VPC can supply node IPs to an already-created cluster.
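The relevant subnet can be expanded in place from the CLI. A minimal sketch, where the subnet name, region, and prefix length are placeholders:

```shell
# Expand the primary CIDR range of the subnet used for GKE nodes.
# The new prefix length must cover a larger range than the current one,
# and an expansion cannot be undone later.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 \
    --prefix-length=20
```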

You recently deployed a new version of an application to App Engine and then discovered a bug in the release. You need to immediately revert to the prior version of the application. What should you do?

  • A. Run gcloud app restore.
  • B. On the App Engine page of the GCP Console, select the application that needs to be reverted and click Revert.
  • C. On the App Engine Versions page of the GCP Console, route 100% of the traffic to the previous version.
  • D. Deploy the original version as a separate application. Then go to App Engine settings and split traffic between applications so that the original version serves 100% of the requests.

Answer: C

App Engine keeps previously deployed versions available, so routing 100% of traffic to the prior version on the Versions page reverts the release immediately. There is no gcloud app restore command, and redeploying as a separate application is unnecessary.
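The same traffic migration can be done from the CLI with gcloud. A sketch, assuming the default service and a previous version ID of v1:

```shell
# Send 100% of traffic for the service back to the earlier version.
gcloud app services set-traffic default --splits=v1=1
```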

You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want to test your application locally on your laptop with Cloud Datastore. What should you do?

  • A. Export Cloud Datastore data using gcloud datastore export.
  • B. Create a Cloud Datastore index using gcloud datastore indexes create.
  • C. Install the google-cloud-sdk-datastore-emulator component using the apt-get install command.
  • D. Install the cloud-datastore-emulator component using the gcloud components install command.

Answer: C

When the Cloud SDK is installed from the Google Cloud Ubuntu package repository, the gcloud component manager is disabled, so gcloud components install fails. The emulator must instead be installed as the google-cloud-sdk-datastore-emulator package with apt-get.
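With an apt-based Cloud SDK install, the component manager is disabled and the emulator ships as a separate package. A sketch of installing and starting it locally:

```shell
# Install the emulator package from the Google Cloud apt repository.
sudo apt-get install google-cloud-sdk-datastore-emulator

# Start a local Datastore emulator, then point the application at it
# (env-init prints the export statements that configure the client).
gcloud beta emulators datastore start
$(gcloud beta emulators datastore env-init)
```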

Your organization has user identities in Active Directory. Your organization wants to use Active Directory as their source of truth for identities. Your organization wants to have full control over the Google accounts used by employees for all Google services, including your Google Cloud Platform (GCP) organization. What should you do?

  • A. Use Google Cloud Directory Sync (GCDS) to synchronize users into Cloud Identity.
  • B. Use the cloud Identity APIs and write a script to synchronize users to Cloud Identity.
  • C. Export users from Active Directory as a CSV and import them to Cloud Identity via the Admin Console.
  • D. Ask each employee to create a Google account using self signup. Require that each employee use their company email address and password.

Answer: A

You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?

  • A. Deploy the monitoring pod in a StatefulSet object.
  • B. Deploy the monitoring pod in a DaemonSet object.
  • C. Reference the monitoring pod in a Deployment object.
  • D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

Answer: B
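One way to guarantee a copy of the monitoring pod on every current and future node is to apply a DaemonSet manifest. A minimal sketch; the image reference is a placeholder:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:latest  # placeholder image
EOF
```

Nodes added later by the cluster autoscaler automatically receive a copy of the pod.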

You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps. You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each. What should you do?

  • A. Create an HTTP load balancer with a backend configuration that references an existing instance group.Set the health check to healthy (HTTP).
  • B. Create an HTTP load balancer with a backend configuration that references an existing instance group.Define a balancing mode and set the maximum RPS to 10.
  • C. Create a managed instance group. Set the Autohealing health check to healthy (HTTP).
  • D. Create a managed instance group. Verify that the autoscaling setting is on.

Answer: C

Autohealing in a managed instance group re-creates VMs that fail a health check (here, a 10-second check interval with an unhealthy threshold of 3). A load balancer health check only steers traffic away from unhealthy instances; it does not re-create them.
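The health check and autohealing policy can be set up with two commands. A sketch; the group and check names are placeholders:

```shell
# HTTP health check: probe every 10 seconds, unhealthy after 3 failures.
gcloud compute health-checks create http my-health-check \
    --check-interval=10s \
    --unhealthy-threshold=3

# Attach it to the managed instance group so failing VMs are re-created.
gcloud compute instance-groups managed update my-mig \
    --region=us-central1 \
    --health-check=my-health-check \
    --initial-delay=300
```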

You are assisting a new Google Cloud user who just installed the Google Cloud SDK on their VM. The server needs access to Cloud Storage. The user wants your help to create a new storage bucket. You need to make this change in multiple environments. What should you do?

  • A. Use a Deployment Manager script to automate creating storage buckets in an appropriate region
  • B. Use a local SSD to improve performance of the VM for the targeted workload
  • C. Use the gsutil command to create a storage bucket in the same region as the VM
  • D. Use a Persistent Disk SSD in the same zone as the VM to improve performance of the VM

Answer: A
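A Deployment Manager configuration makes the bucket definition repeatable across environments. A sketch; the deployment name, bucket name, and location are placeholders:

```shell
# Describe the bucket once, then create it in any project.
cat > bucket.yaml <<'EOF'
resources:
- name: my-app-bucket        # placeholder bucket name
  type: storage.v1.bucket
  properties:
    location: US-CENTRAL1
EOF

gcloud deployment-manager deployments create bucket-deployment \
    --config=bucket.yaml
```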

You are hosting an application on Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?

  • A. Create Compute Engine resources in us-central1-b. Balance the load across both us-central1-a and us-central1-b.
  • B. Create a Managed Instance Group and specify us-central1-a as the zone. Configure the Health Check with a short Health Interval.
  • C. Create an HTTP(S) Load Balancer. Create one or more global forwarding rules to direct traffic to your VMs.
  • D. Perform regular backups of your application. Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. Restore from backups when notified.

Answer: A

Duplicating the resources in us-central1-b and balancing load across both zones keeps the application serving through a single-zone failure. A load balancer alone (option C) adds no capacity in a second zone, and restoring from backups (option D) does not eliminate downtime.

Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without any downtime. What should you do?

  • A. Use gcloud container clusters upgrade. Deploy the new services.
  • B. Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods.
  • C. Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
  • D. Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.

Answer: B
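Adding a node pool leaves the running pods untouched. A sketch, with placeholder cluster, zone, and pool names:

```shell
# New pool with the larger machine type; the existing n1-standard-2
# nodes and their pods keep running while the pool is added.
gcloud container node-pools create highmem-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n2-highmem-16 \
    --num-nodes=1
```

The new pods can then target the pool, for example with a nodeSelector on the cloud.google.com/gke-nodepool label.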

You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?

  • A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.
  • B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
  • C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
  • D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.

Answer: B
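A custom machine type avoids paying for a predefined shape with unneeded vCPUs. A sketch; the instance name and zone are placeholders:

```shell
# 6 vCPUs with 32 GB of memory as a custom machine type.
gcloud compute instances create proxy-vm \
    --zone=us-central1-a \
    --custom-cpu=6 \
    --custom-memory=32GB
```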

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

  • A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.2. Update your instances’ metadata to add the following value: logs-destination:bq://platform-logs.
  • B. 1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.2.Create a Cloud Function that is triggered by messages in the logs topic.3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
  • C. 1. In Stackdriver Logging, create a filter to view only Compute Engine logs.2. Click Create Export.3.Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
  • D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.2. Configure this Cloud Function to create a BigQuery Job that executes this query:INSERT INTOdataset.platform-logs (timestamp, log)SELECT timestamp, log FROM compute.logsWHERE timestamp> DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)3. Use Cloud Scheduler to trigger this Cloud Function once a day.

Answer: C
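The export can also be created from the CLI. A sketch; the project ID is a placeholder, and note that BigQuery dataset IDs use underscores rather than hyphens:

```shell
# Sink that sends only Compute Engine logs to the BigQuery dataset.
gcloud logging sinks create platform-logs-sink \
    bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
    --log-filter='resource.type="gce_instance"'
```

The sink's writer identity then needs the BigQuery Data Editor role on the dataset so exported entries can be written.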

Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google Accounts. You need to grant access to the instances to your operations partner so they can maintain the installed tooling. What should you do?

  • A. Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.
  • B. Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.
  • C. Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.
  • D. Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances.

Answer: D

Cloud IAP requires Google Accounts, which the partner does not use. Adding the partner's public SSH keys to the instances grants shell access without Google identities; a firewall rule or VPN alone provides network reachability but no login credentials.
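Public keys are added through instance metadata. A sketch; the instance, zone, and key file are placeholders:

```shell
# partner_keys.txt holds lines of the form:
#   USERNAME:ssh-ed25519 AAAA... comment
# This sets the ssh-keys value outright, so the file should also
# contain any existing keys that must be kept.
gcloud compute instances add-metadata partner-vm \
    --zone=us-central1-a \
    --metadata-from-file=ssh-keys=partner_keys.txt
```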

Your company has a single sign-on (SSO) identity provider that supports Security Assertion Markup Language (SAML) integration with service providers. Your company has users in Cloud Identity. You would like users to authenticate using your company’s SSO provider. What should you do?

  • A. In Cloud Identity, set up SSO with Google as an identity provider to access custom SAML apps.
  • B. In Cloud Identity, set up SSO with a third-party identity provider with Google as a service provider.
  • C. Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Mobile & Desktop Apps.
  • D. Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Web Server Applications.

Answer: B

With SAML single sign-on, the company's identity provider authenticates users while Google acts as the service provider. OAuth 2.0 credentials are for authorizing applications, not for federating user sign-in.

You need to immediately change the storage class of an existing Google Cloud bucket. You need to reduce service cost for infrequently accessed files stored in that bucket and for all files that will be added to that bucket in the future. What should you do?

  • A. Use gsutil to rewrite the storage class of the objects in the bucket. Change the default storage class for the bucket.
  • B. Use gsutil to rewrite the storage class of the objects in the bucket. Set up Object Lifecycle Management on the bucket.
  • C. Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.
  • D. Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.

Answer: A

gsutil rewrite -s changes the storage class of existing objects immediately, and changing the bucket's default storage class covers all future uploads. Object Lifecycle Management rules act over time, not immediately.
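Both steps can be done with gsutil. A sketch, assuming Nearline as the target class and a placeholder bucket name:

```shell
# Immediately rewrite existing objects to the cheaper storage class.
gsutil -m rewrite -s nearline gs://my-bucket/**

# Future uploads default to the same class.
gsutil defstorageclass set nearline gs://my-bucket
```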

The core business of your company is to rent out construction equipment at a large scale. All the equipment that is being rented out has been equipped with multiple sensors that send event information every few seconds. These signals can vary from engine status, distance traveled, fuel level, and more. Customers are billed based on the consumption monitored by these sensors. You expect high throughput – up to thousands of events per hour per device – and need to retrieve consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?

  • A. Create a file in Cloud Storage per device and append new data to that file.
  • B. Create a file in Cloud Filestore per device and append new data to that file.
  • C. Ingest the data into Datastore. Store data in an entity group based on the device.
  • D. Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp.

Answer: D

You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory. What should you do?

  • A. Rely on live migration to move the workload to a machine with more memory.
  • B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
  • C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
  • D. Stop the VM, increase the memory to 8 GB, and start the VM.

Answer: D
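The change is a stop/resize/start cycle. A sketch with placeholder names, using a custom machine type to keep the 2 vCPUs while raising memory to 8 GB:

```shell
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-vm \
    --zone=us-central1-a \
    --custom-cpu=2 \
    --custom-memory=8GB
gcloud compute instances start my-vm --zone=us-central1-a
```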

You are building an archival solution for your data warehouse and have selected Cloud Storage to archive your data. Your users need to be able to access this archived data once a quarter for some regulatory requirements. You want to select a cost-efficient option. Which storage option should you use?

  • A. Coldline Storage
  • B. Nearline Storage
  • C. Regional Storage
  • D. Multi-Regional Storage

Answer: A

Data accessed about once a quarter matches Coldline; Nearline targets data accessed less than once a month but possibly multiple times a year.

Nearline, Coldline, and Archive offer ultra low-cost, highly-durable, highly available archival storage. For data accessed less than once a year, Archive is a cost-effective storage option for long-term preservation of data.
Coldline is also ideal for cold storage—data your business expects to touch less than once a quarter. For warmer storage, choose Nearline: data you expect to access less than once a month, but possibly multiple times throughout the year. All storage classes are available across all GCP regions and provide unparalleled sub-second access speeds with a consistent API.
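Creating a bucket in a given storage class is a one-liner. A sketch with placeholder names:

```shell
# Coldline bucket for data touched about once a quarter.
gsutil mb -c coldline -l us-central1 gs://my-archive-bucket
```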

You are creating an application that will run on Google Kubernetes Engine. You have identified MongoDB as the most suitable database system for your application and want to deploy a managed MongoDB environment that provides a support SLA. What should you do?

  • A. Create a Cloud Bigtable cluster and use the HBase API
  • B. Deploy MongoDB Atlas from the Google Cloud Marketplace
  • C. Download a MongoDB installation package and run it on Compute Engine instances
  • D. Download a MongoDB installation package, and run it on a Managed Instance Group

Answer: B

MongoDB Atlas, available through the Google Cloud Marketplace, is a managed MongoDB service backed by a support SLA. Installing MongoDB yourself on Compute Engine or a managed instance group leaves you operating it without a managed-service SLA, and Bigtable's HBase API is not MongoDB-compatible.


Recommended: get the full Associate-Cloud-Engineer dumps in VCE and PDF from Certshared (new 190 Q&As version).