All that matters here is passing the Google Professional-Cloud-Architect exam, and all you need is a high score on the Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) exam. The only thing you need to do is download the Pass4sure Professional-Cloud-Architect exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Free demo questions for Google Professional-Cloud-Architect Exam Dumps Below:

NEW QUESTION 1

For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

  • A. Create a scalable environment in GCP for simulating production load.
  • B. Use the existing infrastructure to test the GCP-based backend at scale.
  • C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
  • D. Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Answer: A

Explanation:
From scenario: Requirements for Game Backend Platform
  • Dynamically scale up or down based on game activity
  • Connect to a managed NoSQL database service
  • Run a customized Linux distro

NEW QUESTION 2

You want to automate the creation of a managed instance group and a startup script to install the OS package dependencies. You want to minimize the startup time for VMs in the instance group.
What should you do?

  • A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies.
  • B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
  • C. Use Puppet to create the managed instance group and install the OS package dependencies.
  • D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.

Answer: B

Explanation:
"Custom images are more deterministic and start more quickly than instances with startup scripts. However, startup scripts are more flexible and let you update the apps and settings in your instances more easily." https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#using_custom_or_public_i
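A plausible CLI workflow for option B (all resource names below are hypothetical, and the question's answer provisions the group with Deployment Manager; plain gcloud is shown here only to keep the sketch short):

```shell
# One-time: bake a custom image from a prepared disk (packages already
# installed on the source VM, which has been stopped).
gcloud compute images create app-image \
    --source-disk=base-vm-disk --source-disk-zone=us-central1-a

# The instance template references the baked image, so new VMs skip
# package installation at startup and boot quickly.
gcloud compute instance-templates create app-template --image=app-image

# Managed instance group created from the template.
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=3 --zone=us-central1-a
```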

NEW QUESTION 3

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

  • A. Container Engine, Cloud Pub/Sub, and Cloud SQL
  • B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
  • C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
  • D. Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow
  • E. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Answer: B

Explanation:
Real-time processing requires streaming and messaging, hence Cloud Pub/Sub; the SQL analytics requirement is met by BigQuery.
Ingest millions of streaming events per second from anywhere in the world with Cloud Pub/Sub, powered by Google's unique, high-speed private network. Process the streams with Cloud Dataflow to ensure reliable, exactly-once, low-latency data transformation. Stream the transformed data into BigQuery, the cloud-native data warehousing service, for immediate analysis via SQL or popular visualization tools.
From scenario: They plan to deploy the game’s backend on Google Compute Engine so they can capture streaming metrics and run intensive analytics.
Requirements for Game Analytics Platform
  • Dynamically scale up or down based on game activity
  • Process incoming data on the fly directly from the game servers
  • Process data that arrives late because of slow mobile networks
  • Allow SQL queries to access at least 10 TB of historical data
  • Process files that are regularly uploaded by users’ mobile devices
  • Use only fully managed services
References: https://cloud.google.com/solutions/big-data/stream-analytics/
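The "process data that arrives late" requirement is what Cloud Dataflow's event-time windowing addresses. As a rough, framework-free illustration (plain Python with made-up event data, not the Dataflow API), grouping by the event's own timestamp rather than its arrival time places a late event back in the window where it belongs:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # fixed 60-second event-time windows (illustrative)

def window_start(event_time):
    """Align an event timestamp to the start of its fixed window."""
    return event_time - (event_time % WINDOW_SECONDS)

def window_events(events):
    """Group (event_time, value) pairs by event-time window.

    Because grouping uses the event's own timestamp, an event that
    arrives late (e.g. delayed by a slow mobile network) still lands
    in the window it logically belongs to.
    """
    windows = defaultdict(list)
    for event_time, value in events:
        windows[window_start(event_time)].append(value)
    return dict(windows)

# Events arrive out of order: the last one (t=30) is late but belongs
# to the first window [0, 60).
arrivals = [(10, "a"), (70, "b"), (130, "c"), (30, "d")]
windows = window_events(arrivals)
print(windows)  # {0: ['a', 'd'], 60: ['b'], 120: ['c']}
```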

NEW QUESTION 4

A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they have already viewed. What is the most likely cause of this problem?
[Exhibit: application code not shown]

  • A. The session variable is local to just a single instance.
  • B. The session variable is being overwritten in Cloud Datastore.
  • C. The URL of the API needs to be modified to prevent caching.
  • D. The HTTP Expires header needs to be set to -1 to stop caching.

Answer: A

Explanation:
https://stackoverflow.com/questions/3164280/google-app-engine-cache-list-in-session-variable?rq=1
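The bug pattern can be shown without App Engine at all. In this minimal sketch (all names hypothetical), each "instance" keeps its own in-memory set of seen article IDs, so a user whose requests are load-balanced across instances under peak load sees repeats; keeping the state in a shared store removes the duplicates:

```python
class Instance:
    """One app server with instance-local session state (the bug)."""
    def __init__(self):
        self.seen = set()  # visible only to this instance

    def next_article(self, articles):
        fresh = [a for a in articles if a not in self.seen]
        choice = fresh[0] if fresh else None
        if choice is not None:
            self.seen.add(choice)
        return choice

ARTICLES = ["a1", "a2", "a3"]
instances = [Instance(), Instance()]

# Simulate a user's requests hitting alternating instances under load.
served = [instances[i % 2].next_article(ARTICLES) for i in range(4)]
print(served)  # ['a1', 'a1', 'a2', 'a2'] -> duplicates across instances

# Fix: track seen articles in a store shared by all instances
# (on App Engine: memcache or Datastore rather than a plain set).
shared_seen = set()

def next_article_shared(articles):
    fresh = [a for a in articles if a not in shared_seen]
    choice = fresh[0] if fresh else None
    if choice is not None:
        shared_seen.add(choice)
    return choice

served_fixed = [next_article_shared(ARTICLES) for _ in range(4)]
print(served_fixed)  # ['a1', 'a2', 'a3', None] -> no repeats
```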

NEW QUESTION 5

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

  • A. Tests should scale well beyond the prior approaches.
  • B. Unit tests are no longer required, only end-to-end tests.
  • C. Tests should be applied after the release is in the production environment.
  • D. Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Answer: A

Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity

NEW QUESTION 6

You need to develop procedures to verify the resilience of your disaster recovery plan, which uses GCP for remote recovery. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?

  • A. Verify that Dedicated Interconnect can replicate files to GCS. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
  • B. Verify that Dedicated Interconnect can replicate files to GCS. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
  • C. Verify that the Transfer Appliance can replicate files to GCS. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
  • D. Verify that the Transfer Appliance can replicate files to GCS. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.

Answer: B

Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering

NEW QUESTION 7

For this question, refer to the TerramEarth case study.
TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year, they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

  • A. Have the vehicles’ computers compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.
  • B. Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.
  • C. Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.
  • D. Have the vehicles’ computers compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Answer: D

Explanation:
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs. For example:
Cold Data Storage - Infrequently accessed data, such as data stored for legal or regulatory reasons, can be stored at low cost as Coldline Storage, and be available when you need it.
Disaster recovery - In the event of a disaster recovery event, recovery time is key. Cloud Storage provides low latency access to data stored as Coldline Storage.
References: https://cloud.google.com/storage/docs/storage-classes

NEW QUESTION 8

Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in. Which technology should they use for this?

  • A. Google Cloud Dataproc
  • B. Google Cloud Dataflow
  • C. Google Container Engine with Bigtable
  • D. Google Compute Engine with Google BigQuery

Answer: B

Explanation:
Dataflow is for processing both the Batch and Stream.
Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness -- no more complex workarounds or compromises needed.
References: https://cloud.google.com/dataflow/

NEW QUESTION 9

A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager? Choose 2 answers.

  • A. Cloud Deployment Manager uses Python.
  • B. Cloud Deployment Manager APIs could be deprecated in the future.
  • C. Cloud Deployment Manager is unfamiliar to the company's engineers.
  • D. Cloud Deployment Manager requires a Google APIs service account to run.
  • E. Cloud Deployment Manager can be used to permanently delete cloud resources.
  • F. Cloud Deployment Manager only supports automation of Google Cloud resources.

Answer: CF

Explanation:
https://cloud.google.com/deployment-manager/docs/deployments/deleting-deployments

NEW QUESTION 10

A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:
* 1. Be based on open-source technology for cloud portability
* 2. Dynamically scale compute capacity based on demand
* 3. Support continuous software delivery
* 4. Run multiple segregated copies of the same application stack
* 5. Deploy application bundles using dynamic templates
* 6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?

  • A. Google Container Engine, Jenkins, and Helm
  • B. Google Container Engine and Cloud Load Balancing
  • C. Google Compute Engine and Cloud Deployment Manager
  • D. Google Compute Engine, Jenkins, and Cloud Load Balancing

Answer: A

Explanation:
Helm manages Kubernetes application packages (requirement 5), and a Kubernetes Ingress can route traffic to different backend services based on the URL path (requirement 6): https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
For example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/*
        backend:
          serviceName: web2
          servicePort: 8080

NEW QUESTION 11

You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do?

  • A. Create a read replica instance in a different region
  • B. Create a failover replica instance in a different region
  • C. Create a read replica instance in the same region, but in a different zone
  • D. Create a failover replica instance in the same region, but in a different zone

Answer: B

Explanation:
https://cloud.google.com/sql/docs/mysql/high-availability
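For the legacy MySQL high-availability setup this question describes, the failover replica was created explicitly; the command below is a sketch with hypothetical instance names (newer Cloud SQL instances instead enable HA at creation time with --availability-type=REGIONAL):

```shell
# Failover replica in the same region as the primary, different zone.
gcloud sql instances create mydb-failover \
    --master-instance-name=mydb \
    --replica-type=FAILOVER \
    --gce-zone=us-central1-b
```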

NEW QUESTION 12

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?

  • A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
  • B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine.
  • C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux.
  • D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk.
  • E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.

Answer: A

Explanation:
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.
sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER]
where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system.
References: https://cloud.google.com/compute/docs/disks/add-persistent-disk

NEW QUESTION 13

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?

  • A. Work with your ISP to diagnose the problem.
  • B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application.
  • C. Roll back to an earlier known good release initially, then use Stackdriver Trace and logging to diagnose the problem in a development/test/staging environment.
  • D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and logging to diagnose the problem.

Answer: C

Explanation:
Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). Our API also allows ingestion of any custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs. Even better, you can analyze all that log data in real time.
References: https://cloud.google.com/logging/
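The rollback itself is a traffic-routing operation on App Engine; assuming a previously deployed known-good version named v1 (hypothetical), it requires no redeployment:

```shell
# Route 100% of traffic for the default service back to version v1.
gcloud app services set-traffic default --splits=v1=1
```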

NEW QUESTION 14

A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases. What should you do?

  • A. Set timeouts on your application so that you can fail requests faster.
  • B. Send custom metrics for each of your requests to Stackdriver Monitoring.
  • C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high.
  • D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice.

Answer: D

Explanation:
https://cloud.google.com/trace/docs/overview
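Conceptually, Stackdriver Trace attributes a request's latency to spans, one per service hop, so the slowest hop stands out. A minimal, library-free sketch of that idea (service names and timings are made up, and real instrumentation would use the Trace client library):

```python
import time

spans = []  # (service_name, elapsed_seconds) recorded per hop

def traced(name, fn, *args):
    """Run fn and record how long this 'service' took, like a trace span."""
    start = time.perf_counter()
    result = fn(*args)
    spans.append((name, time.perf_counter() - start))
    return result

def auth(x):      time.sleep(0.01); return x
def inventory(x): time.sleep(0.05); return x   # the slow hop
def billing(x):   time.sleep(0.01); return x

# One request traversing three services.
traced("auth", auth, "req")
traced("inventory", inventory, "req")
traced("billing", billing, "req")

slowest = max(spans, key=lambda s: s[1])
print(slowest[0])  # inventory
```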

NEW QUESTION 15

You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network. How should you deploy the VPN?

  • A. Use VPC Network Peering between the VPC and the on-premises network.
  • B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
  • C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
  • D. Deploy a Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.

Answer: C

Explanation:
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns

NEW QUESTION 16

As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication. What should they do?

  • A. Configure their replication to use UDP.
  • B. Configure a Google Cloud Dedicated Interconnect.
  • C. Restore their database daily using Google Cloud SQL.
  • D. Add additional VPN connections and load balance them.
  • E. Send the replicated transaction to Google Cloud Pub/Sub.

Answer: B

NEW QUESTION 17

One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
[Exhibit: Dockerfile not shown]
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? Choose 2 answers.

  • A. Remove Python after running pip.
  • B. Remove dependencies from requirements.txt.
  • C. Use a slimmed-down base image like Alpine linux.
  • D. Use larger machine types for your Google Container Engine node pools.
  • E. Copy the source after the package dependencies (Python and pip) are installed.

Answer: CE

Explanation:
The speed of deployment can be changed by limiting the size of the uploaded app, limiting the complexity of the build necessary in the Dockerfile, if present, and by ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
References: https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU https://www.alpinelinux.org/about/
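A Dockerfile reflecting both chosen answers might look like the sketch below (assuming a Python app; file and command names are hypothetical): a slim Alpine-based image (C), with the frequently-changing source copied after the dependency layer so Docker's build cache is reused across code-only changes (E):

```dockerfile
# C: slim base image instead of a full distribution.
FROM python:3-alpine

WORKDIR /app

# E: install dependencies first; this layer stays cached until
# requirements.txt itself changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source code changes often, so copy it last.
COPY . .

CMD ["python", "main.py"]
```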

NEW QUESTION 18

Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths and ports you authorize, but you don’t want to rely on static IP addresses or subnets because the app can autoscale. How should you restrict communications?

  • A. Use separate VPCs to restrict traffic
  • B. Use firewall rules based on network tags attached to the compute instances
  • C. Use Cloud DNS and only allow connections from authorized hostnames
  • D. Use service accounts and configure the web application so that only particular service accounts have access

Answer: B
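Option B in practice: firewall rules select instances by network tag, so the rule keeps working as autoscaling adds and removes VMs. A sketch with hypothetical network and tag names:

```shell
# Allow only web-frontend instances to reach port 8080 on api-backend
# instances; no IP ranges or subnets are hard-coded.
gcloud compute firewall-rules create allow-frontend-to-api \
    --network=my-vpc \
    --allow=tcp:8080 \
    --source-tags=web-frontend \
    --target-tags=api-backend
```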

NEW QUESTION 19
......

Recommended!! Get the full Professional-Cloud-Architect dumps in VCE and PDF from Dumps-hub.com. Welcome to download: https://www.dumps-hub.com/Professional-Cloud-Architect-dumps.html (New 170 Q&As Version)