We provide real Oracle Database 12c: Installation and Administration (1Z0-062) exam questions and answers in two formats: downloadable PDF braindumps and practice tests (VCE). The PDF version can be read and printed, so you can practice as many times as you like. With the help of our Oracle 1Z0-062 PDF and VCE materials, you can pass the 1Z0-062 exam quickly and easily.

Q1. What are two benefits of installing Grid Infrastructure software for a stand-alone server before installing and creating an Oracle database? 

A. Effectively implements role separation 

B. Enables you to take advantage of Oracle Managed Files. 

C. Automatically registers the database with Oracle Restart. 

D. Helps you to easily upgrade the database from a prior release. 

E. Enables the installation of Grid Infrastructure files on block or raw devices. 

Answer: A,C 

Explanation: C: To use Oracle ASM or Oracle Restart, you must first install Oracle Grid Infrastructure for a standalone server before you install and create the database. Otherwise, you must manually register the database with Oracle Restart. 

Desupport of Block and Raw Devices: With the release of Oracle Database 11g release 2 (11.2) and Oracle RAC 11g release 2 (11.2), using Database Configuration Assistant or the installer to store Oracle Clusterware or Oracle Database files directly on block or raw devices is not supported. If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you can use an existing raw or block device partition, and perform a rolling upgrade of your existing installation. Performing a new installation using block or raw devices is not allowed. 

Reference: Oracle Database Installation Guide 12c, Oracle Grid Infrastructure for a Standalone Server 
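
For context, if the database was created before Grid Infrastructure for a standalone server was installed, it can still be registered with Oracle Restart manually. A minimal sketch using srvctl follows; the database name and Oracle home path are illustrative:

# register an existing database with Oracle Restart (name and path are examples)
$ srvctl add database -db orcl -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1

# Oracle Restart can then start, stop, and monitor the instance
$ srvctl start database -db orcl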


Q2. Examine the query and its output: 

SQL> SELECT REASON, metric_value FROM dba_outstanding_alerts; 

REASON                                                         METRIC_VALUE
-------------------------------------------------------------- ------------
Tablespace [TEST] is [28 percent] full                                28.125
Metrics "Current Logons Count"                                            29
Metrics "Database Time Spent waiting (%)" is at 99.03754
for event class "Application"                                     99.0375405
db_recovery_file_dest_size of 4294967296 bytes is 97.29% used
and has 116228096 remaining bytes available.                          97.298

After 30 minutes, you execute the same query: 

SQL> SELECT reason, metric_value FROM dba_outstanding_alerts; 

REASON                                                         METRIC_VALUE
-------------------------------------------------------------- ------------
Tablespace [TEST] is [28 percent] full                                28.125

What might have caused three of the alerts to disappear? 

A. The threshold alerts were cleared and transferred to DBA_ALERT_HISTORY. 

B. An Automatic Workload Repository (AWR) snapshot was taken before the execution of the second query. 

C. An Automatic Database Diagnostic Monitor (ADDM) report was generated before the execution of the second query. 

D. The database instance was restarted before the execution of the second query. 

Answer:
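
For context: when a threshold alert condition clears, the alert is removed from DBA_OUTSTANDING_ALERTS and recorded in DBA_ALERT_HISTORY. A minimal query to review cleared alerts, kept to the columns shared with DBA_OUTSTANDING_ALERTS:

SQL> SELECT reason, metric_value FROM dba_alert_history;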


Q3. Examine the following query output: 

You issue the following command to import tables into the hr schema: 

$ > impdp hr/hr directory = dumpdir dumpfile = hr_new.dmp schemas=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING: Y 

Which statement is true? 

A. All database operations performed by the impdp command are logged. 

B. Only CREATE INDEX and CREATE TABLE statements generated by the import are logged. 

C. Only CREATE TABLE and ALTER TABLE statements generated by the import are logged. 

D. None of the operations against the master table used by Oracle Data Pump to coordinate its activities are logged. 

Answer:

Explanation: Oracle Data Pump disables redo logging when loading data into tables and when creating indexes. The TRANSFORM option introduced in Data Pump import provides the flexibility to turn off redo generation for the objects during the course of the import. The master table is used to track the detailed progress information of a Data Pump job. The master table is created in the schema of the current user running the Data Pump export or import, and it keeps track of detailed information about the job. 
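
For reference, the DISABLE_ARCHIVE_LOGGING transform can also be limited to a single object type; a sketch of both forms is shown below (the directory, dump file, and schema names are illustrative). Note that if the database is in FORCE LOGGING mode, the transform is ignored and redo is generated as usual.

$ impdp hr/hr directory=dumpdir dumpfile=hr_new.dmp schemas=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

$ impdp hr/hr directory=dumpdir dumpfile=hr_new.dmp schemas=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX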


Q4. You created a new database using the "create database" statement without specifying the "ENABLE PLUGGABLE DATABASE" clause. 

What are two effects of not using the "ENABLE PLUGGABLE DATABASE" clause? 

A. The database is created as a non-CDB and can never contain a PDB. 

B. The database is treated as a PDB and must be plugged into an existing multitenant container database (CDB). 

C. The database is created as a non-CDB and can never be plugged into a CDB. 

D. The database is created as a non-CDB but can be plugged into an existing CDB. 

E. The database is created as a non-CDB but will become a CDB whenever the first PDB is plugged in. 

Answer: A,D 

Explanation: A (not B,not E): The CREATE DATABASE ... ENABLE PLUGGABLE DATABASE SQL statement creates a new CDB. If you do not specify the ENABLE PLUGGABLE DATABASE clause, then the newly created database is a non-CDB and can never contain PDBs. 

D: You can create a PDB by plugging in a non-CDB as a PDB. 


Incorrect: 

Not E: For the duration of its existence, a database is either a CDB or a non-CDB. You cannot transform a non-CDB into a CDB or vice versa. You must define a database as a CDB at creation, and then create PDBs within this CDB. 
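
A minimal sketch of plugging a non-CDB into an existing CDB as a PDB (the scenario in option D); the file path and PDB name are illustrative:

-- in the non-CDB, opened read-only, generate an XML description of it
SQL> EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb.xml');

-- in the target CDB, plug the non-CDB in as a new PDB
SQL> CREATE PLUGGABLE DATABASE ncdb_pdb USING '/tmp/ncdb.xml' COPY;

-- then run $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql inside the new PDB and open it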


Q5. Your database is open and the listener LISTENER is up. You issue the command: 

LSNRCTL> RELOAD 

What is the effect of the reload on sessions that were originally established by the listener? 

A. Only sessions based on static listener registrations are disconnected. 

B. Existing connections are not disconnected; however, they cannot perform any operations until the listener completes the re-registration of the database instance and service handlers. 

C. The sessions are not affected and continue to function normally. 

D. All the sessions are terminated and active transactions are rolled back. 

Answer:
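
For context, a reload only makes the listener re-read its configuration and re-register services; sessions that are already established communicate directly with their server processes and stay connected. The service handlers can be compared before and after with standard LSNRCTL commands:

LSNRCTL> RELOAD

LSNRCTL> SERVICES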


Q6. Which three statements are true about the working of system privileges in a multitenant container database (CDB) that has pluggable databases (PDBs)? 

A. System privileges apply only to the PDB in which they are used. 

B. Local users cannot use local system privileges on the schema of a common user. 

C. The grantor of system privileges must possess the SET CONTAINER privilege. 

D. Common users connected to a PDB can exercise privileges across other PDBs. 

E. System privileges with the WITH GRANT OPTION CONTAINER = ALL clause must be granted to a common user before the common user can grant privileges to other users. 

Answer: A,C,E 

Explanation: A, Not D: In a CDB, PUBLIC is a common role. In a PDB, privileges granted locally to PUBLIC enable all local and common users to exercise these privileges in this PDB only. 

C: A user can only perform common operations on a common role, for example, granting privileges commonly to the role, when the following criteria are met: 

The user is a common user whose current container is root. 

The user has the SET CONTAINER privilege granted commonly, which means that the privilege applies in all containers. 

The user has a privilege controlling the ability to perform the specified operation, and this privilege has been granted commonly. 

Note: 

* Every privilege and role granted to Oracle-supplied users and roles is granted commonly, except for system privileges granted to PUBLIC, which are granted locally. 
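
A short sketch of granting system privileges commonly, as described above; the common user name is illustrative (common user names begin with C##):

-- connected to the root container (CDB$ROOT) as a suitably privileged common user
SQL> GRANT CREATE SESSION TO c##app_admin CONTAINER = ALL;

SQL> GRANT SELECT ANY TABLE TO c##app_admin CONTAINER = ALL;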


Q7. The ORCL database is configured to support shared server mode. You want to ensure that a user connecting remotely to the database instance has a one-to-one ratio between client and server processes. 

Which connection method guarantees that this requirement is met? 

A. connecting by using an external naming method 

B. connecting by using the easy connect method 

C. creating a service in the database by using the dbms_service.create_service procedure and using this service for creating a local naming service 

D. connecting by using the local naming method with the server = dedicated parameter set in the tnsnames.ora file for the net service 

E. connecting by using a directory naming method 

Answer: C,E 
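
For reference, a dedicated server process can be requested per connection through the net service definition; a sketch of a tnsnames.ora entry using SERVER = DEDICATED follows (the host, port, and service name are illustrative):

ORCL_DED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl.example.com)
      (SERVER = DEDICATED)
    )
  )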


Q8. Examine the following command: 

ALTER SYSTEM SET enable_ddl_logging = TRUE; 

Which statement is true? 

A. Only the data definition language (DDL) commands that resulted in errors are logged in the alert log file. 

B. All DDL commands are logged in the alert log file. 

C. All DDL commands are logged in a different log file that contains DDL statements and their execution dates. 

D. Only DDL commands that resulted in the creation of new segments are logged. 

E. All DDL commands are logged in XML format in the alert directory under the Automatic Diagnostic Repository (ADR) home. 

Answer:

Explanation: Once DDL logging is turned on, every DDL command will be logged in the alert log file and also the log.xml file. 

Note: 

* By default Oracle database does not log any DDL operations performed by any user. The default settings for auditing only logs DML operations. 

* Oracle 12c DDL Logging – ENABLE_DDL_LOGGING 

The first method is to enable the DDL logging feature built into the database. By default it is turned off, and you can turn it on by setting the value of the ENABLE_DDL_LOGGING initialization parameter to true. 

* We can turn it on using the following command. The parameter is dynamic and you can turn it on/off on the go. 

SQL> alter system set ENABLE_DDL_LOGGING=true; 

System altered. Elapsed: 00:00:00.05 SQL> 

Once it is turned on, every DDL command will be logged in the alert log file and also the log.xml file. 


Q9. Examine the current value for the following parameters in your database instance: 

SGA_MAX_SIZE = 1024M 

SGA_TARGET = 700M 

DB_8K_CACHE_SIZE = 124M 

LOG_BUFFER = 200M 

You issue the following command to increase the value of DB_8K_CACHE_SIZE: 

SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE=140M; 

Which statement is true? 

A. It fails because the DB_8K_CACHE_SIZE parameter cannot be changed dynamically. 

B. It succeeds only if memory is available from the autotuned components of the SGA. 

C. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_TARGET. 

D. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_MAX_SIZE. 

Answer:

Explanation: * The SGA_TARGET parameter can be dynamically increased up to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced. 

* Example: 

For example, suppose you have an environment with the following configuration: 

SGA_MAX_SIZE = 1024M
SGA_TARGET = 512M
DB_8K_CACHE_SIZE = 128M

In this example, the value of SGA_TARGET can be resized up to 1024M and can also be reduced until one or more of the automatically sized components reaches its minimum size. The exact value depends on environmental factors such as the number of CPUs on the system. However, the value of DB_8K_CACHE_SIZE remains fixed at all times at 128M. 

* DB_8K_CACHE_SIZE: size of the cache for 8K buffers 

* For example, consider this configuration: 

SGA_TARGET = 512M
DB_8K_CACHE_SIZE = 128M

In this example, increasing DB_8K_CACHE_SIZE by 16M to 144M means that the 16M is taken away from the automatically sized components. Likewise, reducing DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the automatically sized components. 
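
Because DB_8K_CACHE_SIZE is a manually sized component, the extra 16M (124M to 140M) in this question has to be taken from the automatically tuned components within SGA_TARGET. The current sizing of the dynamic SGA components can be checked with a standard view:

SQL> SELECT component, current_size/1024/1024 AS size_mb FROM v$sga_dynamic_components;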


Q10. In your production database, data manipulation language (DML) operations are executed on the SALES table. 

You have noticed some dubious values in the SALES table during the last few days. You are able to track users, actions taken, and the time of the action for this particular period, but the changes in data are not tracked. You decide to keep track of both the old data and new data in the table, along with the user information. 

What action would you take to achieve this task? 

A. Apply fine-grained auditing. 

B. Implement value-based auditing. 

C. Impose standard database auditing to audit object privileges. 

D. Impose standard database auditing to audit SQL statements. 

Answer:
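
Value-based auditing is typically implemented with a DML trigger that captures the old and new values along with the user and timestamp. A minimal sketch follows; the AMOUNT column and the SALES_AUDIT table are hypothetical:

CREATE TABLE sales_audit (
  changed_by  VARCHAR2(128),
  change_time TIMESTAMP,
  old_amount  NUMBER,
  new_amount  NUMBER
);

CREATE OR REPLACE TRIGGER trg_sales_value_audit
AFTER UPDATE OF amount ON sales
FOR EACH ROW
BEGIN
  -- record who changed the row, when, and the before/after values
  INSERT INTO sales_audit (changed_by, change_time, old_amount, new_amount)
  VALUES (USER, SYSTIMESTAMP, :OLD.amount, :NEW.amount);
END;
/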


Q11. You plan to implement the distributed database system in your company. You invoke Database Configuration Assistant (DBCA) to create a database on the server. During the installation, DBCA prompts you to specify the Global Database Name. 

What must this name be made up of? 

A. It must be made up of a database name and a domain name. 

B. It must be made up of the value in ORACLE_SID and HOSTNAME. 

C. It must be made up of the value that you plan to assign for INSTANCE_NAME and HOSTNAME. 

D. It must be made up of the value that you plan to assign for ORACLE_SID and SERVICE_NAMES. 

Answer:

Explanation: Using the DBCA to Create a Database (continued) 

3. Database Identification: Enter the Global Database Name in the form database_name.domain_name, and the system identifier (SID). The SID defaults to the database name and uniquely identifies the instance associated with the database. 

4. Management Options: Use this page to set up your database so that it can be managed with Oracle Enterprise Manager. Select the default: "Configure the Database with Enterprise Manager." Optionally, this page allows you to configure alert notifications and daily disk backup area settings. 

Note: You must configure the listener before you can configure Enterprise Manager (as shown earlier). 
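
The global database name is built from the DB_NAME and DB_DOMAIN initialization parameters; the current value can be checked as shown below (the example value in the comment is illustrative):

SQL> SELECT name, value FROM v$parameter WHERE name IN ('db_name', 'db_domain');

SQL> SELECT * FROM global_name;   -- e.g. ORCL.EXAMPLE.COM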


Q12. What is the effect of specifying the "ENABLE PLUGGABLE DATABASE" clause in a "CREATE DATABASE" statement? 

A. It will create a multitenant container database (CDB) with only the root opened. 

B. It will create a CDB with root opened and seed read only. 

C. It will create a CDB with root and seed opened and one PDB mounted. 

D. It will create a CDB that must be plugged into an existing CDB. 

E. It will create a CDB with root opened and seed mounted. 

Answer:

Explanation: * The CREATE DATABASE ... ENABLE PLUGGABLE DATABASE SQL statement creates a new CDB. If you do not specify the ENABLE PLUGGABLE DATABASE clause, then the newly created database is a non-CDB and can never contain PDBs. 

Along with the root (CDB$ROOT), Oracle Database automatically creates a seed PDB (PDB$SEED). 


* Creating a PDB: Rather than constructing the data dictionary tables that define an empty PDB from scratch, and then populating its Obj$ and Dependency$ tables, the empty PDB is created when the CDB is created. (Here, we use empty to mean containing no customer-created artifacts.) It is referred to as the seed PDB and has the name PDB$SEED. Every CDB non-negotiably contains a seed PDB; it is non-negotiably always open in read-only mode. This has no conceptual significance; rather, it is just an optimization device. The create PDB operation is implemented as a special case of the clone PDB operation. 
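
After CREATE DATABASE ... ENABLE PLUGGABLE DATABASE completes, the open modes of the root and the seed can be verified from the root container with standard views:

SQL> SELECT open_mode FROM v$database;   -- the root (CDB$ROOT)

SQL> SELECT name, open_mode FROM v$pdbs; -- PDB$SEED is reported as READ ONLY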


Q13. Which three statements are true about Automatic Workload Repository (AWR)? 

A. All AWR tables belong to the SYSTEM schema. 

B. The AWR data is stored in memory and in the database. 

C. The snapshots collected by AWR are used by the self-tuning components in the database 

D. AWR computes time model statistics based on time usage for activities, which are displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views. 

E. AWR contains system wide tracing and logging information. 

Answer: B,C,E 

Explanation: * A fundamental aspect of the workload repository is that it collects and persists database performance data in a manner that enables historical performance analysis. The mechanism for this is the AWR snapshot. On a periodic basis, AWR takes a “snapshot” of the current statistic values stored in the database instance’s memory and persists them to its tables residing in the SYSAUX tablespace. 

* AWR is primarily designed to provide input to higher-level components such as automatic tuning algorithms and advisors, but can also provide a wealth of information for the manual tuning process. 
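
AWR snapshots can also be taken and reviewed manually; a short sketch using the standard package and history view:

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

SQL> SELECT snap_id, begin_interval_time FROM dba_hist_snapshot ORDER BY snap_id;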


Q14. Which two statements are true about the Oracle Direct Network File system (DNFS)? 

A. It utilizes the OS file system cache. 

B. A traditional NFS mount is not required when using Direct NFS. 

C. Oracle Disk Manager can manage NFS on its own, without using the operating system kernel NFS driver. 

D. Direct NFS is available only in UNIX platforms. 

E. Direct NFS can load-balance I/O traffic across multiple network adapters. 

Answer: C,E 

Explanation: E: Performance is improved by load balancing across multiple network interfaces (if available). 

Note: 

* To enable Direct NFS Client, you must replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS Client. 

Incorrect: 

Not A: Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks. 

Not B: 

* To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts. 

* Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP). 

Not D: Direct NFS is provided as part of the database kernel, and is thus available on all supported database platforms - even those that don't support NFS natively, like Windows. 

Note: 

* Direct NFS is built directly into the database kernel - just like ASM, which is mainly used when using DAS or SAN storage. 

* Oracle Direct NFS (dNFS) is an internal I/O layer that provides faster access to large NFS files than traditional NFS clients. 
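
For reference, Direct NFS Client is configured through an oranfstab file (typically in $ORACLE_HOME/dbs or /etc) and enabled by relinking the ODM NFS library; multiple path entries are what allow I/O to be load-balanced across network interfaces. The server name, addresses, and paths below are illustrative:

server: nas01
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.2.2
path: 192.0.2.3
export: /vol/oradata1 mount: /u02/oradata

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on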


Q15. Flashback is enabled for your multitenant container database (CDB), which contains two pluggable databases (PDBs). A local user was accidentally dropped from one of the PDBs. 

You want to flash back the PDB to the time before the local user was dropped. You connect to the CDB and execute the following commands: 

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP MOUNT

SQL> FLASHBACK DATABASE TO TIME "TO_DATE('08/20/12', 'MM/DD/YY')"; 

Examine following commands: 

1. ALTER PLUGGABLE DATABASE ALL OPEN; 

2. ALTER DATABASE OPEN; 

3. ALTER DATABASE OPEN RESETLOGS; 

Which command or commands should you execute next to allow updates to the flashed back schema? 

A. Only 1 

B. Only 2 

C. Only 3 

D. 3 and 1 

E. 1 and 2 

Answer:

Explanation: Example (see step23): 

Step 1: 

Run the RMAN FLASHBACK DATABASE command. 

You can specify the target time by using a form of the command shown in the following examples: 

FLASHBACK DATABASE TO SCN 46963; 

FLASHBACK DATABASE TO RESTORE POINT BEFORE_CHANGES; 

FLASHBACK DATABASE TO TIME "TO_DATE('09/20/05','MM/DD/YY')"; 

When the FLASHBACK DATABASE command completes, the database is left mounted and recovered to the specified target time. 

Step 2: 

Make the database available for updates by opening the database with the RESETLOGS option. If the database is currently open read-only, then execute the following commands in SQL*Plus: 

SHUTDOWN IMMEDIATE 

STARTUP MOUNT 

ALTER DATABASE OPEN RESETLOGS; 
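
Note that in a CDB the pluggable databases are not opened by opening the root with RESETLOGS; after the flashback they remain mounted until opened explicitly, for example:

SQL> ALTER DATABASE OPEN RESETLOGS;

SQL> ALTER PLUGGABLE DATABASE ALL OPEN;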


Q16. To enable the Database Smart Flash Cache, you configure the following parameters: 

DB_FLASH_CACHE_FILE = ‘/dev/flash_device_1’ , ‘/dev/flash_device_2’ 

DB_FLASH_CACHE_SIZE=64G 

What is the result when you start up the database instance? 

A. It results in an error because these parameter settings are invalid. 

B. One 64G flash cache file will be used. 

C. Two 64G flash cache files will be used. 

D. Two 32G flash cache files will be used. 

Answer:
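
For reference, when DB_FLASH_CACHE_FILE lists more than one device, Database Smart Flash Cache expects a corresponding size for each file in DB_FLASH_CACHE_SIZE. A sketch of a two-device configuration (device names and sizes are illustrative):

DB_FLASH_CACHE_FILE = /dev/flash_device_1, /dev/flash_device_2
DB_FLASH_CACHE_SIZE = 32G, 32G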