All that matters here is passing the Confluent CCDAK exam. All you need is a high score on the CCDAK Confluent Certified Developer for Apache Kafka certification exam. The only thing you need to do is download the Certleader CCDAK exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Confluent CCDAK Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
What is a generic unique id that I can use for messages I receive from a consumer?

  • A. topic + partition + timestamp
  • B. topic + partition + offset
  • C. topic + timestamp

Answer: B

Explanation:
(Topic,Partition,Offset) uniquely identifies a message in Kafka
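
For example, such an ID can be derived straight from the record metadata; a minimal sketch in Java (the class name and formatting are illustrative, not a standard API):

import org.apache.kafka.clients.consumer.ConsumerRecord;

public final class MessageIds {
    // topic + partition + offset is guaranteed unique per record in Kafka
    public static String uniqueId(ConsumerRecord<?, ?> record) {
        return record.topic() + "-" + record.partition() + "-" + record.offset();
    }
}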

NEW QUESTION 2
In Avro, adding an element to an enum without a default is a schema evolution

  • A. breaking
  • B. full
  • C. backward
  • D. forward

Answer: A

Explanation:
Since Confluent Platform 5.4.0, Avro 1.9.1 is used. Because a default value was added to the enum complex type, the schema resolution rules changed:
(< 1.9.1) if both are enums: if the writer's symbol is not present in the reader's enum, then an error is signalled.
(>= 1.9.1) if both are enums: if the writer's symbol is not present in the reader's enum and the reader has a default value, then that value is used; otherwise an error is signalled.
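
For illustration, an enum with a reader-side default might look like this (the type name and symbols are made up for the example):

{
  "type": "enum",
  "name": "OrderStatus",
  "symbols": ["NEW", "SHIPPED", "DELIVERED"],
  "default": "NEW"
}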

NEW QUESTION 3
A Zookeeper ensemble contains 5 servers. What is the maximum number of servers that can go missing and the ensemble still run?

  • A. 3
  • B. 4
  • C. 2
  • D. 1

Answer: C

Explanation:
A majority (quorum) for a 5-node Zookeeper ensemble is 3 nodes, so 2 can fail.
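
In general, an ensemble of N servers needs a quorum of floor(N/2) + 1 to keep running, so it tolerates N - (floor(N/2) + 1) failures. For N = 5: quorum = floor(5/2) + 1 = 3, so at most 5 - 3 = 2 servers can be down.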

NEW QUESTION 4
A consumer starts and has auto.offset.reset=none, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 10 for the topic before. Where will the consumer read from?

  • A. offset 45
  • B. offset 10
  • C. it will crash
  • D. offset 2311

Answer: C

Explanation:
auto.offset.reset=none means that the consumer will crash if the offsets it's recovering from have been deleted from Kafka, which is the case here, as 10 < 45
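
A minimal consumer configuration sketch showing the setting in question; the broker address and group id are placeholders:

bootstrap.servers=localhost:9092
group.id=orders-app
# none = throw an exception instead of resetting when the committed offset is invalid
auto.offset.reset=none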

NEW QUESTION 5
What is the default port that the KSQL server listens on?

  • A. 9092
  • B. 8088
  • C. 8083
  • D. 2181

Answer: B

Explanation:
Default port of KSQL server is 8088
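
The port can be changed through the server's listeners property; a one-line sketch of ksql-server.properties (the bind address is a placeholder):

listeners=http://0.0.0.0:8088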

NEW QUESTION 6
A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 1. What is the maximum number of brokers that can be down so that a producer with acks=all can still produce to the topic?

  • A. 3
  • B. 2
  • C. 1

Answer: B

Explanation:
Two brokers can go down, and one replica will still be able to receive and serve data

NEW QUESTION 7
If I want to have extremely high confidence that leaders and replicas have my data, I should use

  • A. acks=all, replication factor=2, min.insync.replicas=1
  • B. acks=1, replication factor=3, min.insync.replicas=2
  • C. acks=all, replication factor=3, min.insync.replicas=2
  • D. acks=all, replication factor=3, min.insync.replicas=1

Answer: C

Explanation:
acks=all means the leader will wait for all in-sync replicas to acknowledge the record. The min.insync.replicas setting specifies the minimum number of replicas that must be in-sync for the partition to remain available for writes.
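
A sketch of how these settings are applied in practice; the topic name, partition count, and broker address are placeholders:

# producer side
acks=all

# topic created with matching durability settings
kafka-topics.sh --bootstrap-server localhost:9092 --create --topic payments \
  --partitions 3 --replication-factor 3 --config min.insync.replicas=2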

NEW QUESTION 8
When using plain JSON data with Connect, you see the following error message: org.apache.kafka.connect.errors.DataException: JsonDeserializer with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. How will you fix the error?

  • A. Set key.converter, value.converter to JsonConverter and the schema registry url
  • B. Use Single Message Transforms to add schema and payload fields in the message
  • C. Set key.converter.schemas.enable and value.converter.schemas.enable to false
  • D. Set key.converter, value.converter to AvroConverter and the schema registry url

Answer: C

Explanation:
You need to set the converter's schemas.enable parameters to false when handling plain JSON that carries no embedded schema, as shown below.
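
A sketch of the corresponding Connect worker settings (the JsonConverter class is the one shipped with Kafka):

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false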

NEW QUESTION 9
Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?

  • A. After cleanup, only one message per key is retained with the first value
  • B. Each message stored in the topic is compressed
  • C. Kafka automatically de-duplicates incoming messages based on key hashes
  • D. After cleanup, only one message per key is retained with the latest value
  • E. Compaction changes the offset of messages

Answer: D

Explanation:
Log compaction retains at least the last known value for each record key within a single topic partition. All offsets remain valid even if the record at an offset has been compacted away: a consumer will simply get the next highest offset.
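
For reference, compaction can also be enabled per topic with the stock tooling; the topic name and broker address below are placeholders:

kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name user-profiles \
  --add-config cleanup.policy=compact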

NEW QUESTION 10
What exceptions may be caught by the following producer? (select two)

ProducerRecord<String, String> record =
    new ProducerRecord<>("topic1", "key1", "value1");
try {
    producer.send(record);
} catch (Exception e) {
    e.printStackTrace();
}

  • A. BrokerNotAvailableException
  • B. SerializationException
  • C. InvalidPartitionsException
  • D. BufferExhaustedException

Answer: BD

Explanation:
These are client-side exceptions that may be thrown before the message is sent to the broker, i.e. before a Future is returned by the send() method.

NEW QUESTION 11
I am producing Avro data on my Kafka cluster that is integrated with the Confluent Schema Registry. After a schema change that is incompatible, I know my data will be rejected. Which component will reject the data?

  • A. The Confluent Schema Registry
  • B. The Kafka Broker
  • C. The Kafka Producer itself
  • D. Zookeeper

Answer: A

Explanation:
The Confluent Schema Registry is your safeguard against incompatible schema changes and is the component that ensures no breaking schema evolution is possible. Kafka brokers do not inspect your payload or its schema, and therefore will not reject the data.

NEW QUESTION 12
If I supply the setting compression.type=snappy to my producer, what will happen? (select two)

  • A. The Kafka brokers have to de-compress the data
  • B. The Kafka brokers have to compress the data
  • C. The Consumers have to de-compress the data
  • D. The Consumers have to compress the data
  • E. The Producers have to compress the data

Answer: CE

Explanation:
Kafka transfers data with zero copy and no transformation. Any transformation (including compression) is the responsibility of clients.
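
A producer configuration sketch; the broker address is a placeholder:

bootstrap.servers=localhost:9092
compression.type=snappy

The producer compresses batches before sending; with the broker-side default compression.type=producer, brokers store the batches as-is, and consumers decompress on read.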

NEW QUESTION 13
Once sent to a topic, a message can be modified

  • A. No
  • B. Yes

Answer: A

Explanation:
Kafka logs are append-only and the data is immutable

NEW QUESTION 14
In Avro, removing or adding a field that has a default is a schema evolution

  • A. full
  • B. backward
  • C. breaking
  • D. forward

Answer: A

Explanation:
Clients with new schema will be able to read records saved with old schema and clients with old schema will be able to read records saved with new schema.

NEW QUESTION 15
You are receiving orders from different customers in an "orders" topic with multiple partitions. Each message has the customer name as the key. There is a special customer named ABC that generates a lot of orders, and you would like to reserve a partition exclusively for ABC. The rest of the messages should be distributed among the other partitions. How can this be achieved?

  • A. Add metadata to the producer record
  • B. Create a custom partitioner
  • C. All messages with the same key will go to the same partition, but the same partition may have messages with different keys
  • D. It is not possible to reserve
  • E. Define a Kafka Broker routing rule

Answer: B

Explanation:
A custom partitioner lets you fully control how the partition number is computed from a source message, as sketched below.
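
A minimal sketch of such a partitioner, assuming partition 0 is the one reserved for ABC (the class name and the reserved partition number are illustrative):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class AbcPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if ("ABC".equals(key)) {
            return 0; // partition reserved for the high-volume customer
        }
        // spread every other key over the remaining partitions 1..N-1
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

The producer is then pointed at it via the partitioner.class setting (ProducerConfig.PARTITIONER_CLASS_CONFIG in code).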

NEW QUESTION 16
Using the Confluent Schema Registry, where are Avro schemas stored?

  • A. In the Schema Registry embedded SQL database
  • B. In the Zookeeper node /schemas
  • C. In the message bytes themselves
  • D. In the _schemas topic

Answer: D

Explanation:
The Schema Registry stores all the schemas in the _schemas Kafka topic

NEW QUESTION 17
You have a Kafka cluster and all the topics have a replication factor of 3. One intern at your company stopped a broker, and accidentally deleted all the data of that broker on the disk. What will happen if the broker is restarted?

  • A. The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
  • B. The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
  • C. The broker will crash
  • D. The broker will start and won't have any data; if the broker becomes leader, we have data loss

Answer: B

Explanation:
Kafka's replication mechanism makes it resilient to scenarios where a broker loses its data on disk: the broker recovers by replicating the data back from the other brokers. This makes Kafka amazing!

NEW QUESTION 18
A Zookeeper ensemble contains 3 servers. Over which ports should the members of the ensemble be able to communicate in the default configuration? (select three)

  • A. 2181
  • B. 3888
  • C. 443
  • D. 2888
  • E. 9092
  • F. 80

Answer: ABD

Explanation:
2181 - client port, 2888 - peer (quorum) port, 3888 - leader election port
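
A sketch of the matching zoo.cfg entries for a 3-server ensemble; the hostnames are placeholders:

clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888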

NEW QUESTION 19
A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=1 can't produce?

  • A. 3
  • B. 1
  • C. 2

Answer: C

Explanation:
min.insync.replicas does not impact producers when acks=1 (only when acks=all)

NEW QUESTION 20
......

P.S. Easily pass the CCDAK exam with the 150 Q&As in the Surepassexam dumps & PDF version. Welcome to download the newest Surepassexam CCDAK dumps: https://www.surepassexam.com/CCDAK-exam-dumps.html (150 New Questions)