Top Tips for the Newest CCDAK Free Dumps

2024 Confluent Official New Released CCDAK
https://www.certleader.com/CCDAK-dumps.html


Proper study for the Confluent Certified Developer for Apache Kafka (CCDAK) certification exam begins with Confluent CCDAK preparation products, which are designed to deliver practice CCDAK questions and help you pass the CCDAK test on your first attempt. Try the free CCDAK demo right now.

Free CCDAK Demo Online For Confluent Certification:

NEW QUESTION 1
Select all the ways for one consumer to subscribe simultaneously to the following topics: topic.history, topic.sports, topic.politics. (select two)

  • A. consumer.subscribe(Pattern.compile("topic\..*"));
  • B. consumer.subscribe("topic.history"); consumer.subscribe("topic.sports"); consumer.subscribe("topic.politics");
  • C. consumer.subscribePrefix("topic.");
  • D. consumer.subscribe(Arrays.asList("topic.history", "topic.sports", "topic.politics"));

Answer: AD

Explanation:
Multiple topics can be passed to subscribe() either as a list of topic names or as a regex Pattern.
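A minimal, self-contained sketch of how the regex form matches topic names (the KafkaConsumer subscribe() calls themselves require a running broker, so only the Pattern is exercised here):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class SubscribePatternDemo {
    public static void main(String[] args) {
        // The same regex a consumer would pass to consumer.subscribe(Pattern)
        Pattern topicPattern = Pattern.compile("topic\\..*");
        List<String> topics = Arrays.asList(
            "topic.history", "topic.sports", "topic.politics", "other.topic");
        for (String t : topics) {
            // Only names starting with "topic." match the pattern
            System.out.println(t + " -> " + topicPattern.matcher(t).matches());
        }
        // prints true for the three "topic.*" topics, false for "other.topic"
    }
}
```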

NEW QUESTION 2
What is the default port that the KSQL server listens on?

  • A. 9092
  • B. 8088
  • C. 8083
  • D. 2181

Answer: B

Explanation:
The default port of the KSQL server is 8088.

NEW QUESTION 3
When using plain JSON data with Connect, you see the following error message: org.apache.kafka.connect.errors.DataException: JsonDeserializer with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. How will you fix the error?

  • A. Set key.converter, value.converter to JsonConverter and the schema registry url
  • B. Use Single Message Transforms to add schema and payload fields in the message
  • C. Set key.converter.schemas.enable and value.converter.schemas.enable to false
  • D. Set key.converter, value.converter to AvroConverter and the schema registry url

Answer: C

Explanation:
You will need to set the schemas.enable parameter of the converters to false for plain JSON data with no embedded schema.
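As a sketch, in a Connect worker configuration this would look like the following (the rest of the worker config is elided):

```properties
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Disable the schema/payload envelope requirement for plain JSON
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```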

NEW QUESTION 4
A consumer has auto.offset.reset=latest, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group never committed offsets for the topic before. Where will the consumer read from?

  • A. offset 2311
  • B. offset 0
  • C. offset 45
  • D. it will crash

Answer: A

Explanation:
auto.offset.reset=latest means that, with no committed offsets, reads will start from the current end of the partition (here, offset 2311).

NEW QUESTION 5
A client connects to a broker in the cluster and sends a fetch request for a partition in a topic. It gets a NotLeaderForPartitionException in the response. How does the client handle this situation?

  • A. Get the Broker id from Zookeeper that is hosting the leader replica and send request to it
  • B. Send metadata request to the same broker for the topic and select the broker hosting the leader replica
  • C. Send metadata request to Zookeeper for the topic and select the broker hosting the leader replica
  • D. Send fetch request to each Broker in the cluster

Answer: B

Explanation:
If the client has stale leader information for a partition, it will issue a metadata request. Metadata requests can be handled by any broker, so afterwards the client knows which broker is the designated leader for each topic partition. Produce and fetch requests can only be sent to the broker hosting the partition leader.

NEW QUESTION 6
There are two consumers C1 and C2 belonging to the same group G subscribed to topics T1 and T2. Each of the topics has 3 partitions. How will the partitions be assigned to consumers with Partition Assigner being Round Robin Assigner?

  • A. C1 will be assigned partitions 0 and 2 from T1 and partition 1 from T2. C2 will have partition 1 from T1 and partitions 0 and 2 from T2.
  • B. Two consumers cannot read from two topics at the same time
  • C. C1 will be assigned partitions 0 and 1 from T1 and T2, C2 will be assigned partition 2 from T1 and T2.
  • D. All consumers will read from all partitions

Answer: A

Explanation:
The correct option is the only one where the two consumers share an equal number of partitions across the two topics of three partitions each. An interesting article to read is https://medium.com/@anyili0928/what-i-have-learned-from-kafka-partition-assignment-strategy-799fdf15d3ab
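The round-robin assignment can be simulated in plain Java: collect all partitions in (topic, partition) order and deal them out alternately, which is how RoundRobinAssignor distributes them (a simplified sketch; the real assignor also handles consumer ordering and per-consumer subscriptions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RoundRobinDemo {
    public static void main(String[] args) {
        // All partitions of both topics, in sorted (topic, partition) order
        List<String> partitions = Arrays.asList(
            "T1-0", "T1-1", "T1-2", "T2-0", "T2-1", "T2-2");
        List<String> consumers = Arrays.asList("C1", "C2");
        Map<String, List<String>> assignment = new LinkedHashMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        // Deal partitions out one by one, alternating between consumers
        int i = 0;
        for (String p : partitions) {
            assignment.get(consumers.get(i++ % consumers.size())).add(p);
        }
        System.out.println(assignment);
        // {C1=[T1-0, T1-2, T2-1], C2=[T1-1, T2-0, T2-2]}
    }
}
```

The result matches option A: C1 gets partitions 0 and 2 of T1 plus partition 1 of T2, and C2 gets the rest.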

NEW QUESTION 7
You are using JDBC source connector to copy data from 3 tables to three Kafka topics. There is one connector created with max.tasks equal to 2 deployed on a cluster of 3 workers. How many tasks are launched?

  • A. 2
  • B. 1
  • C. 3
  • D. 6

Answer: A

Explanation:
Here we have three tables, but max.tasks is 2, so 2 is the maximum number of tasks that will be launched.

NEW QUESTION 8
You are running a Kafka Streams application in a Docker container managed by Kubernetes, and upon application restart, it takes a long time for the container to replicate the state and resume processing data. How can you dramatically improve the application's restart time?

  • A. Mount a persistent volume for your RocksDB
  • B. Increase the number of partitions in your inputs topic
  • C. Reduce the Streams caching property
  • D. Increase the number of Streams threads

Answer: A

Explanation:
Although the state of a Kafka Streams application is backed up in Kafka (via changelog topics), recovering it from Kafka can take a while and consume a lot of resources. To speed up recovery, it is advised to store the Kafka Streams state on a persistent volume, so that only the missing part of the state needs to be recovered.
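A minimal sketch of what this could look like in Kubernetes, assuming the Streams state directory is the default /tmp/kafka-streams (configurable via the state.dir property); the volume name and storage size are hypothetical:

```yaml
# Excerpt of a StatefulSet spec: persist the RocksDB state directory
volumeMounts:
  - name: streams-state
    mountPath: /tmp/kafka-streams   # must match the Streams state.dir config
volumeClaimTemplates:
  - metadata:
      name: streams-state
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```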

NEW QUESTION 9
A consumer wants to read messages from a specific partition of a topic. How can this be achieved?

  • A. Call subscribe(String topic, int partition) passing the topic and partition number as the arguments
  • B. Call assign() passing a Collection of TopicPartitions as the argument
  • C. Call subscribe() passing TopicPartition as the argument

Answer: B

Explanation:
assign() can be used for manual assignment of partitions to a consumer, in which case subscribe() must not be used. assign() takes a collection of TopicPartition objects as an argument: https://kafka.apache.org/23/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#assign-java.util.Collection-

NEW QUESTION 10
How will you find out all the partitions where one or more of the replicas for the partition are not in-sync with the leader?

  • A. kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions
  • B. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
  • C. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
  • D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions

Answer: D

NEW QUESTION 11
The kafka-console-consumer CLI, when used with the default options

  • A. uses a random group id
  • B. always uses the same group id
  • C. does not use a group id

Answer: A

Explanation:
If a group id is not specified, kafka-console-consumer generates a random consumer group id.

NEW QUESTION 12
If I want to send binary data through the REST proxy, it needs to be base64 encoded. Which component needs to encode the binary data into base64?

  • A. The Producer
  • B. The Kafka Broker
  • C. Zookeeper
  • D. The REST Proxy

Answer: A

Explanation:
The REST Proxy requires the data it receives over REST to already be base64 encoded; hence encoding is the responsibility of the producer.

NEW QUESTION 13
If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable=true, what will happen?

  • A. Kafka will automatically create the topic with 1 partition and 1 replication factor
  • B. Kafka will automatically create the topic with the indicated producer settings num.partitions and default.replication.factor
  • C. Kafka will automatically create the topic with the broker settings num.partitions and default.replication.factor
  • D. Kafka will automatically create the topic with num.partitions=#of brokers and replication.factor=3

Answer: C

Explanation:
The broker settings num.partitions and default.replication.factor come into play when a topic is auto-created.
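For illustration, these are the broker settings that control auto-created topics, shown with their Kafka defaults:

```properties
auto.create.topics.enable=true
# Used when a topic is auto-created:
num.partitions=1
default.replication.factor=1
```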

NEW QUESTION 14
Which actions will trigger partition rebalance for a consumer group? (select three)

  • A. Increase partitions of a topic
  • B. Remove a broker from the cluster
  • C. Add a new consumer to consumer group
  • D. A consumer in a consumer group shuts down
  • E. Add a broker to the cluster

Answer: ACD

Explanation:
A rebalance occurs when a new consumer is added to the group, a consumer is removed or dies, or the number of partitions increases.

NEW QUESTION 15
You have a consumer group of 12 consumers and when a consumer gets killed by the process management system, rather abruptly, it does not trigger a graceful shutdown of your consumer. Therefore, it takes up to 10 seconds for a rebalance to happen. The business would like to have a 3 seconds rebalance time. What should you do? (select two)

  • A. Increase session.timeout.ms
  • B. Decrease session.timeout.ms
  • C. Increase heartbeat.interval.ms
  • D. decrease max.poll.interval.ms
  • E. increase max.poll.interval.ms
  • F. Decrease heartbeat.interval.ms

Answer: BF

Explanation:
session.timeout.ms must be decreased to 3 seconds to detect the dead consumer faster, and the heartbeat thread must send heartbeats more often, so heartbeat.interval.ms must also be decreased (it must stay lower than session.timeout.ms).
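A sketch of consumer settings for a roughly 3-second rebalance; the exact values are illustrative, and heartbeat.interval.ms is conventionally set to about a third of session.timeout.ms:

```properties
session.timeout.ms=3000
heartbeat.interval.ms=1000
```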

NEW QUESTION 16
Your streams application is reading from an input topic that has 5 partitions. You run 5 instances of your application, each with num.stream.threads set to 5. How many stream tasks will be created and how many will be active?

  • A. 5 created, 1 active
  • B. 5 created, 5 active
  • C. 25 created, 25 active
  • D. 25 created, 5 active

Answer: D

Explanation:
Each partition is assigned to one thread, so only 5 will be active; with 5 instances of 5 threads each, 25 threads (i.e. tasks) will be created.

NEW QUESTION 17
Is KSQL ANSI SQL compliant?

  • A. Yes
  • B. No

Answer: B

Explanation:
KSQL is not ANSI SQL compliant; for now there is no defined standard for streaming SQL languages.

NEW QUESTION 18
Using the Confluent Schema Registry, where are Avro schemas stored?

  • A. In the Schema Registry embedded SQL database
  • B. In the Zookeeper node /schemas
  • C. In the message bytes themselves
  • D. In the _schemas topic

Answer: D

Explanation:
The Schema Registry stores all the schemas in the _schemas Kafka topic

NEW QUESTION 19
What happens when broker.rack configuration is provided in broker configuration in Kafka cluster?

  • A. You can use the same broker.id as long as they have different broker.rack configuration
  • B. Replicas for a partition are placed in the same rack
  • C. Replicas for a partition are spread across different racks
  • D. Each rack contains all the topics and partitions, effectively making Kafka highly available

Answer: C

Explanation:
Partitions for newly created topics are assigned in a rack-alternating manner; this is the only change broker.rack makes.

NEW QUESTION 20
......

Thanks for reading the newest CCDAK exam dumps! We recommend you to try the PREMIUM Certleader CCDAK dumps in VCE and PDF here: https://www.certleader.com/CCDAK-dumps.html (150 Q&As Dumps)