All About Actual CCDAK Dumps
2024 Confluent Official New Released CCDAK
https://www.certleader.com/CCDAK-dumps.html
Master the CCDAK Confluent Certified Developer for Apache Kafka certification exam content and be ready for exam-day success with this Actualtests CCDAK exam prep. We guarantee it! We make it a reality and give you real CCDAK questions in our Confluent CCDAK braindumps. The latest 100% valid Confluent CCDAK exam question dumps are on the page below. You can use our Confluent CCDAK braindumps to pass your exam.
Confluent CCDAK Free Dumps Questions Online, Read and Test Now.
NEW QUESTION 1
Two consumers share the same group.id (consumer group id). Each consumer will
- A. Read mutually exclusive offsets blocks on all the partitions
- B. Read all the data on mutual exclusive partitions
- C. Read all data from all partitions
Answer: B
Explanation:
Each consumer is assigned a different partition of the topic to consume.
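A minimal Java consumer sketch illustrating this: run two copies with the same group.id and Kafka assigns each instance a disjoint set of partitions (the broker address, group id, and topic name below are hypothetical placeholders).

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("group.id", "my-group");                // same id on both consumers -> partitions are split
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));        // "orders" is a hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    // Each group member only ever sees partitions assigned to it
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}
```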
NEW QUESTION 2
To get acknowledgement of writes to only the leader partition, we need to use the config...
- A. acks=1
- B. acks=0
- C. acks=all
Answer: A
Explanation:
Producers can set acks=1 to get acknowledgement from partition leader only.
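A minimal producer sketch with leader-only acknowledgement (the broker address and topic name are hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LeaderAckProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("acks", "1");                           // wait for the partition leader only
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key", "value")); // "orders" is hypothetical
        }
    }
}
```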
NEW QUESTION 3
In Kafka, every broker... (select three)
- A. contains all the topics and all the partitions
- B. knows all the metadata for all topics and partitions
- C. is a controller
- D. knows the metadata for the topics and partitions it has on its disk
- E. is a bootstrap broker
- F. contains only a subset of the topics and the partitions
Answer: BEF
Explanation:
Kafka topics are divided into partitions and spread across brokers. Each broker knows the metadata for all topics and partitions, and each broker can serve as a bootstrap broker, but only one of them is elected controller.
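You can observe this with the AdminClient: any single broker works as the bootstrap, and the cluster reports exactly one controller (the address below is a hypothetical placeholder).

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class ClusterInfo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // any broker works as bootstrap (hypothetical address)
        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Brokers:    " + cluster.nodes().get());      // every broker appears here
            System.out.println("Controller: " + cluster.controller().get()); // exactly one elected controller
        }
    }
}
```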
NEW QUESTION 4
What is the protocol used by Kafka clients to securely connect to the Confluent REST Proxy?
- A. Kerberos
- B. SASL
- C. HTTPS (SSL/TLS)
- D. HTTP
Answer: C
Explanation:
Clients connect to the Confluent REST Proxy over HTTPS. The underlying protocol is TLS, but it is still commonly called SSL.
NEW QUESTION 5
A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 1. What is the maximum number of brokers that can be down so that a producer with acks=all can still produce to the topic?
- A. 3
- B. 2
- C. 1
Answer: B
Explanation:
Two brokers can go down; with min.insync.replicas=1, the one remaining replica can still receive writes and serve data.
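A sketch of creating such a topic with the AdminClient (the topic name, partition count, and broker address are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateDurableTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // replication factor 3 with min.insync.replicas=1: a producer using
            // acks=all keeps working as long as at least one replica is alive.
            NewTopic topic = new NewTopic("payments", 3, (short) 3) // hypothetical name/partitions
                    .configs(Map.of("min.insync.replicas", "1"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```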
NEW QUESTION 6
You have a ZooKeeper cluster that needs to withstand the loss of 2 servers and still be able to function. What size should your ZooKeeper cluster be?
- A. 4
- B. 5
- C. 2
- D. 3
- E. 6
Answer: B
Explanation:
Your ZooKeeper cluster needs an odd number of servers, and a majority of servers must be up for it to be able to vote. A 2N+1 ZooKeeper ensemble can therefore survive N servers being down, so here N=2 and 2N+1=5.
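Written out as the majority-quorum arithmetic:

```latex
% An ensemble of n servers needs a strict majority to vote, so it tolerates
\[
  f = \left\lfloor \frac{n-1}{2} \right\rfloor \text{ failures,}
  \qquad\text{i.e. the smallest ensemble tolerating } f \text{ failures has } n = 2f + 1.
\]
\[
  f = 2 \;\Rightarrow\; n = 2 \cdot 2 + 1 = 5.
\]
```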
NEW QUESTION 7
You have a Kafka cluster and all the topics have a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk. What will happen if the broker is restarted?
- A. The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
- B. The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
- C. The broker will crash
- D. The broker will start, won't have any data, and if the broker becomes leader, we have a data loss
Answer: B
Explanation:
Kafka's replication mechanism makes it resilient to scenarios where a broker loses its data on disk: the broker can recover by replicating the data from the other brokers. This makes Kafka amazing!
NEW QUESTION 8
Once sent to a topic, a message can be modified
- A. No
- B. Yes
Answer: A
Explanation:
Kafka logs are append-only and the data is immutable
NEW QUESTION 9
To read data from a topic, the following configuration is needed for the consumers
- A. all brokers of the cluster, and the topic name
- B. any broker to connect to, and the topic name
- C. the list of brokers that have the data, the topic name and the partitions list
- D. any broker, and the list of topic partitions
Answer: B
Explanation:
All brokers can respond to Metadata request, so a client can connect to any broker in the cluster.
NEW QUESTION 10
Which of the following Kafka Streams operators are stateful? (select all that apply)
- A. flatMap
- B. reduce
- C. joining
- D. count
- E. peek
- F. aggregate
Answer: BCDF
Explanation:
See https://kafka.apache.org/20/documentation/streams/developer-guide/dsl-api.html#stateful-transformations
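A sketch of a Streams topology mixing a stateless operator (peek) with a stateful one (count, which is backed by a state store); the application id and topic names are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class CountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "count-demo");        // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("events");           // hypothetical input topic
        input.peek((k, v) -> System.out.println(v))                         // peek: stateless
             .groupByKey()
             .count()                                                       // count: stateful (state store)
             .toStream()
             .to("event-counts", Produced.with(Serdes.String(), Serdes.Long())); // hypothetical output topic

        new KafkaStreams(builder.build(), props).start(); // a real app would also register a shutdown hook
    }
}
```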
NEW QUESTION 11
To import data from external databases, I should use
- A. Confluent REST Proxy
- B. Kafka Connect Sink
- C. Kafka Streams
- D. Kafka Connect Source
Answer: D
Explanation:
Kafka Connect Sink is used to export data from Kafka to external databases, and Kafka Connect Source is used to import data from external databases into Kafka.
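A sketch of a source connector configuration, assuming the Confluent JDBC source connector is installed; the connector name, connection details, and table names are hypothetical placeholders, and the property names follow the Confluent JDBC connector's documentation:

```properties
# jdbc-source.properties -- illustrative sketch only
name=jdbc-source-demo
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=2
connection.url=jdbc:postgresql://db-host:5432/shop
connection.user=connect
connection.password=secret
table.whitelist=customers,orders
mode=incrementing
incrementing.column.name=id
topic.prefix=db-
```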
NEW QUESTION 12
Which client protocols are supported by the Schema Registry? (select two)
- A. HTTP
- B. HTTPS
- C. JDBC
- D. Websocket
- E. SASL
Answer: AB
Explanation:
Clients can interact with the Schema Registry using the HTTP or HTTPS interface.
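A sketch of talking to the Schema Registry's REST interface with Java's built-in HTTP client; GET /subjects lists the registered subjects (the host and port are hypothetical):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListSubjects {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://schema-registry:8081/subjects")) // hypothetical host:port
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. ["orders-value", ...]
    }
}
```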
NEW QUESTION 13
To allow consumers in a group to resume at the previously committed offset, I need to set the proper value for...
- A. value.deserializer
- B. auto.offset.reset
- C. group.id
- D. enable.auto.commit
Answer: C
Explanation:
Setting a group.id that is consistent across restarts allows consumers that are part of the same group to resume reading from where offsets were last committed for that group.
NEW QUESTION 14
You are using a JDBC source connector to copy data from 2 tables to two Kafka topics. There is one connector created with tasks.max equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- A. 6
- B. 1
- C. 2
- D. 3
Answer: C
Explanation:
We have two tables, so the maximum number of tasks is 2: the connector launches one task per table, regardless of the number of workers.
NEW QUESTION 15
Your topic is log compacted and you are sending a message with the key K and value null. What will happen?
- A. The broker will delete all messages with the key K upon cleanup
- B. The producer will throw a Runtime exception
- C. The broker will delete the message with the key K and null value only upon cleanup
- D. The message will get ignored by the Kafka broker
Answer: A
Explanation:
Sending a message with a null value is called a tombstone in Kafka, and it ensures the log-compacted topic does not contain any messages with the key K after compaction.
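A sketch of producing a tombstone (the broker address and topic name are hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TombstoneProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A null value is a tombstone: after compaction, no record with key "K"
            // remains in the log-compacted topic ("user-profiles" is hypothetical).
            producer.send(new ProducerRecord<>("user-profiles", "K", null));
        }
    }
}
```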
NEW QUESTION 16
What are the requirements for a Kafka broker to connect to a ZooKeeper ensemble? (select two)
- A. Unique value for each broker's zookeeper.connect parameter
- B. Unique values for each broker's broker.id parameter
- C. All the brokers must share the same broker.id
- D. All the brokers must share the same zookeeper.connect parameter
Answer: BD
Explanation:
Each broker must have a unique broker.id and must connect to the same ZooKeeper ensemble and root zNode.
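A sketch of the relevant server.properties lines for one broker (the hostnames and chroot path are hypothetical):

```properties
# server.properties for broker 1 -- illustrative sketch only
broker.id=1                                          # must be unique per broker
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka   # identical on every broker (same ensemble, same chroot)
```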
NEW QUESTION 17
To prevent network-induced duplicates when producing to Kafka, I should use
- A. max.in.flight.requests.per.connection=1
- B. enable.idempotence=true
- C. retries=200000
- D. batch.size=1
Answer: B
Explanation:
Producer idempotence helps prevent network-introduced duplicates. More details here: https://cwiki.apache.org/confluence/display/KAFKA/Idempotent+Producer
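A sketch of enabling idempotence on the producer (the broker address and topic name are hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdempotentProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("enable.idempotence", "true");          // broker de-duplicates retried sends
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Even if this send is retried after a network timeout, the broker
            // writes the record exactly once.
            producer.send(new ProducerRecord<>("orders", "key", "value")); // "orders" is hypothetical
        }
    }
}
```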
NEW QUESTION 18
By default, which replica will be elected as a partition leader? (select two)
- A. Preferred leader broker if it is in-sync and auto.leader.rebalance.enable=true
- B. Any of the replicas
- C. Preferred leader broker if it is in-sync and auto.leader.rebalance.enable=false
- D. An in-sync replica
Answer: AD
Explanation:
The preferred leader is the broker that was the leader when the topic was created. It is preferred because when partitions are first created, the leaders are balanced between brokers, and auto.leader.rebalance.enable is true by default. Otherwise, any of the in-sync replicas (ISR) will be elected leader, as long as unclean.leader.election.enable=false (the default).
NEW QUESTION 19
I am producing Avro data on my Kafka cluster that is integrated with the Confluent Schema Registry. After a schema change that is incompatible, I know my data will be rejected. Which component will reject the data?
- A. The Confluent Schema Registry
- B. The Kafka Broker
- C. The Kafka Producer itself
- D. Zookeeper
Answer: A
Explanation:
The Confluent Schema Registry is your safeguard against incompatible schema changes and is the component that ensures no breaking schema evolution is possible. Kafka brokers do not look at your payload or its schema, and therefore will not reject the data.
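A sketch of the producer-side configuration that brings the Schema Registry into play, assuming the Confluent kafka-avro-serializer dependency is on the classpath (the broker and registry addresses are hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class AvroProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                 // hypothetical broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://schema-registry:8081");  // hypothetical registry URL

        // On send(), the Avro serializer registers/validates the record's schema
        // against the registry; an incompatible schema is rejected by the registry,
        // and the producer's send fails with a serialization exception. The broker
        // itself never inspects the payload.
        KafkaProducer<String, Object> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
```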
NEW QUESTION 20
......
Recommend!! Get the full CCDAK dumps in VCE and PDF from DumpSolutions.com. Welcome to download: https://www.dumpsolutions.com/CCDAK-dumps/ (New 150 Q&As Version)