The Avant-garde Guide To DBS-C01 Exam Price
Master the DBS-C01 AWS Certified Database - Specialty content and be ready for exam day success quickly with this Certleader DBS-C01 exam fees. We guarantee it! We make it a reality and give you real DBS-C01 questions in our Amazon-Web-Services DBS-C01 braindumps. Latest 100% VALID Amazon-Web-Services DBS-C01 Exam Questions Dumps at below page. You can use our Amazon-Web-Services DBS-C01 braindumps and pass your exam.
Online DBS-C01 free questions and answers (new version):
NEW QUESTION 1
A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.
How should the Database Specialist satisfy this new requirement?
- A. Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
- B. Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.
- C. Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.
- D. Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
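The snapshot-copy path in option A maps to three RDS API calls. The sketch below assembles them as (operation, parameters) pairs using the boto3 naming; the instance identifier and KMS key alias are placeholders, not values from the question.

```python
# Sketch of the encrypt-via-snapshot-copy flow, assuming boto3 RDS call names.
# "mydb" and "alias/my-rds-key" are hypothetical placeholders.

def build_encrypt_via_snapshot_steps(instance_id, kms_key_id):
    """Return the RDS calls (as name/params pairs) that encrypt an existing
    unencrypted DB instance without reloading its data."""
    snap = f"{instance_id}-unencrypted-snap"
    enc_snap = f"{instance_id}-encrypted-snap"
    return [
        # 1. Snapshot the unencrypted instance.
        ("create_db_snapshot", {
            "DBInstanceIdentifier": instance_id,
            "DBSnapshotIdentifier": snap,
        }),
        # 2. Copying with a KmsKeyId produces an encrypted snapshot copy.
        ("copy_db_snapshot", {
            "SourceDBSnapshotIdentifier": snap,
            "TargetDBSnapshotIdentifier": enc_snap,
            "KmsKeyId": kms_key_id,
        }),
        # 3. Restoring the encrypted copy yields an encrypted DB instance.
        ("restore_db_instance_from_db_snapshot", {
            "DBInstanceIdentifier": f"{instance_id}-encrypted",
            "DBSnapshotIdentifier": enc_snap,
        }),
    ]

steps = build_encrypt_via_snapshot_steps("mydb", "alias/my-rds-key")
```

A cutover step (repointing the application and retiring the old instance) would follow, but is outside the scope of the question.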
NEW QUESTION 2
A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:
ERROR: could not write block 7507718 of temporary file: No space left on device
What is the cause of this error and what should the Database Specialist do to resolve this issue?
- A. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
- B. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
- C. The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
- D. The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.
NEW QUESTION 3
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?
- A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
- B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture(CDC)
- C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
- D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change datacapture (CDC)
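Option A corresponds to a DMS task whose migration type combines a full load with ongoing replication. A minimal sketch of the CreateReplicationTask parameters, with all ARNs and the task name as placeholders:

```python
# Sketch of AWS DMS task parameters for a full load plus ongoing CDC.
# All identifiers and ARNs below are hypothetical placeholders.

def dms_task_params(source_arn, target_arn, instance_arn):
    return {
        "ReplicationTaskIdentifier": "sqlserver-to-postgres",  # placeholder
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        # "full-load-and-cdc" loads the existing data, then keeps replicating
        # ongoing changes so cutover downtime stays minimal.
        "MigrationType": "full-load-and-cdc",
        # Include every table in every schema (JSON table-mapping rules).
        "TableMappings": '{"rules": [{"rule-type": "selection", '
                         '"rule-id": "1", "rule-name": "1", '
                         '"object-locator": {"schema-name": "%", "table-name": "%"}, '
                         '"rule-action": "include"}]}',
    }

params = dms_task_params("arn:src", "arn:tgt", "arn:inst")
```

The same dict could be passed to boto3's `create_replication_task`; here it only illustrates the shape of the request.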
NEW QUESTION 4
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.
The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?
- A. Stop the source instances before stopping their read replicas
- B. Delete each read replica before stopping its corresponding source instance
- C. Stop the read replicas before stopping their source instances
- D. Use the AWS CLI to stop each read replica and source instance at the same time
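Whichever option is chosen, the script first has to tell read replicas apart from their source instances. In a DescribeDBInstances response, a replica carries a ReadReplicaSourceDBInstanceIdentifier field; a sketch of that grouping step, with hypothetical instance names:

```python
# Separate read replicas from source instances, based on the
# ReadReplicaSourceDBInstanceIdentifier field that RDS returns
# for replicas. Instance identifiers are placeholders.

def split_replicas(db_instances):
    """Return (replicas, sources) identifier lists from
    DescribeDBInstances response items."""
    replicas, sources = [], []
    for db in db_instances:
        if db.get("ReadReplicaSourceDBInstanceIdentifier"):
            replicas.append(db["DBInstanceIdentifier"])
        else:
            sources.append(db["DBInstanceIdentifier"])
    return replicas, sources

replicas, sources = split_replicas([
    {"DBInstanceIdentifier": "db-a"},
    {"DBInstanceIdentifier": "db-a-replica",
     "ReadReplicaSourceDBInstanceIdentifier": "db-a"},
])
```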
NEW QUESTION 5
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?
- A. The restored DB instance does not have Enhanced Monitoring enabled
- B. The production DB instance is using a custom parameter group
- C. The restored DB instance is using the default security group
- D. The production DB instance is using a custom option group
NEW QUESTION 6
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a
database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a “could not connect to server: Connection times out” error message to Amazon CloudWatch Logs.
What is the cause of this error?
- A. The user name and password the application is using are incorrect.
- B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
- C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
- D. The user name and password are correct, but the user is not authorized to use the DB instance.
NEW QUESTION 7
A company is using 5 TB Amazon RDS DB instances and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must provide Auditors with data within 24 hours.
Which solution will meet these requirements and is the MOST operationally efficient?
- A. Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot. Move the snapshot to the company’s Amazon S3 bucket.
- B. Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.
- C. Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.
- D. Create an AWS Lambda function to run on the first day of every month to create an automated RDS snapshot.
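A scheduled Lambda taking a monthly manual snapshot can be sketched as below. The snapshot-naming helper is separated out so it can be checked without AWS access; the instance identifier is a placeholder, and the S3 step from option A is indicated only as a comment.

```python
# Sketch of a monthly-snapshot Lambda. "prod-db" is a placeholder name.
from datetime import date

def monthly_snapshot_id(instance_id, today):
    """Build a deterministic per-month snapshot identifier."""
    return f"{instance_id}-monthly-{today:%Y-%m}"

def handler(event, context):
    """Lambda entry point (sketch), assumed to run on a monthly
    EventBridge schedule."""
    import boto3  # available in the Lambda runtime
    rds = boto3.client("rds")
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-db",
        DBSnapshotIdentifier=monthly_snapshot_id("prod-db", date.today()),
    )
    # Manual snapshots are retained until explicitly deleted, which covers
    # the 5-year requirement; option A additionally exports to S3.
```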
NEW QUESTION 8
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?
- A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
- B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
- C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
- D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
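The publish-and-retain approach in option B comes down to two API calls per database. The sketch below only assembles their parameters; the instance identifier and log type are placeholders, and the log group name follows the `/aws/rds/instance/<id>/<log>` convention RDS uses.

```python
# Parameters for (1) publishing an RDS engine log to CloudWatch Logs and
# (2) setting a 90-day retention policy on the resulting log group.
# "db1" and the "error" log type are placeholders.

def publish_and_retain_params(instance_id, engine_log="error", days=90):
    modify = {  # for rds.modify_db_instance
        "DBInstanceIdentifier": instance_id,
        "CloudwatchLogsExportConfiguration": {"EnableLogTypes": [engine_log]},
    }
    retention = {  # for logs.put_retention_policy
        "logGroupName": f"/aws/rds/instance/{instance_id}/{engine_log}",
        "retentionInDays": days,  # 90 is an accepted retention value
    }
    return modify, retention

modify, retention = publish_and_retain_params("db1")
```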
NEW QUESTION 9
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?
- A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
- B. The client-side application is caching the DNS data and its TTL is set too high
- C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
- D. There were no active Aurora Replicas in the Aurora DB cluster
NEW QUESTION 10
A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.
How should the Database Specialist apply the parameter group change for the DB instance?
- A. Select the option to apply the change immediately
- B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
- C. Apply the change manually by rebooting the DB instance during the approved maintenance window
- D. Reboot the secondary Multi-AZ DB instance
NEW QUESTION 11
An IT consulting company wants to reduce costs when operating its development environment databases. The company’s workflow creates multiple Amazon Aurora MySQL DB clusters for each development group. The Aurora DB clusters are only used for 8 hours a day. The DB clusters can then be deleted at the end of the development cycle, which lasts 2 weeks.
Which of the following provides the MOST cost-effective solution?
- A. Use AWS CloudFormation templates. Deploy a stack with the DB cluster for each development group. Delete the stack at the end of the development cycle.
- B. Use the Aurora DB cloning feature. Deploy a single development and test Aurora DB instance, and create clone instances for the development groups. Delete the clones at the end of the development cycle.
- C. Use Aurora Replicas. From the master automatic pause compute capacity option, create replicas for each development group, and promote each replica to master. Delete the replicas at the end of the development cycle.
- D. Use Aurora Serverless. Restore the current Aurora snapshot and deploy to a serverless cluster for each development group. Enable the option to pause the compute capacity on the cluster and set an appropriate timeout.
NEW QUESTION 12
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?
- A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance to the target DB cluster using AWS DMS.
- B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
- C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
- D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.
NEW QUESTION 13
A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company’s Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data with the largest replication instances.
How should the Database Specialist optimize the database migration using AWS DMS?
- A. Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together
- B. Create two tasks: task 1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB, and task 2 without LOBs
- C. Create two tasks: task 1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB, and task 2 without LOBs
- D. Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
NEW QUESTION 14
A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.
Which AWS services should the Database Specialist consider? (Choose two.)
- A. Amazon DynamoDB
- B. Amazon Redshift
- C. Amazon Neptune
- D. Amazon Elasticsearch Service
- E. Amazon ElastiCache
NEW QUESTION 15
A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?
- A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
- B. Use reader endpoints for both the read-only workload applications.
- C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
- D. Use custom endpoints for the two read-only applications.
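Custom endpoints (option D) map to Aurora's CreateDBClusterEndpoint API: one custom reader endpoint per application, each restricted to its dedicated replica through a static member list. The cluster, endpoint, and replica identifiers below are placeholders.

```python
# Sketch of CreateDBClusterEndpoint parameters for two dedicated
# read-only endpoints. All identifiers are hypothetical.

def custom_endpoint_params(cluster_id, endpoint_id, members):
    return {
        "DBClusterIdentifier": cluster_id,
        "DBClusterEndpointIdentifier": endpoint_id,
        "EndpointType": "READER",
        # Only the replicas listed here serve traffic on this endpoint.
        "StaticMembers": members,
    }

app1 = custom_endpoint_params("mycluster", "app1-endpoint", ["replica-1"])
app2 = custom_endpoint_params("mycluster", "app2-endpoint", ["replica-2"])
```

If a listed replica fails, Aurora can route the custom endpoint's traffic to remaining cluster members, which is what gives this option its high-availability edge over raw instance endpoints.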
NEW QUESTION 16
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company’s data center. The company’s Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?
- A. Restart the DB cluster to apply the SSL change.
- B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
- C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.
- D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.
NEW QUESTION 17
A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary instance master and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.
What should the company do to eliminate this application performance issue?
- A. Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.
- B. Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.
- C. Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.
- D. Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.
NEW QUESTION 18
A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)
- A. Check that Amazon S3 has an IAM role granting read access to Neptune
- B. Check that an Amazon S3 VPC endpoint exists
- C. Check that a Neptune VPC endpoint exists
- D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
- E. Check that Neptune has an IAM role granting read access to Amazon S3
NEW QUESTION 19
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
Update scores in real time whenever a player is playing the game.
Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?
- A. Create a global secondary index with game_id as the partition key
- B. Create a global secondary index with user_id as the partition key
- C. Create a composite primary key with game_id as the partition key and user_id as the sort key
- D. Create a composite primary key with user_id as the partition key and game_id as the sort key
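A composite primary key of user_id (partition) and game_id (sort), as in option D, can be expressed directly as DynamoDB CreateTable parameters; the table name below is a placeholder.

```python
# DynamoDB CreateTable parameters for the composite key in option D.
# "GameScores" is a placeholder table name.

table_params = {
    "TableName": "GameScores",
    "KeySchema": [
        # user_id as partition key spreads writes across players...
        {"AttributeName": "user_id", "KeyType": "HASH"},
        # ...and game_id as sort key lets a single Query fetch one
        # player's score for a specific game session.
        {"AttributeName": "game_id", "KeyType": "RANGE"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

With this schema, a score update is a single UpdateItem on (user_id, game_id), and retrieving a session's score is a targeted GetItem rather than a scan.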
NEW QUESTION 20
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.
Only certain on-premises corporate network IPs should connect to the DB instance.
Connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements?
- A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
- B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
- C. Move the DB instance to a private subnet using AWS DMS.
- D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
- E. Disable the publicly accessible setting.
- F. Connect to the DB instance using private IPs and a VPN.
NEW QUESTION 21
A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.
Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?
- A. Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret’s password value with a randomly generated string set by the GenerateSecretString property.
- B. Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.
- C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecureStringTemplate template. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
- D. Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add a SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.
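The secret-then-database-then-attachment pattern these options describe can be sketched as a skeletal CloudFormation template, written here as a Python dict for readability. Logical IDs and the user name are placeholders, and the dynamic-reference strings are simplified (a real template would build them with Sub or Ref against the secret's ARN).

```python
# Skeleton of a CloudFormation template wiring a generated secret to an
# RDS database and a SecretTargetAttachment. Logical IDs, the username,
# and the simplified resolve strings are illustrative assumptions.

template = {
    "Resources": {
        "DBSecret": {
            "Type": "AWS::SecretsManager::Secret",
            "Properties": {
                "GenerateSecretString": {
                    "SecretStringTemplate": '{"username": "admin"}',
                    "GenerateStringKey": "password",
                    "ExcludeCharacters": '"@/\\',
                },
            },
        },
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                # Dynamic references resolve at deploy time, so the
                # password never appears in the template or stack logs.
                "MasterUsername":
                    "{{resolve:secretsmanager:DBSecret:SecretString:username}}",
                "MasterUserPassword":
                    "{{resolve:secretsmanager:DBSecret:SecretString:password}}",
            },
        },
        "SecretAttachment": {
            "Type": "AWS::SecretsManager::SecretTargetAttachment",
            "Properties": {
                # Links the secret to the database so automated rotation
                # knows which instance the credentials belong to.
                "SecretId": {"Ref": "DBSecret"},
                "TargetId": {"Ref": "Database"},
                "TargetType": "AWS::RDS::DBInstance",
            },
        },
    },
}
```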