Top Tips for the Most Up-to-date DAS-C01 Test Questions

2022 Amazon-Web-Services Official Newly Released DAS-C01
https://www.certleader.com/DAS-C01-dumps.html


Want to know the Actualtests DAS-C01 exam practice test features? Want to learn more about the Amazon-Web-Services AWS Certified Data Analytics - Specialty certification experience? Study practical Amazon-Web-Services DAS-C01 answers to the most up-to-date DAS-C01 questions at Actualtests. Get success with an absolute guarantee to pass the Amazon-Web-Services DAS-C01 (AWS Certified Data Analytics - Specialty) test on your first attempt.

Online Amazon-Web-Services DAS-C01 free dumps demo Below:

NEW QUESTION 1
An insurance company has raw data in JSON format that is sent without a predefined schedule through an Amazon Kinesis Data Firehose delivery stream to an Amazon S3 bucket. An AWS Glue crawler is scheduled to run every 8 hours to update the schema in the data catalog of the tables stored in the S3 bucket. Data analysts analyze the data using Apache Spark SQL on Amazon EMR set up with AWS Glue Data Catalog as the metastore. Data analysts say that, occasionally, the data they receive is stale. A data engineer needs to provide access to the most up-to-date data.
Which solution meets these requirements?

  • A. Create an external schema based on the AWS Glue Data Catalog on the existing Amazon Redshift cluster to query new data in Amazon S3 with Amazon Redshift Spectrum.
  • B. Use Amazon CloudWatch Events with the rate (1 hour) expression to execute the AWS Glue crawler every hour.
  • C. Using the AWS CLI, modify the execution schedule of the AWS Glue crawler from 8 hours to 1 minute.
  • D. Run the AWS Glue crawler from an AWS Lambda function triggered by an S3:ObjectCreated:* event notification on the S3 bucket.

Answer: D

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html "you can use a wildcard (for example, s3:ObjectCreated:*) to request notification when an object is created regardless of the API used" "AWS Lambda can run custom code in response to Amazon S3 bucket events. You upload your custom code to AWS Lambda and create what is called a Lambda function. When Amazon S3 detects an event of a specific type (for example, an object created event), it can publish the event to AWS Lambda and invoke your function in Lambda. In response, AWS Lambda runs your function."
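
A minimal sketch of how answer D could be wired up: a Lambda function, subscribed to the bucket's s3:ObjectCreated:* notifications, that starts the Glue crawler so the Data Catalog is refreshed as soon as new objects land. The crawler name is hypothetical.

```python
import boto3

glue = boto3.client("glue")

CRAWLER_NAME = "raw-json-crawler"  # hypothetical crawler name


def lambda_handler(event, context):
    # Each record corresponds to one object-created notification from S3.
    for record in event.get("Records", []):
        print("New object:", record["s3"]["object"]["key"])
    try:
        glue.start_crawler(Name=CRAWLER_NAME)
    except glue.exceptions.CrawlerRunningException:
        # The crawler is already cataloging a previous batch; this run can be skipped.
        pass
    return {"status": "ok"}
```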

NEW QUESTION 2
An online retailer needs to deploy a product sales reporting solution. The source data is exported from an external online transaction processing (OLTP) system for reporting. Roll-up data is calculated each day for the previous day’s activities. The reporting system has the following requirements:
Have the daily roll-up data readily available for 1 year.
After 1 year, archive the daily roll-up data for occasional but immediate access.
The source data exports stored in the reporting system must be retained for 5 years. Query access will be needed only for re-evaluation, which may occur within the first 90 days.
Which combination of actions will meet these requirements while keeping storage costs to a minimum? (Choose two.)

  • A. Store the source data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
  • B. Store the source data initially in the Amazon S3 Glacier storage class. Apply a lifecycle configuration that changes the storage class from Amazon S3 Glacier to Amazon S3 Glacier Deep Archive 90 days after creation, and then deletes the data 5 years after creation.
  • C. Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier Deep Archive 1 year after data creation.
  • D. Store the daily roll-up data initially in the Amazon S3 Standard storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) 1 year after data creation.
  • E. Store the daily roll-up data initially in the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Apply a lifecycle configuration that changes the storage class to Amazon S3 Glacier 1 year after data creation.

Answer: AD
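
As a hedged illustration of the lifecycle rules behind answers A and D, here is a minimal boto3 sketch; the bucket name and prefixes are hypothetical, and the initial storage classes (S3 Standard-IA for the source exports, S3 Standard for the roll-ups) are set when the objects are uploaded, not by the lifecycle rules.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="sales-reporting-data",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                # Answer A: source exports move to Glacier Deep Archive after the
                # 90-day re-evaluation window and are deleted after 5 years.
                "ID": "source-exports",
                "Status": "Enabled",
                "Filter": {"Prefix": "source/"},
                "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
                "Expiration": {"Days": 1825},
            },
            {
                # Answer D: daily roll-ups move to S3 Standard-IA one year after creation.
                "ID": "daily-rollups",
                "Status": "Enabled",
                "Filter": {"Prefix": "rollups/"},
                "Transitions": [{"Days": 365, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)
```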

NEW QUESTION 3
An online gaming company is using an Amazon Kinesis Data Analytics SQL application with a Kinesis data stream as its source. The source sends three non-null fields to the application: player_id, score, and us_5_digit_zip_code.
A data analyst has a .csv mapping file that maps a small number of us_5_digit_zip_code values to a territory code. The data analyst needs to include the territory code, if one exists, as an additional output of the Kinesis Data Analytics application.
How should the data analyst meet this requirement while minimizing costs?

  • A. Store the contents of the mapping file in an Amazon DynamoDB table. Preprocess the records as they arrive in the Kinesis Data Analytics application with an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Change the SQL query in the application to include the new field in the SELECT statement.
  • B. Store the mapping file in an Amazon S3 bucket and configure the reference data column headers for the .csv file in the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the file’s S3 Amazon Resource Name (ARN), and add the territory code field to the SELECT columns.
  • C. Store the mapping file in an Amazon S3 bucket and configure it as a reference data source for the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the reference table and add the territory code field to the SELECT columns.
  • D. Store the contents of the mapping file in an Amazon DynamoDB table. Change the Kinesis Data Analytics application to send its output to an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Forward the record from the Lambda function to the original application destination.

Answer: C
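
A hedged boto3 sketch of answer C: registering the .csv mapping file in S3 as a reference data source for the SQL application. The application name, ARNs, and in-application table name are hypothetical; after this call, the application SQL can join its input stream to ZIP_TO_TERRITORY.

```python
import boto3

kda = boto3.client("kinesisanalytics")

kda.add_application_reference_data_source(
    ApplicationName="gaming-scores-app",            # hypothetical application name
    CurrentApplicationVersionId=1,
    ReferenceDataSource={
        "TableName": "ZIP_TO_TERRITORY",            # in-application reference table
        "S3ReferenceDataSource": {
            "BucketARN": "arn:aws:s3:::mapping-files-bucket",
            "FileKey": "zip_to_territory.csv",
            "ReferenceRoleARN": "arn:aws:iam::123456789012:role/kda-reference-role",
        },
        "ReferenceSchema": {
            "RecordFormat": {
                "RecordFormatType": "CSV",
                "MappingParameters": {
                    "CSVMappingParameters": {
                        "RecordRowDelimiter": "\n",
                        "RecordColumnDelimiter": ",",
                    }
                },
            },
            "RecordColumns": [
                {"Name": "us_5_digit_zip_code", "SqlType": "VARCHAR(5)"},
                {"Name": "territory_code", "SqlType": "VARCHAR(8)"},
            ],
        },
    },
)
```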

NEW QUESTION 4
A media content company has a streaming playback application. The company wants to collect and analyze the data to provide near-real-time feedback on playback issues. The company needs to consume this data and return results within 30 seconds according to the service-level agreement (SLA). The company needs the consumer to identify playback issues, such as quality during a specified timeframe. The data will be emitted as JSON and may change schemas over time.
Which solution will allow the company to collect data for processing while meeting these requirements?

  • A. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure an S3 event to trigger an AWS Lambda function to process the data. The Lambda function will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon S3.
  • B. Send the data to Amazon Managed Streaming for Kafka and configure an Amazon Kinesis Analytics for Java application as the consumer. The application will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon DynamoDB.
  • C. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure Amazon S3 to trigger an event for AWS Lambda to process. The Lambda function will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon DynamoDB.
  • D. Send the data to Amazon Kinesis Data Streams and configure an Amazon Kinesis Analytics for Java application as the consumer. The application will consume the data and process it to identify potential playback issues. Persist the raw data to Amazon S3.

Answer: D

Explanation:
https://aws.amazon.com/blogs/aws/new-amazon-kinesis-data-analytics-for-java/
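
For context on the ingestion side of answer D, a minimal producer sketch that writes one JSON playback event to a Kinesis data stream; the stream name and event fields are hypothetical. Partitioning on a session or player identifier keeps related events on the same shard.

```python
import json

import boto3

kinesis = boto3.client("kinesis")

event = {"session_id": "abc-123", "bitrate_kbps": 1200, "buffering_ms": 450}

kinesis.put_record(
    StreamName="playback-events",           # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["session_id"],       # keeps a session's events on one shard
)
```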

NEW QUESTION 5
A human resources company maintains a 10-node Amazon Redshift cluster to run analytics queries on the company’s data. The Amazon Redshift cluster contains a product table and a transactions table, and both tables have a product_sku column. The tables are over 100 GB in size. The majority of queries run on both tables.
Which distribution style should the company use for the two tables to achieve optimal query performance?

  • A. An EVEN distribution style for both tables
  • B. A KEY distribution style for both tables
  • C. An ALL distribution style for the product table and an EVEN distribution style for the transactions table
  • D. An EVEN distribution style for the product table and a KEY distribution style for the transactions table

Answer: B
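
To make answer B concrete, a hedged sketch of the DDL issued through the Redshift Data API; the cluster, database, and non-key columns are hypothetical. Distributing both tables on product_sku co-locates matching rows on the same node slices, so the frequent joins avoid data redistribution.

```python
import boto3

rsd = boto3.client("redshift-data")

ddl = """
CREATE TABLE transactions (
    transaction_id BIGINT,
    product_sku    VARCHAR(32),
    amount         DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (product_sku)
SORTKEY (product_sku);
"""

rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="admin",
    Sql=ddl,
)
```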

NEW QUESTION 6
A smart home automation company must efficiently ingest and process messages from various connected devices and sensors. The majority of these messages are comprised of a large number of small files. These messages are ingested using Amazon Kinesis Data Streams and sent to Amazon S3 using a Kinesis data stream consumer application. The Amazon S3 message data is then passed through a processing pipeline built on Amazon EMR running scheduled PySpark jobs.
The data platform team manages data processing and is concerned about the efficiency and cost of downstream data processing. They want to continue to use PySpark.
Which solution improves the efficiency of the data processing jobs and is well architected?

  • A. Send the sensor and devices data directly to a Kinesis Data Firehose delivery stream to send the data to Amazon S3 with Apache Parquet record format conversion enabled. Use Amazon EMR running PySpark to process the data in Amazon S3.
  • B. Set up an AWS Lambda function with a Python runtime environment. Process individual Kinesis data stream messages from the connected devices and sensors using Lambda.
  • C. Launch an Amazon Redshift cluster. Copy the collected data from Amazon S3 to Amazon Redshift and move the data processing jobs from Amazon EMR to Amazon Redshift.
  • D. Set up AWS Glue Python jobs to merge the small data files in Amazon S3 into larger files and transform them to Apache Parquet format. Migrate the downstream PySpark jobs from Amazon EMR to AWS Glue.

Answer: D

Explanation:
https://aws.amazon.com/it/about-aws/whats-new/2020/04/aws-glue-now-supports-serverless-streaming-etl/
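
A minimal AWS Glue PySpark sketch of the compaction-and-conversion job described in answer D; the S3 paths and the coalesce factor are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the many small JSON objects written by the stream consumer.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://sensor-data/raw/"]},  # hypothetical path
    format="json",
).toDF()

# Reduce the number of output files and write columnar Parquet for downstream PySpark jobs.
raw.coalesce(16).write.mode("overwrite").parquet("s3://sensor-data/compacted/")

job.commit()
```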

NEW QUESTION 7
A company wants to enrich application logs in near-real-time and use the enriched dataset for further analysis. The application is running on Amazon EC2 instances across multiple Availability Zones and storing its logs using Amazon CloudWatch Logs. The enrichment source is stored in an Amazon DynamoDB table.
Which solution meets the requirements for the event collection and enrichment?

  • A. Use a CloudWatch Logs subscription to send the data to Amazon Kinesis Data Firehose. Use AWS Lambda to transform the data in the Kinesis Data Firehose delivery stream and enrich it with the data in the DynamoDB table. Configure Amazon S3 as the Kinesis Data Firehose delivery destination.
  • B. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI. Use AWS Glue crawlers to catalog the logs. Set up an AWS Glue connection for the DynamoDB table and set up an AWS Glue ETL job to enrich the data. Store the enriched data in Amazon S3.
  • C. Configure the application to write the logs locally and use Amazon Kinesis Agent to send the data to Amazon Kinesis Data Streams. Configure a Kinesis Data Analytics SQL application with the Kinesis data stream as the source. Join the SQL application input stream with DynamoDB records, and then store the enriched output stream in Amazon S3 using Amazon Kinesis Data Firehose.
  • D. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI. Use Apache Spark SQL on Amazon EMR to read the logs from Amazon S3 and enrich the records with the data from DynamoDB. Store the enriched data in Amazon S3.

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample
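
A hedged sketch of the Firehose transformation Lambda from answer A. It assumes the CloudWatch Logs messages are JSON documents carrying an app_id field and that the DynamoDB enrichment table is keyed on app_id; those assumptions, and the table name, are hypothetical.

```python
import base64
import gzip
import json

import boto3

dynamodb = boto3.resource("dynamodb")
enrichment_table = dynamodb.Table("log-enrichment")  # hypothetical table name


def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        # CloudWatch Logs delivers gzip-compressed payloads to Firehose.
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))

        enriched = []
        for log_event in payload.get("logEvents", []):
            message = json.loads(log_event["message"])  # assumes JSON log lines
            app_id = message.get("app_id")
            if app_id:
                item = enrichment_table.get_item(Key={"app_id": app_id}).get("Item", {})
                message.update(item)  # merge enrichment attributes into the record
            enriched.append(message)

        data = "\n".join(json.dumps(m) for m in enriched) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(data.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```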

NEW QUESTION 8
An online retail company uses Amazon Redshift to store historical sales transactions. The company is required to encrypt data at rest in the clusters to comply with the Payment Card Industry Data Security Standard (PCI DSS). A corporate governance policy mandates management of encryption keys using an on-premises hardware security module (HSM).
Which solution meets these requirements?

  • A. Create and manage encryption keys using AWS CloudHSM Classic. Launch an Amazon Redshift cluster in a VPC with the option to use CloudHSM Classic for key management.
  • B. Create a VPC and establish a VPN connection between the VPC and the on-premises network. Create an HSM connection and client certificate for the on-premises HSM. Launch a cluster in the VPC with the option to use the on-premises HSM to store keys.
  • C. Create an HSM connection and client certificate for the on-premises HSM. Enable HSM encryption on the existing unencrypted cluster by modifying the cluster. Connect to the VPC where the Amazon Redshift cluster resides from the on-premises network using a VPN.
  • D. Create a replica of the on-premises HSM in AWS CloudHSM. Launch a cluster in a VPC with the option to use CloudHSM to store keys.

Answer: B
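
A hedged boto3 sketch of answer B: register a client certificate and an HSM configuration for the on-premises HSM (reachable over the VPN), then launch an encrypted cluster that uses them for key management. All identifiers, addresses, and credentials are placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# Certificate the cluster presents to the on-premises HSM.
redshift.create_hsm_client_certificate(
    HsmClientCertificateIdentifier="onprem-hsm-client-cert"
)

# Connection details for the on-premises HSM reachable over the VPN.
redshift.create_hsm_configuration(
    HsmConfigurationIdentifier="onprem-hsm",
    Description="On-premises HSM reachable over the site-to-site VPN",
    HsmIpAddress="10.0.10.25",
    HsmPartitionName="PARTITION_1",
    HsmPartitionPassword="example-password",
    HsmServerPublicCertificate="-----BEGIN CERTIFICATE-----...",
)

# Encrypted cluster that uses the on-premises HSM for its keys.
redshift.create_cluster(
    ClusterIdentifier="sales-history",
    NodeType="ra3.4xlarge",
    NumberOfNodes=4,
    MasterUsername="admin",
    MasterUserPassword="example-Passw0rd",
    Encrypted=True,
    HsmClientCertificateIdentifier="onprem-hsm-client-cert",
    HsmConfigurationIdentifier="onprem-hsm",
)
```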

NEW QUESTION 9
A real estate company has a mission-critical application using Apache HBase in Amazon EMR. Amazon EMR is configured with a single master node. The company has over 5 TB of data stored on a Hadoop Distributed File System (HDFS). The company wants a cost-effective solution to make its HBase data highly available.
Which architectural pattern meets the company’s requirements?

  • A. Use Spot Instances for core and task nodes and a Reserved Instance for the EMR master node. Configure the EMR cluster with multiple master nodes. Schedule automated snapshots using Amazon EventBridge.
  • B. Store the data on an EMR File System (EMRFS) instead of HDFS. Enable EMRFS consistent view. Create an EMR HBase cluster with multiple master nodes. Point the HBase root directory to an Amazon S3 bucket.
  • C. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Run two separate EMR clusters in two different Availability Zones. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.
  • D. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Create a primary EMR HBase cluster with multiple master nodes. Create a secondary EMR HBase read-replica cluster in a separate Availability Zone. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.

Answer: D
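
For reference, these are the EMR configuration classifications that answer D relies on to point HBase at an S3 root directory (passed as the Configurations parameter when each cluster is created); the bucket name is hypothetical, and the secondary cluster would additionally set hbase.emr.readreplica.enabled to true.

```python
# Configuration list for the primary EMR HBase cluster; pass it as the
# Configurations argument of emr.run_job_flow() or supply it in the console.
HBASE_ON_S3_CONFIGURATIONS = [
    {"Classification": "hbase", "Properties": {"hbase.emr.storageMode": "s3"}},
    {
        "Classification": "hbase-site",
        "Properties": {"hbase.rootdir": "s3://company-hbase-data/"},  # hypothetical bucket
    },
]
```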

NEW QUESTION 10
A company’s marketing team has asked for help in identifying a high-performing long-term storage service for its data based on the following requirements:
The data size is approximately 32 TB uncompressed.
There is a low volume of single-row inserts each day.
There is a high volume of aggregation queries each day.
Multiple complex joins are performed.
The queries typically involve a small subset of the columns in a table.
Which storage service will provide the MOST performant solution?

  • A. Amazon Aurora MySQL
  • B. Amazon Redshift
  • C. Amazon Neptune
  • D. Amazon Elasticsearch

Answer: B

NEW QUESTION 11
A data engineering team within a shared workspace company wants to build a centralized logging system for all weblogs generated by the space reservation system. The company has a fleet of Amazon EC2 instances that process requests for shared space reservations on its website. The data engineering team wants to ingest all weblogs into a service that will provide a near-real-time search engine. The team does not want to manage the maintenance and operation of the logging system.
Which solution allows the data engineering team to efficiently set up the web logging system within AWS?

  • A. Set up the Amazon CloudWatch agent to stream weblogs to CloudWatch Logs and subscribe the Amazon Kinesis data stream to CloudWatch. Choose Amazon Elasticsearch Service as the end destination of the weblogs.
  • B. Set up the Amazon CloudWatch agent to stream weblogs to CloudWatch Logs and subscribe the Amazon Kinesis Data Firehose delivery stream to CloudWatch. Choose Amazon Elasticsearch Service as the end destination of the weblogs.
  • C. Set up the Amazon CloudWatch agent to stream weblogs to CloudWatch Logs and subscribe the Amazon Kinesis data stream to CloudWatch. Configure Splunk as the end destination of the weblogs.
  • D. Set up the Amazon CloudWatch agent to stream weblogs to CloudWatch Logs and subscribe the Amazon Kinesis Firehose delivery stream to CloudWatch. Configure Amazon DynamoDB as the end destination of the weblogs.

Answer: B

Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_ES_Stream.html
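
A minimal sketch of the subscription step in answer B: a CloudWatch Logs subscription filter that forwards every log event to a Kinesis Data Firehose delivery stream whose destination is Amazon Elasticsearch Service. The log group, delivery stream, and IAM role are hypothetical.

```python
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/reservations/weblogs",   # hypothetical log group
    filterName="weblogs-to-firehose",
    filterPattern="",                       # empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/weblogs-to-es",
    roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose",
)
```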

NEW QUESTION 12
A company wants to improve the data load time of a sales data dashboard. Data has been collected as .csv files and stored within an Amazon S3 bucket that is partitioned by date. The data is then loaded to an Amazon Redshift data warehouse for frequent analysis. The data volume is up to 500 GB per day.
Which solution will improve the data loading performance?

  • A. Compress .csv files and use an INSERT statement to ingest data into Amazon Redshift.
  • B. Split large .csv files, then use a COPY command to load data into Amazon Redshift.
  • C. Use Amazon Kinesis Data Firehose to ingest data into Amazon Redshift.
  • D. Load the .csv files in an unsorted key order and vacuum the table in Amazon Redshift.

Answer: B

Explanation:
https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html
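
A hedged sketch of the COPY-based load from answer B, issued through the Redshift Data API. It assumes the split .csv parts are gzip-compressed and listed in a manifest; the cluster, table, bucket, and role names are hypothetical.

```python
import boto3

rsd = boto3.client("redshift-data")

copy_sql = """
COPY sales
FROM 's3://sales-data/2023-06-01/manifest.json'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
MANIFEST
GZIP
CSV;
"""

rsd.execute_statement(
    ClusterIdentifier="sales-dwh",  # hypothetical cluster
    Database="dev",
    DbUser="loader",
    Sql=copy_sql,
)
```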

NEW QUESTION 13
A company receives data from its vendor in JSON format with a timestamp in the file name. The vendor uploads the data to an Amazon S3 bucket, and the data is registered into the company’s data lake for analysis and reporting. The company has configured an S3 Lifecycle policy to archive all files to S3 Glacier after 5 days.
The company wants to ensure that its AWS Glue crawler catalogs data only from S3 Standard storage and ignores the archived files. A data analytics specialist must implement a solution to achieve this goal without changing the current S3 bucket configuration.
Which solution meets these requirements?

  • A. Use the exclude patterns feature of AWS Glue to identify the S3 Glacier files for the crawler to exclude.
  • B. Schedule an automation job that uses AWS Lambda to move files from the original S3 bucket to a new S3 bucket for S3 Glacier storage.
  • C. Use the excludeStorageClasses property in the AWS Glue Data Catalog table to exclude files on S3 Glacier storage.
  • D. Use the include patterns feature of AWS Glue to identify the S3 Standard files for the crawler to include.

Answer: A

NEW QUESTION 14
A data analyst is designing a solution to interactively query datasets with SQL using a JDBC connection. Users will join data stored in Amazon S3 in Apache ORC format with data stored in Amazon Elasticsearch Service (Amazon ES) and Amazon Aurora MySQL.
Which solution will provide the MOST up-to-date results?

  • A. Use AWS Glue jobs to ETL data from Amazon ES and Aurora MySQL to Amazon S3. Query the data with Amazon Athena.
  • B. Use Amazon DMS to stream data from Amazon ES and Aurora MySQL to Amazon Redshift. Query the data with Amazon Redshift.
  • C. Query all the datasets in place with Apache Spark SQL running on an AWS Glue developer endpoint.
  • D. Query all the datasets in place with Apache Presto running on Amazon EMR.

Answer: C

NEW QUESTION 15
A company uses Amazon Redshift as its data warehouse. A new table has columns that contain sensitive data. The data in the table will eventually be referenced by several existing queries that run many times a day.
A data analyst needs to load 100 billion rows of data into the new table. Before doing so, the data analyst must ensure that only members of the auditing group can read the columns containing sensitive data.
How can the data analyst meet these requirements with the lowest maintenance overhead?

  • A. Load all the data into the new table and grant the auditing group permission to read from the table. Load all the data except for the columns containing sensitive data into a second table. Grant the appropriate users read-only permissions to the second table.
  • B. Load all the data into the new table and grant the auditing group permission to read from the table. Use the GRANT SQL command to allow read-only access to a subset of columns to the appropriate users.
  • C. Load all the data into the new table and grant all users read-only permissions to non-sensitive columns. Attach an IAM policy to the auditing group with explicit ALLOW access to the sensitive data columns.
  • D. Load all the data into the new table and grant the auditing group permission to read from the table. Create a view of the new table that contains all the columns, except for those considered sensitive, and grant the appropriate users read-only permissions to the table.

Answer: B

Explanation:
https://aws.amazon.com/blogs/big-data/achieve-finer-grained-data-security-with-column-level-access-control-in
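
A hedged sketch of answer B using the Redshift Data API: the auditing group keeps full read access while other users get column-level SELECT on only the non-sensitive columns. The cluster, table, column, and group names are hypothetical.

```python
import boto3

rsd = boto3.client("redshift-data")

statements = [
    # Full read access for auditors, including the sensitive columns.
    "GRANT SELECT ON employees TO GROUP auditing;",
    # Column-level read access for everyone else.
    "GRANT SELECT (employee_id, department, hire_date) ON employees TO GROUP analysts;",
]

for sql in statements:
    rsd.execute_statement(
        ClusterIdentifier="hr-analytics",  # hypothetical cluster
        Database="dev",
        DbUser="admin",
        Sql=sql,
    )
```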

NEW QUESTION 16
A regional energy company collects voltage data from sensors attached to buildings. To address any known dangerous conditions, the company wants to be alerted when a sequence of two voltage drops is detected within 10 minutes of a voltage spike at the same building. It is important to ensure that all messages are delivered as quickly as possible. The system must be fully managed and highly available. The company also needs a solution that will automatically scale up as it covers additional cities with this monitoring feature. The alerting system is subscribed to an Amazon SNS topic for remediation.
Which solution meets these requirements?

  • A. Create an Amazon Managed Streaming for Kafka cluster to ingest the data, and use an Apache Spark Streaming with Apache Kafka consumer API in an automatically scaled Amazon EMR cluster to process the incoming data. Use the Spark Streaming application to detect the known event sequence and send the SNS message.
  • B. Create a REST-based web service using Amazon API Gateway in front of an AWS Lambda function. Create an Amazon RDS for PostgreSQL database with sufficient Provisioned IOPS (PIOPS). In the Lambda function, store incoming events in the RDS database and query the latest data to detect the known event sequence and send the SNS message.
  • C. Create an Amazon Kinesis Data Firehose delivery stream to capture the incoming sensor data. Use an AWS Lambda transformation function to detect the known event sequence and send the SNS message.
  • D. Create an Amazon Kinesis data stream to capture the incoming sensor data and create another stream for alert messages. Set up AWS Application Auto Scaling on both. Create a Kinesis Data Analytics for Java application to detect the known event sequence, and add a message to the message stream. Configure an AWS Lambda function to poll the message stream and publish to the SNS topic.

Answer: D

NEW QUESTION 17
A company has collected more than 100 TB of log files in the last 24 months. The files are stored as raw text in a dedicated Amazon S3 bucket. Each object has a key of the form year-month-day_log_HHmmss.txt where HHmmss represents the time the log file was initially created. A table was created in Amazon Athena that points to the S3 bucket. One-time queries are run against a subset of columns in the table several times an hour.
A data analyst must make changes to reduce the cost of running these queries. Management wants a solution with minimal maintenance overhead.
Which combination of steps should the data analyst take to meet these requirements? (Choose three.)

  • A. Convert the log files to Apache Avro format.
  • B. Add a key prefix of the form date=year-month-day/ to the S3 objects to partition the data.
  • C. Convert the log files to Apache Parquet format.
  • D. Add a key prefix of the form year-month-day/ to the S3 objects to partition the data.
  • E. Drop and recreate the table with the PARTITIONED BY clause. Run the ALTER TABLE ADD PARTITION statement.
  • F. Drop and recreate the table with the PARTITIONED BY clause. Run the MSCK REPAIR TABLE statement.

Answer: BCF
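
To tie answers B, C, and F together, here is a hedged sketch that recreates the Athena table with a PARTITIONED BY clause over the new date=year-month-day/ Parquet layout and then loads the partitions with MSCK REPAIR TABLE; the database, table, columns, and S3 locations are hypothetical.

```python
import boto3

athena = boto3.client("athena")


def run(query: str) -> str:
    response = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "logs_db"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
    )
    return response["QueryExecutionId"]


run("""
CREATE EXTERNAL TABLE logs (
    request_id  string,
    status_code int,
    message     string
)
PARTITIONED BY (`date` string)
STORED AS PARQUET
LOCATION 's3://log-archive/parquet/'
""")

run("MSCK REPAIR TABLE logs")
```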

NEW QUESTION 18
A media analytics company consumes a stream of social media posts. The posts are sent to an Amazon Kinesis data stream partitioned on user_id. An AWS Lambda function retrieves the records and validates the content before loading the posts into an Amazon Elasticsearch cluster. The validation process needs to receive the posts for a given user in the order they were received. A data analyst has noticed that, during peak hours, the social media platform posts take more than an hour to appear in the Elasticsearch cluster.
What should the data analyst do to reduce this latency?

  • A. Migrate the validation process to Amazon Kinesis Data Firehose.
  • B. Migrate the Lambda consumers from standard data stream iterators to an HTTP/2 stream consumer.
  • C. Increase the number of shards in the stream.
  • D. Configure multiple Lambda functions to process the stream.

Answer: D

NEW QUESTION 19
A company uses Amazon Elasticsearch Service (Amazon ES) to store and analyze its website clickstream data. The company ingests 1 TB of data daily using Amazon Kinesis Data Firehose and stores one day’s worth of data in an Amazon ES cluster.
The company has very slow query performance on the Amazon ES index and occasionally sees errors from Kinesis Data Firehose when attempting to write to the index. The Amazon ES cluster has 10 nodes running a single index and 3 dedicated master nodes. Each data node has 1.5 TB of Amazon EBS storage attached and the cluster is configured with 1,000 shards. Occasionally, JVMMemoryPressure errors are found in the cluster logs.
Which solution will improve the performance of Amazon ES?

  • A. Increase the memory of the Amazon ES master nodes.
  • B. Decrease the number of Amazon ES data nodes.
  • C. Decrease the number of Amazon ES shards for the index.
  • D. Increase the number of Amazon ES shards for the index.

Answer: C

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/high-jvm-memory-pressure-elasticsearch/

NEW QUESTION 20
A manufacturing company has been collecting IoT sensor data from devices on its factory floor for a year and is storing the data in Amazon Redshift for daily analysis. A data analyst has determined that, at an expected ingestion rate of about 2 TB per day, the cluster will be undersized in less than 4 months. A long-term solution is needed. The data analyst has indicated that most queries only reference the most recent 13 months of data, yet there are also quarterly reports that need to query all the data generated from the past 7 years. The chief technology officer (CTO) is concerned about the costs, administrative effort, and performance of a long-term solution.
Which solution should the data analyst use to meet these requirements?

  • A. Create a daily job in AWS Glue to UNLOAD records older than 13 months to Amazon S3 and delete those records from Amazon Redshift. Create an external table in Amazon Redshift to point to the S3 location. Use Amazon Redshift Spectrum to join to data that is older than 13 months.
  • B. Take a snapshot of the Amazon Redshift cluster. Restore the cluster to a new cluster using dense storage nodes with additional storage capacity.
  • C. Execute a CREATE TABLE AS SELECT (CTAS) statement to move records that are older than 13 months to quarterly partitioned data in Amazon Redshift Spectrum backed by Amazon S3.
  • D. Unload all the tables in Amazon Redshift to an Amazon S3 bucket using S3 Intelligent-Tiering. Use AWS Glue to crawl the S3 bucket location to create external tables in an AWS Glue Data Catalog. Create an Amazon EMR cluster using Auto Scaling for any daily analytics needs, and use Amazon Athena for the quarterly reports, with both using the same AWS Glue Data Catalog.

Answer: A
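
A hedged sketch of the daily UNLOAD step from answer A, issued through the Redshift Data API; the table, timestamp column, bucket, and IAM role are hypothetical. The unloaded Parquet files then back the Redshift Spectrum external table used for the quarterly reports.

```python
import boto3

rsd = boto3.client("redshift-data")

unload_sql = """
UNLOAD ('SELECT * FROM sensor_readings WHERE reading_ts < DATEADD(month, -13, CURRENT_DATE)')
TO 's3://iot-archive/sensor_readings/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-spectrum-role'
FORMAT AS PARQUET;
"""

rsd.execute_statement(
    ClusterIdentifier="iot-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="admin",
    Sql=unload_sql,
)
```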

NEW QUESTION 21
......

Recommended! Get the full DAS-C01 dumps in VCE and PDF from Surepassexam. Welcome to download: https://www.surepassexam.com/DAS-C01-exam-dumps.html (New 130 Q&As Version)