vendredi 28 avril 2023


 

What’s Fundamental to know in AWS

Author: Donatien MBADI OUM, Oracle | AWS | Azure

 

1.     Building blocks


AWS Global Infrastructure Map 

The AWS Cloud spans 99 Availability Zones within 31 geographic Regions around the world, with announced plans for 15 more Availability Zones and 5 more Regions in Canada, Israel, Malaysia, New Zealand, and Thailand.

-          A Region is a physical location in the world that consists of 3 or more isolated and physically separate Availability Zones (AZs).

-          An Availability Zone is one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities within a Region.

-          Edge Locations are endpoints used for caching content; typically this consists of Amazon CloudFront, the Amazon Content Delivery Network (CDN).

-          Services are a set of global cloud-based products including compute, storage, database, analytics, networking, machine learning and artificial intelligence, mobile, developer tools, IoT, security, enterprise applications, and much more.

-          Local Zones place services closer to end-users. With Local Zones, you can easily run highly demanding applications that require single-digit millisecond latencies to your end-users.

-          Wavelength enables developers to build applications that deliver single-digit millisecond latencies to mobile devices and end-users.

-          Outposts bring native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility.
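
The building blocks above can be explored programmatically. Below is a minimal sketch (assuming boto3 and default AWS credentials) that lists each Region and the zone types it contains; the ZoneType field distinguishes standard Availability Zones from Local Zones and Wavelength Zones.

```python
import boto3

# List every Region, then the kinds of zones it offers. ZoneType is
# 'availability-zone', 'local-zone', or 'wavelength-zone'.
ec2 = boto3.client("ec2", region_name="us-east-1")
for region in ec2.describe_regions()["Regions"]:
    name = region["RegionName"]
    regional = boto3.client("ec2", region_name=name)
    zones = regional.describe_availability_zones(AllAvailabilityZones=True)
    kinds = sorted({z["ZoneType"] for z in zones["AvailabilityZones"]})
    print(f"{name}: {kinds}")
```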

2.    How to choose a Region

 


-          Compliance with data governance and legal requirements: Data never leaves a region without your explicit permission

-          Proximity to customers: reduced latency

-          Availability of services within a Region: New services and new features aren’t available in every Region

-          Pricing: varies from Region to Region and is transparent on the service pricing page.
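
To check the third point, service availability, without clicking through the console, one option is the public SSM parameters that AWS publishes under /aws/service/global-infrastructure. A hedged sketch, assuming boto3 and default credentials; dynamodb is just an example service key:

```python
import boto3

# Regions where a given service is offered, read from AWS's public
# SSM parameters. Swap 'dynamodb' for any other service key.
ssm = boto3.client("ssm", region_name="us-east-1")
path = "/aws/service/global-infrastructure/services/dynamodb/regions"
regions = []
for page in ssm.get_paginator("get_parameters_by_path").paginate(Path=path):
    regions.extend(p["Value"] for p in page["Parameters"])
print(sorted(regions))
```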

 

3.     Six pillars of the Well-Architected Framework


The six pillars of the framework

Creating software is like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building.

When building technology solutions on AWS, if you neglect the six pillars of operational excellence, security, reliability, performance efficiency, cost optimization and sustainability, it can become challenging to build a system that delivers on your expectations and requirements.

-          Operational Excellence: it includes the ability to support development and run workloads effectively, gain insight into their operation, and continuously improve supporting processes and procedures to deliver business value. Operations teams need to understand their business and customer needs so they can support business outcomes. There are five design principles for operational excellence in the cloud:

o   Perform operation as code

o   Make frequent, small, reversible changes

o   Refine operations procedures frequently

o   Anticipate failure

o   Learn from all operational failures

-          Security: it includes the ability to protect data, systems, and assets, and to take advantage of cloud technologies to improve your security. Before you architect any workload, you need to put in place practices that influence security. You’ll want to control who can do what. There are seven design principles for security in the cloud:

o   Implement a strong identity foundation

o   Enable traceability

o   Apply security at all layers

o   Automate security best practices

o   Protect data in transit and at rest

o   Keep people away from data

o   Prepare for security events

-          Reliability: this pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle. There are five design principles for reliability in the cloud:

o   Automatically recover from failure

o   Test recovery procedures

o   Scale horizontally to increase aggregate workload availability

o   Stop guessing capacity

o   Manage change in automation

-          Performance efficiency: it includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. Take a data-driven approach to building a high-performance architecture. There are five design principles for performance efficiency in the cloud:

o   Democratize advanced technologies

o   Go global in minutes

o   Use serverless architectures

o   Experiment more often

o   Consider mechanical sympathy

-          Cost optimization: it includes the ability to run systems to deliver business value at the lowest price point. Using the appropriate services, resources, and configurations for your workloads is key to cost savings. There are five design principles for cost optimization in the cloud:

o   Implement cloud financial management

o   Adopt a consumption model

o   Measure overall efficiency

o   Stop spending money on undifferentiated heavy lifting

o   Analyze and attribute expenditure

-          Sustainability: this pillar addresses the long-term environmental, economic, and societal impact of your business activities. Choose the Regions where you will implement workloads based on your business requirements and sustainability goals. There are six design principles for sustainability in the cloud:

o   Understand your impact

o   Establish sustainability goals

o   Maximize utilization

o   Anticipate and adopt new, more efficient hardware and software offerings

o   Use managed services

o   Reduce the downstream impact of your cloud workloads.


4.     Shared Responsibility Model

A Shared Responsibility Model

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate.

AWS is responsible for the “Security of the Cloud”: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

The customer is responsible for “Security in the Cloud”: the customer’s responsibility is determined by the AWS Cloud services that the customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities.


jeudi 27 avril 2023


 


 

Amazon AWS Certified Database - Specialty Exam Practice

Donatien MBADI, AWS Database Specialty Certified

 

I have tried to answer the questions below from ExamTopics, AWS Exam Prep, and A Cloud Guru, based on AWS references and other AWS blogs. I cannot guarantee the veracity of all the answers, so your suggestions are welcome.

 

 

Question 1

 

A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database 1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details. When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a ‘could not connect to server: Connection times out’ error message to Amazon CloudWatch Logs. What is the cause of this error?

A. 

The user name and password the application is using are incorrect.

 

B. 

The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.

 

C. 

The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
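
If C is the cause, which is the classic reason for a connection timeout from the application tier, the fix is an inbound rule on the DB instance's security group that references the application servers' security group. A minimal sketch with boto3; both group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
# Allow MySQL (TCP 3306) into the DB security group from the
# application servers' security group. Both IDs are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db11111111111111",  # DB instance security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app2222222222222"}],
    }],
)
```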

 

D. 

The user name and password are correct, but the user is not authorized to use the DB instance.

 

 

 

Question 2

 

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)

 

A. 

Set DeletionProtection to True

 

https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-rds-now-provides-database-deletion-protection/

You can now enable deletion protection for your Amazon RDS database instances and Amazon Aurora database clusters. When a database instance or cluster is configured with deletion protection, the database cannot be deleted by any user. Deletion protection is available for Amazon Aurora and Amazon RDS for MySQL, MariaDB, Oracle, PostgreSQL, and SQL Server database instances in all AWS Regions.

Deletion protection is now enabled by default when you select the "production" option for database instances created through the AWS Console. You can also turn on or off deletion protection for an existing database instance or cluster with a few clicks in the AWS Console or the AWS Command Line Interface. Deletion protection is enforced in the AWS Console, the CLI, and API.

 

B. 

Set MultiAZ to True

 

C. 

Set TerminationProtection to True

 

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html

 

D. 

Set DeleteAutomatedBackups to False

 

E. 

Set DeletionPolicy to Delete

 

F. 

Set DeletionPolicy to Retain

 

https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-accidental-updates/?nc1=h_ls

You can prevent a stack from being accidentally deleted by enabling termination protection on the stack. If a user attempts to delete a stack with termination protection enabled, the deletion fails and the stack, including its status, remains unchanged. You can enable termination protection on a stack when you create it. Termination protection on stacks is disabled by default. You can set termination protection on a stack with any status except DELETE_IN_PROGRESS or DELETE_COMPLETE
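
A sketch of how the chosen settings fit together: DeletionProtection and DeletionPolicy: Retain live in the template, while termination protection is a flag on the stack itself. The template below is trimmed to the relevant keys and only validated, not deployed; all names are placeholders.

```python
import boto3

TEMPLATE = """\
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain          # keep the instance if the stack is deleted
    Properties:
      DeletionProtection: true      # block DeleteDBInstance calls
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: "{{resolve:ssm-secure:/example/dbpassword:1}}"
"""

cfn = boto3.client("cloudformation")
cfn.validate_template(TemplateBody=TEMPLATE)
# Termination protection is a stack-level setting, e.g. at creation:
# cfn.create_stack(StackName="db-stack", TemplateBody=TEMPLATE,
#                  EnableTerminationProtection=True)
```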

 

Question 3

 

A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete. What is the MOST likely cause of the 5-minute connection outage?

 

A. 

After a database crash, Aurora needed to replay the redo log from the last database checkpoint

 

B. 

The client-side application is caching the DNS data and its TTL is set too high

 

https://repost.aws/knowledge-center/aurora-mysql-redirected-endpoint-writer

 

A client trying to connect to a database using a DNS name must resolve that DNS name to an IP address by querying a DNS server. The client then caches the responses. Per protocol, DNS responses specify the Time to Live (TTL), which governs how long the client should cache the record. Aurora DNS zones use a short TTL of five seconds. But many systems implement client caches with different settings, which can make the TTL longer.
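
You can observe this client-side caching risk directly. A sketch using the third-party dnspython package (an assumption, not something the question requires) to read the TTL that Aurora publishes on its endpoint CNAME; the endpoint name is a placeholder:

```python
import dns.resolver  # pip install dnspython

# Aurora cluster endpoints are CNAMEs with a short (~5 second) TTL. If a
# client caches the answer longer than that, it can keep connecting to
# the old writer after a failover.
endpoint = "mycluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
answer = dns.resolver.resolve(endpoint, "CNAME")
print(answer.rrset.ttl, "seconds TTL ->", answer[0].target)
```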

 

C. 

After failover, the Aurora DB cluster needs time to warm up before accepting client connections

 

D. 

There were no active Aurora Replicas in the Aurora DB cluster

 

 

Question 4

 

A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine. Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts' inability to connect?

 

 

 

A. 

Restart the DB cluster to apply the SSL change.

 

B. 

Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.

 

https://aws.amazon.com/premiumsupport/knowledge-center/rds-connect-ssl-connection/

You can use SSL or Transport Layer Security (TLS) from your application to encrypt a connection to a DB instance running MySQL, MariaDB, Microsoft SQL Server, Oracle, or PostgreSQL. SSL/TLS connections provide one layer of security by encrypting data that's transferred between your client and the DB instance. A server certificate provides an extra layer of security by validating that the connection is being made to an Amazon RDS DB instance.

 

You can download a certificate bundle that contains both the intermediate and root certificates for all AWS Regions from AWS Trust Services. If your application is on Microsoft Windows and requires a PKCS7 file, then you can download the PKCS7 certificate bundle from Amazon Trust Services. This bundle contains both the intermediate and root certificates.

 

C. 

Add explicit mappings between the Data Analysts' IP addresses and the instance in the security group assigned to the DB cluster.

 

D. 

Modify the Data Analysts' local client firewall to allow network traffic to AWS.

 

 

Question 5

 

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.
What can the Database Specialist do to reduce the overall cost?

 

A. 

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

 

B. 

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

 

C. 

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html

 

When enabling TTL on a DynamoDB table, you must identify a specific attribute name that the service will look for when determining if an item is eligible for expiration. After you enable TTL on a table, a per-partition scanner background process automatically and continuously evaluates the expiry status of items in the table.

The scanner background process compares the current time, in Unix epoch time format in seconds, to the value stored in the user-defined attribute of an item.
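
A minimal sketch of option C with boto3: enable TTL on a table, then write an item that expires two days out. The table and attribute names are placeholders.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName="events",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Items carry a Unix-epoch expiry; here, two days from now.
dynamodb.put_item(
    TableName="events",
    Item={
        "pk": {"S": "user#1234"},
        "payload": {"S": "example"},
        "expires_at": {"N": str(int(time.time()) + 2 * 24 * 3600)},
    },
)
```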

 

D. 

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

 

 

Question 6

 

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup. The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?

 

A. 

Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

 

B. 

Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.

 

C. 

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html

 

Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint.
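
A sketch of option C with boto3, assuming an existing SNS topic (the ARN is a placeholder). The event categories shown match the operations the company tracks:

```python
import boto3

rds = boto3.client("rds")
# Subscribe an SNS topic to the RDS events the tracking systems care
# about; other systems can consume the same topic.
rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-events",  # placeholder
    SourceType="db-instance",
    EventCategories=["creation", "deletion", "backup", "availability"],
    Enabled=True,
)
```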

 

D. 

Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

 

 

Question 7

 

A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.
Which approach should the Database Specialist take to securely manage the database credentials?

 

A. 

Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.

 

B. 

Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.

 

C. 

Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
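
A sketch of the retrieval side of option C, assuming the secret stores a JSON blob with username and password keys (the secret name is a placeholder); the 60-day rotation is configured on the secret itself:

```python
import json
import boto3

# Fetch DB credentials at application start-up. The instance profile's
# IAM role must allow secretsmanager:GetSecretValue on this secret.
client = boto3.client("secretsmanager")
secret = client.get_secret_value(SecretId="prod/ecommerce/db")  # placeholder
creds = json.loads(secret["SecretString"])
connect_args = {"user": creds["username"], "password": creds["password"]}
```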

 

 

 

D. 

Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.

 

Question 8

 

A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.
Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

 

A. 

Enable in-transit and at-rest encryption on the ElastiCache cluster.

 

B. 

Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.

 

C. 

Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.

 

D. 

Create an IAM policy to allow the application service roles to access all ElastiCache API actions.

 

E. 

Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.

 

F. 

Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

 

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html

Amazon ElastiCache for Redis also provides optional encryption features for data on clusters running Redis versions 3.2.6, 4.0.10 or later:

·         In-transit encryption encrypts your data whenever it is moving from one place to another, such as between nodes in your cluster or between your cluster and your application.

·         At-rest encryption encrypts your on-disk data during sync and backup operations.

 

If you want to enable in-transit or at-rest encryption, you must meet the following conditions.

·         Your cluster or replication group must be running Redis 3.2.6, 4.0.10 or later.

·         Your cluster or replication group must be created in a VPC based on Amazon VPC.

·         Optionally, you can also use AUTH and the AUTH token (password) needed to perform operations on this cluster or replication group.
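
A sketch covering options A and F together (C is ordinary security group configuration, as in Question 1): create a cluster-mode-enabled replication group with both encryption modes and a Redis AUTH token. All identifiers and the token are placeholders.

```python
import boto3

elasticache = boto3.client("elasticache")
# Cluster mode enabled (NumNodeGroups > 1), encrypted in transit and at
# rest, with AUTH required on every connection.
elasticache.create_replication_group(
    ReplicationGroupId="shared-data-cache",
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    CacheParameterGroupName="default.redis7.cluster.on",
    NumNodeGroups=2,
    ReplicasPerNodeGroup=1,
    TransitEncryptionEnabled=True,
    AtRestEncryptionEnabled=True,
    AuthToken="example-strong-token-1234567890",  # placeholder
)
```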

 

 

Question 9

 

A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?

 

A. 

Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.

B. 

Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.

 

C. 

Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.

 

D. 

Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html

 

You can also migrate from an RDS for PostgreSQL DB instance by creating an Aurora PostgreSQL read replica of an RDS for PostgreSQL DB instance. When the replica lag between the RDS for PostgreSQL DB instance and the Aurora PostgreSQL read replica is zero, you can stop replication. At this point, you can make the Aurora read replica a standalone Aurora PostgreSQL DB cluster for reading and writing.
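
A sketch of option D with boto3: create an Aurora PostgreSQL cluster that replicates from the RDS instance, then promote it at cutover. The source ARN and identifiers are placeholders, and networking details (subnet group, security groups) are omitted.

```python
import boto3

rds = boto3.client("rds")

# 1) Create an Aurora PostgreSQL cluster as a replica of the RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-target",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:source-postgres"  # placeholder
    ),
)
# The cluster still needs an instance inside it:
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pg-target-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
    DBClusterIdentifier="aurora-pg-target",
)

# 2) When replica lag reaches zero, promote to a standalone cluster.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-target")
```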

 

 

Question 10

 

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?

 

A. 

In the same Region and VPC of the source DB instance

 

B. 

In the same Region and VPC as the target DB instance

 

C. 

In the same VPC and Availability Zone as the target DB instance

 

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationInstance.VPC.Configurations.ScenarioVPCPeer

 

D. 

In the same VPC and Availability Zone as the source DB instance

 

 

Question 11

 

The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality. This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect
WHERE clause filtering the wrong set of rows. The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?

 

A. 

Quickly rewind the DB cluster to a point in time before the release using Backtrack.

 

B. 

Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.

 

C. 

Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.

 

D. 

Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html
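
A sketch of option D with boto3: take a fast copy-on-write clone of the cluster, then rewind the clone to just before the release. Identifiers and the timestamp are placeholders; copying the deleted rows back is ordinary SQL and is omitted.

```python
from datetime import datetime, timezone
import boto3

rds = boto3.client("rds")

# 1) Clone the production cluster (copy-on-write, so it's fast).
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="prod-cluster-clone",
    SourceDBClusterIdentifier="prod-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
    BacktrackWindow=8 * 3600,  # keep Backtrack enabled on the clone
)

# 2) Once the clone is available, rewind it to before the bad release.
rds.backtrack_db_cluster(
    DBClusterIdentifier="prod-cluster-clone",
    BacktrackTo=datetime(2023, 4, 27, 9, 55, tzinfo=timezone.utc),  # placeholder
)
```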

 

Question 12

 

A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)

 

A. 

Review the stack drift before modifying the template

 

B. 

Create and review a change set before applying it

 

C. 

Export the database resources as stack outputs

 

D. 

Define the database resources in a nested stack

 

E. 

Set a stack policy for the database resources

 

https://docs.amazonaws.cn/en_us/AWSCloudFormation/latest/UserGuide/best-practices.html#cfn-best-practices-changesets

 

Change sets allow you to see how proposed changes to a stack might impact your running resources before you implement them. CloudFormation doesn't make any changes to your stack until you run the change set, allowing you to decide whether to proceed with your proposed changes or create another change set.
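
A sketch of the two chosen safeguards (B and E) with boto3: preview the Application team's template as a change set, and set a stack policy that denies updates to the database resource. The stack name, file path, and logical resource ID are placeholders.

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# B) Preview the proposed changes before applying them; review the
#    result with describe_change_set before executing it.
cfn.create_change_set(
    StackName="webapp-stack",
    ChangeSetName="load-test-capacity",
    TemplateBody=open("updated-template.yaml").read(),  # placeholder path
)

# E) Deny updates to the RDS resource; allow everything else.
policy = {"Statement": [
    {"Effect": "Deny", "Action": "Update:*", "Principal": "*",
     "Resource": "LogicalResourceId/ProductionDatabase"},
    {"Effect": "Allow", "Action": "Update:*", "Principal": "*",
     "Resource": "*"},
]}
cfn.set_stack_policy(StackName="webapp-stack",
                     StackPolicyBody=json.dumps(policy))
```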

 

 

Question 13

 

A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

 

A. 

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

 

B. 

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

 

C. 

Edit and enable Aurora DB cluster cache management in parameter groups.

 

D. 

Set TCP keepalive parameters to a high value.

 

E. 

Set JDBC connection string timeout variables to a low value.

 

F. 

 

Set Java DNS caching timeouts to a high value.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.FastFailover.html

Following, you can learn how to make sure that failover occurs as fast as possible. To recover quickly after failover, you can use cluster cache management for your Aurora PostgreSQL DB cluster. For more information, see Fast recovery after failover with cluster cache management for Aurora PostgreSQL.

Some of the steps that you can take to make failover perform fast include the following:

·         Set Transmission Control Protocol (TCP) keepalives with short time frames, to stop longer running queries before the read timeout expires if there's a failure.

·         Set timeouts for Java Domain Name System (DNS) caching aggressively. Doing this helps ensure the Aurora read-only endpoint can properly cycle through read-only nodes on later connection attempts.

·         Set the timeout variables used in the JDBC connection string as low as possible. Use separate connection objects for short- and long-running queries.

·         Use the read and write Aurora endpoints that are provided to connect to the cluster.

·         Use RDS API operations to test application response on server-side failures. Also, use a packet dropping tool to test application response for client-side failures.

·         Use the AWS JDBC Driver for PostgreSQL to take full advantage of the failover capabilities of Aurora PostgreSQL
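
The keepalive and timeout advice above maps directly onto client connection parameters. The question assumes JDBC, so this is only an analogous illustration for a Python client using the third-party psycopg2 driver; connection details are placeholders.

```python
import psycopg2  # pip install psycopg2-binary

# Short keepalives and a low connect timeout let the client notice a
# failover quickly instead of hanging on a dead TCP connection.
conn = psycopg2.connect(
    host="mycluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="app",
    user="app_user",
    password="example",         # placeholder
    connect_timeout=3,          # fail fast on a dead endpoint
    keepalives=1,
    keepalives_idle=2,          # seconds of idle before probing
    keepalives_interval=2,      # seconds between probes
    keepalives_count=3,         # failed probes before dropping the link
)
```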

 

Question 14

 

A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

 

A. 

Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.

 

B. 

Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.

 

C. 

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.

 

https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html#cross-region-snapshot-copy

You can configure Amazon Redshift to automatically copy snapshots (automated or manual) for a cluster to another AWS Region. When a snapshot is created in the cluster's primary AWS Region, it's copied to a secondary AWS Region. The two AWS Regions are known respectively as the source AWS Region and destination AWS Region. If you store a copy of your snapshots in another AWS Region, you can restore your cluster from recent data if anything affects the primary AWS Region. You can configure your cluster to copy snapshots to only one destination AWS Region at a time. For a list of Amazon Redshift Regions, see Regions and endpoints in the Amazon Web Services General Reference.

When you enable Amazon Redshift to automatically copy snapshots to another AWS Region, you specify the destination AWS Region to copy the snapshots to. For automated snapshots, you can also specify the retention period to keep them in the destination AWS Region. After an automated snapshot is copied to the destination AWS Region and it reaches the retention time period there, it's deleted from the destination AWS Region. Doing this keeps your snapshot usage low. To keep the automated snapshots for a shorter or longer time in the destination AWS Region, change this retention period.
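
A sketch of option C with boto3: the snapshot copy grant is created against a KMS key in the destination Region, then cross-Region copy is enabled on the cluster in the source Region. All identifiers are placeholders.

```python
import boto3

# Grant Redshift use of a KMS key in the DESTINATION Region.
dest = boto3.client("redshift", region_name="us-west-2")
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/placeholder",
)

# Enable cross-Region snapshot copy from the SOURCE Region.
src = boto3.client("redshift", region_name="us-east-1")
src.enable_snapshot_copy(
    ClusterIdentifier="critical-data-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)
```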

 

D. 

Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.

 

Question 15

 

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?

 

A. 

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

 

B. 

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.

 

C. 

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

 

D. 

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
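
A sketch of option D using the Application Auto Scaling API (the service behind Aurora Auto Scaling), tracking average reader CPU. The cluster name and capacity limits are placeholders.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the cluster's reader count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",  # placeholder cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Track average reader CPU; Aurora adds or removes readers as needed.
aas.put_scaling_policy(
    PolicyName="reader-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```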

 

Question 16

 

A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.
Which step will provide additional security?

 

A. 

Set up NACLs that allow the entire EC2 subnet to access the DB instance

 

B. 

Disable the master user account

 

C. 

Set up a security group that blocks SSH to the DB instance

 

D. 

Set up RDS to use SSL for data in transit

 

Question 17

 

A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?

 

A. 

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

 

B. 

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

 

C. 

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

 

https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html

With the Concurrency Scaling feature, you can support thousands of concurrent users and concurrent queries, with consistently fast query performance.

When you turn on concurrency scaling, Amazon Redshift automatically adds additional cluster capacity to process an increase in both read and write queries. Users see the most current data, whether the queries run on the main cluster or a concurrency-scaling cluster. You're charged for concurrency-scaling clusters only for the time they're actively running queries.

 

D. 

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

 

 

Question 18

 

A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?

 

A. 

Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.

 

B. 

Create an AWS CloudFormation template and deploy the template to all the Regions.

 

C. 

Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.

 

https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/

 

AWS CloudFormation helps AWS customers implement an Infrastructure as Code model. Instead of setting up their environments and applications by hand, they build a template and use it to create all of the necessary resources, collectively known as a CloudFormation stack. This model removes opportunities for manual error, increases efficiency, and ensures consistent configurations over time.
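
A sketch of option C with boto3: one template, pushed to several Regions through a stack set. The account ID, Regions, and trimmed-down table definition are placeholders, and the StackSets administration/execution roles are assumed to exist.

```python
import boto3

TEMPLATE = """\
Resources:
  HighScores:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - {AttributeName: player_id, AttributeType: S}
      KeySchema:
        - {AttributeName: player_id, KeyType: HASH}
"""

cfn = boto3.client("cloudformation")
cfn.create_stack_set(StackSetName="game-scores", TemplateBody=TEMPLATE)
cfn.create_stack_instances(
    StackSetName="game-scores",
    Accounts=["123456789012"],  # placeholder
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)
```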

 

D. 

Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.

 

 

Question 19

 

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.
How can the Database Specialists accomplish this?

 

A. 

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

 

B. 

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

 

C. 

Enable Amazon RDS Performance Insights and review the appropriate dashboard

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.UsingDashboard.AnalyzeDBLoad.html

 

DB load grouped by waits and top SQL queries is the default Performance Insights dashboard view. This combination typically provides the most insight into performance issues. DB load grouped by waits shows if there are any resource or concurrency bottlenecks in the database. In this case, the SQL tab of the top load items table shows which queries are driving that load.
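
The same dashboard data is available through the Performance Insights API. A sketch with boto3, querying DB load grouped by wait event for the last hour; the DbiResourceId is a placeholder:

```python
from datetime import datetime, timedelta, timezone
import boto3

pi = boto3.client("pi")
end = datetime.now(timezone.utc)
resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    PeriodInSeconds=60,
    MetricQueries=[{
        "Metric": "db.load.avg",
        "GroupBy": {"Group": "db.wait_event"},  # group load by wait event
    }],
)
for metric in resp["MetricList"]:
    print(metric["Key"])
```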

 

D. 

Enable Enhanced Monitoring with the appropriate settings

 

 

Question 20

 

A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As part of its annual disaster recovery testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.
What should the company do to achieve this in the shortest amount of time?

 

A. 

Use a blue-green deployment with a complete application-level failover test

 

B. 

Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
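
If B is the intended answer, the console action has a one-call API equivalent. A sketch with boto3; the instance identifier is a placeholder:

```python
import boto3

# Reboot with failover forces the Multi-AZ standby to take over,
# simulating an Availability Zone failure for the test.
rds = boto3.client("rds")
rds.reboot_db_instance(
    DBInstanceIdentifier="prod-oracle",  # placeholder
    ForceFailover=True,
)
```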

 

C. 

Use RDS fault injection queries to simulate the primary node failure

 

D. 

Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone

 

 

 

 

 

 

 

 

Question 21

 

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?

 

A. 

Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

 

B. 

Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.html

In an on-premises database, the database logs reside on the file system. Amazon RDS doesn't provide host access to the database logs on the file system of your DB instance. For this reason, Amazon RDS lets you export database logs to Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data. You can also store the data in highly durable storage and manage the data with the CloudWatch Logs Agent.
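
A sketch of option B for one of the MySQL instances (the exportable log types differ for PostgreSQL); the instance and log group names are placeholders:

```python
import boto3

# 1) Publish the instance's logs to CloudWatch Logs.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql",  # placeholder
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]
    },
    ApplyImmediately=True,
)

# 2) Keep each exported log group for 90 days.
logs = boto3.client("logs")
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/orders-mysql/error",
    retentionInDays=90,
)
```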

 

C. 

Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

 

D. 

Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

 

 

Question 22

 

A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.
In the event of a primary failure, what will occur?

 

A. 

Aurora will promote an Aurora Replica that is of the same size as the primary instance

 

B. 

Aurora will promote an arbitrary Aurora Replica

 

C. 

Aurora will promote the largest-sized Aurora Replica

 

D. 

Aurora will not promote an Aurora Replica

 

 

 

 

 

 

 

Question 23

 

A company is running its line of business application on AWS, which uses Amazon RDS for MySQL at the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.
Which migration method should a Database Specialist use?

 

A. 

Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.

 

B. 

Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.

 

C. 

Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.

 

https://aws.amazon.com/blogs/database/best-practices-for-migrating-rds-for-mysql-databases-to-amazon-aurora/

 

D. 

Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.

 

 

Question 24

 

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.
Which approach will meet these requirements?

 

A. 

Use pg_audit to generate audit logs and send the logs to the Security team.

 

B. 

Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.

 

C. 

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

 

https://aws.amazon.com/about-aws/whats-new/2019/05/amazon-aurora-with-postgresql-compatibility-supports-database-activity-streams/

Database Activity Streams for Amazon Aurora with PostgreSQL compatibility provides a near real-time data stream of the database activity in your relational database to help you monitor activity. When integrated with third party database activity monitoring tools, Database Activity Streams can monitor and audit database activity to provide safeguards for your database and help meet compliance and regulatory requirements.
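
A sketch of option C with boto3: start an asynchronous, KMS-encrypted activity stream on the cluster; the Security team's consumers then attach to the Kinesis data stream named in the response. The ARNs are placeholders.

```python
import boto3

rds = boto3.client("rds")
resp = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora",
    Mode="async",  # don't block database activity on stream delivery
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder",
    ApplyImmediately=True,
)
# Consumer applications read from this Kinesis data stream.
print(resp["KinesisStreamName"])
```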

 

D. 

Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

 

 

 

 

 

 

 

Question 25

 

A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete. Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?

 

A. 

Deploy multiple read replicas and have the team members make changes to separate replica instances

 

B. 

Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot

 

C. 

Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature

 

D. 

Enable the Amazon RDS for MySQL Backtrack feature

 

 

Question 26

 

A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.
Only certain on-premises corporate network IPs should connect to the DB instance.
Connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

 

A. 

Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.

 

B. 

Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.

 

C. 

Move the DB instance to a private subnet using AWS DMS.

 

D. 

Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.

 

E. 

Disable the publicly accessible setting.

 

F. 

Connect to the DB instance using private IPs and a VPN.

 

 

 

 

 

 

 

Question 27

 

A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.
What should the Database Specialist do to meet these requirements?

 

A. 

Restore a snapshot from the production cluster into test clusters

 

B. 

Create logical dumps of the production cluster and restore them into new test clusters

 

C. 

Use database cloning to create clones of the production cluster

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html

 

By using Aurora cloning, you can create a new cluster that initially shares the same data pages as the original, but is a separate and independent volume. The process is designed to be fast and cost-effective. The new cluster with its associated data volume is known as a clone. Creating a clone is faster and more space-efficient than physically copying the data using other techniques, such as restoring a snapshot.

 

D. 

Add an additional read replica to the production cluster and use that node for testing

 

 

Question 28

 

A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions. This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.
Which set of actions will meet these requirements?

 

A. 

Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

 

B. 

Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

 

C. 

Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

 

D. 

Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

 

 

 

Question 29

 

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.
Which approach will MOST effectively meet these requirements?

 

A. 

Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.

 

B. 

Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.

 

C. 

Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.

 

D. 

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html

 

AWS DMS provides support for data validation to ensure that your data was migrated accurately from the source to the target. If enabled, validation begins immediately after a full load is performed for a table. Validation compares the incremental changes for a CDC-enabled task as they occur.
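
A sketch of option D with boto3: validation is switched on in the task settings JSON when the DMS task is created. The ARNs are placeholders, and the table mapping simply includes everything.

```python
import json
import boto3

dms = boto3.client("dms")
# With EnableValidation on, DMS compares source and target rows after
# full load (and during CDC) and reports mismatches per table.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]}),
    ReplicationTaskSettings=json.dumps(
        {"ValidationSettings": {"EnableValidation": True}}
    ),
)
```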

 

 

Question 30

 

A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL. The Database Specialist wrote and ran a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?

 

A. 

Stop the source instances before stopping their read replicas

 

B. 

Delete each read replica before stopping its corresponding source instance

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html
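
A sketch of option B with boto3: delete each replica first (a source with read replicas cannot be stopped), wait for the deletions, then stop the sources. SkipFinalSnapshot is an assumption; keep a final snapshot if the replicas could hold anything unique.

```python
import boto3

rds = boto3.client("rds")
instances = rds.describe_db_instances()["DBInstances"]
replicas = [i for i in instances
            if i.get("ReadReplicaSourceDBInstanceIdentifier")]
sources = [i for i in instances
           if not i.get("ReadReplicaSourceDBInstanceIdentifier")]

# 1) Delete the read replicas and wait for them to disappear.
for rep in replicas:
    rds.delete_db_instance(
        DBInstanceIdentifier=rep["DBInstanceIdentifier"],
        SkipFinalSnapshot=True,  # assumption: replicas hold no unique data
    )
    rds.get_waiter("db_instance_deleted").wait(
        DBInstanceIdentifier=rep["DBInstanceIdentifier"])

# 2) Now the source instances can be stopped.
for src in sources:
    rds.stop_db_instance(DBInstanceIdentifier=src["DBInstanceIdentifier"])
```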

 

C. 

Stop the read replicas before stopping their source instances

 

D. 

Use the AWS CLI to stop each read replica and source instance at the same time

 

 

 

 

 

 

 

Question 31

 

A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user's browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.
Which database solution meets these requirements?

 

A. 

Amazon DocumentDB

 

B. 

Amazon RDS Multi-AZ deployment

 

C. 

Amazon DynamoDB global table

 

D. 

Amazon Aurora Global Database

 

 

Question 32

 

A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.
How should the Database Specialist apply the parameter group change for the DB instance?

 

A. 

Select the option to apply the change immediately

 

B. 

Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied

 

C. 

Apply the change manually by rebooting the DB instance during the approved maintenance window

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html#parameter-groups-overview.db-instance

 

DB instance parameters are either static or dynamic. They differ as follows:

·         When you change a static parameter and save the DB parameter group, the parameter change takes effect after you manually reboot the associated DB instances. For static parameters, the console always uses pending-reboot for the ApplyMethod.

·         When you change a dynamic parameter, by default the parameter change takes effect immediately, without requiring a reboot. When you use the AWS Management Console to change DB instance parameter values, it always uses immediate for the ApplyMethod for dynamic parameters. To defer the parameter change until after you reboot an associated DB instance, use the AWS CLI or RDS API. Set the ApplyMethod to pending-reboot for the parameter change.
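
A sketch of option C with boto3: stage the static parameter with the pending-reboot apply method, then reboot during the approved window. The parameter group, parameter name, and value are placeholders for the SQL Server connection-limit setting.

```python
import boto3

rds = boto3.client("rds")

# Stage the static parameter change; it does not take effect yet.
rds.modify_db_parameter_group(
    DBParameterGroupName="prod-sqlserver-params",  # placeholder
    Parameters=[{
        "ParameterName": "user connections",       # placeholder
        "ParameterValue": "500",
        "ApplyMethod": "pending-reboot",
    }],
)

# Then, during the approved maintenance window:
rds.reboot_db_instance(DBInstanceIdentifier="prod-sqlserver")
```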

 

D. 

Reboot the secondary Multi-AZ DB instance

 

Question 33

 

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.
Which solution meets these requirements in the MOST efficient way?

 

A. 

Use Amazon RDS for MySQL as the database and use Amazon ElastiCache

 

B. 

Use Amazon DynamoDB as the database and use DynamoDB Accelerator

 

https://aws.amazon.com/dynamodb/dax/#:~:text=Amazon%20DynamoDB%20Accelerator%20(DAX)%20is,millions%20of%20requests%20per%20second

 

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.

DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.

 

C. 

Use Amazon Aurora MySQL as the database and use Aurora's buffer cache

 

D. 

Use Amazon DynamoDB as the database and use Amazon API Gateway

 

 

Question 34

 

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.
What should the company do to eliminate this application performance issue?

 

A. 

Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.

 

B. 

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.

 

C. 

Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.

 

D. 

Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.

https://aws.amazon.com/blogs/database/introduction-to-aurora-postgresql-cluster-cache-management/
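
A sketch of option D with boto3, assuming the documented apg_ccm_enabled parameter for Aurora PostgreSQL cluster cache management; instance and parameter group names are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Turn on cluster cache management in the cluster parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="prod-aurora-pg-params",  # placeholder
    Parameters=[{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",  # static parameter
    }],
)

# Tier-0 for the primary, tier-1 for the (now same-sized) replicas.
for instance, tier in [("writer-1", 0), ("replica-1", 1), ("replica-2", 1)]:
    rds.modify_db_instance(DBInstanceIdentifier=instance, PromotionTier=tier)
```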

 

 

Question 35

 

A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.
Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

 

A. 

Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.

 

B. 

Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.

 

C. 

Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.

D. Use Amazon QuickSight to view the SQL statement being run.

 

E. 

Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

 

https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/

 

Increases in CPU utilization can be caused by several factors, such as user-initiated heavy workloads, multiple concurrent queries, or long-running transactions.

To identify the source of the CPU usage in your Amazon RDS for MySQL instance, review the following approaches:

·         Enhanced Monitoring

·         Performance Insights

·         Queries that detect the cause of CPU utilization in the workload

·         Logs with activated monitoring

After you identify the source, you can analyze and optimize your workload to reduce CPU usage.

Using Enhanced Monitoring

Enhanced Monitoring provides a view at the operating system (OS) level. This view can help identify the cause of a high CPU load at a granular level. For example, you can review the load average, CPU distribution (system% or nice%), and OS process list.

 

Using Performance Insights

You can use Performance Insights to identify the exact queries that are running on the instance and causing high CPU usage. First, activate Performance Insights for MySQL. Then, you can use Performance Insights to optimize your workload. Be sure to consult with your DBA.
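
A minimal boto3 sketch that turns on both features for an existing instance (the instance identifier and monitoring role ARN are placeholders):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="sqlserver-prod",
        EnablePerformanceInsights=True,          # database load, top SQL, waits
        PerformanceInsightsRetentionPeriod=7,    # days
        MonitoringInterval=1,                    # Enhanced Monitoring granularity, seconds
        MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
        ApplyImmediately=True,
    )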

 

 

Question 36

 

A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?

 

 

A. 

Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.

 

B. 

Use reader endpoints for both the read-only workload applications.

 

C. 

Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.

 

D. 

Use custom endpoints for the two read-only applications.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html

 

Custom endpoint

A custom endpoint for an Aurora cluster represents a set of DB instances that you choose. When you connect to the endpoint, Aurora performs load balancing and chooses one of the instances in the group to handle the connection. You define which instances this endpoint refers to, and you decide what purpose the endpoint serves.
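
A hedged boto3 sketch of option D (cluster and instance names are placeholders): one custom reader endpoint per application, each pinned to its own replica, with room to add members later for load balancing:

    import boto3

    rds = boto3.client("rds")

    for endpoint_id, members in {
        "app1-reader": ["aurora-replica-1"],
        "app2-reader": ["aurora-replica-2"],
    }.items():
        rds.create_db_cluster_endpoint(
            DBClusterIdentifier="my-aurora-cluster",
            DBClusterEndpointIdentifier=endpoint_id,
            EndpointType="READER",
            StaticMembers=members,   # add more instances here to load-balance
        )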

 

 

Question 37

 

An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
Update scores in real time whenever a player is playing the game.
Retrieve a player's score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?

 

A. 

Create a global secondary index with game_id as the partition key

 

B. 

Create a global secondary index with user_id as the partition key

 

C. 

Create a composite primary key with game_id as the partition key and user_id as the sort key

 

D. 

Create a composite primary key with user_id as the partition key and game_id as the sort key

https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/
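
A short boto3 sketch of option D (table name and throughput values are placeholders): user_id as the partition key spreads writes across players, and game_id as the sort key lets a single query fetch one player's score for a given game session:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="player_scores",
        AttributeDefinitions=[
            {"AttributeName": "user_id", "AttributeType": "S"},
            {"AttributeName": "game_id", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
            {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
    )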

 

 

Question 38

 

A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.
How should the Database Specialist satisfy this new requirement?

 

A. 

Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
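
The snapshot-copy-restore flow from option A, as a rough boto3 sketch (identifiers and the KMS key alias are placeholders):

    import boto3

    rds = boto3.client("rds")

    # 1) Snapshot the unencrypted instance.
    rds.create_db_snapshot(DBInstanceIdentifier="mysql-prod",
                           DBSnapshotIdentifier="mysql-prod-plain")
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="mysql-prod-plain")

    # 2) Copy the snapshot with a KMS key; the copy is encrypted.
    rds.copy_db_snapshot(SourceDBSnapshotIdentifier="mysql-prod-plain",
                         TargetDBSnapshotIdentifier="mysql-prod-encrypted",
                         KmsKeyId="alias/aws/rds")
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="mysql-prod-encrypted")

    # 3) Restore the encrypted copy as a new, encrypted DB instance.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mysql-prod-enc",
        DBSnapshotIdentifier="mysql-prod-encrypted")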

 

B. 

Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.

C. 

Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.

 

D. 

Create an encrypted read replica of the RDS DB instance. Promote it to be the master.

 

 

Question 39

 

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.
What is the most likely reason for this?

 

A. The source DB instance has to be converted to Single-AZ first to create a read replica from it.

 

B. Enhanced Monitoring is not enabled on the source DB instance.

 

C. The minor MySQL version in the source DB instance does not support read replicas.

 

D. Automated backups are not enabled on the source DB instance.

https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstanceReadReplica.html
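
Automated backups must be enabled (BackupRetentionPeriod greater than 0) on a source instance before a read replica can be created; a minimal boto3 sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(DBInstanceIdentifier="mysql-source",
                           BackupRetentionPeriod=7,
                           ApplyImmediately=True)
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="mysql-source")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mysql-replica-1",
        SourceDBInstanceIdentifier="mysql-source")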

 

 

Question 40

 

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

 

A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.

 

B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.

 

C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
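
Lambda's 15-minute maximum timeout comfortably covers jobs that run up to 10 minutes, and both services are managed and highly available. A rough boto3 sketch of the scheduling side of option C (names and ARNs are placeholders):

    import boto3

    events = boto3.client("events")
    lam = boto3.client("lambda")

    # Nightly rule that triggers the maintenance Lambda at 02:00 UTC.
    events.put_rule(Name="nightly-db-maintenance",
                    ScheduleExpression="cron(0 2 * * ? *)")
    events.put_targets(
        Rule="nightly-db-maintenance",
        Targets=[{"Id": "purge-job",
                  "Arn": "arn:aws:lambda:us-east-1:123456789012:function:db-purge"}],
    )
    # Allow CloudWatch Events to invoke the function.
    lam.add_permission(
        FunctionName="db-purge",
        StatementId="allow-events",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn="arn:aws:events:us-east-1:123456789012:rule/nightly-db-maintenance",
    )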

 

D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.

 

 

Question 41

 

A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company's disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient.
What should a Database Specialist do to copy the database backup into a different Region?

 

 

A. 

Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region

 

B. 

Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region

 

C. 

Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region

 

D. 

Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica

 

Question 42

 

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.
What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

 

A. 

Increase the size of the DB instance storage

 

B. 

Change the underlying EBS storage type to General Purpose SSD (gp2)

 

C. 

Disable EBS optimization on the DB instance

 

D. 

Change the DB instance to an instance class with a higher maximum bandwidth

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html

 

 

Question 43

 

After restoring an Amazon RDS snapshot from 3 days ago, a company's Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

 

A. 

The restored DB instance does not have Enhanced Monitoring enabled

 

B. 

The production DB instance is using a custom parameter group

 

C. 

The restored DB instance is using the default security group

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html

 

We recommend that you retain the DB parameter group for any DB snapshots you create, so that you can associate your restored DB instance with the correct parameter group.

The default DB parameter group is associated with the restored instance, unless you choose a different one. No custom parameter settings are available in the default parameter group.

You can specify the parameter group when you restore the DB instance.
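
For example, a hedged boto3 sketch that restores the snapshot with the custom parameter group and a non-default security group attached (all names are placeholders):

    import boto3

    rds = boto3.client("rds")

    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="dev-restored",
        DBSnapshotIdentifier="prod-snapshot-3d",
        DBParameterGroupName="prod-custom-params",      # not the default group
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],   # not the default SG
    )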

 

D. 

The production DB instance is using a custom option group

 

 

Question 44

 

A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.

B. Increase the size of the ElastiCache cluster nodes to a larger instance size.

C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.

D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.Redis-RedisCluster.html

 

Business needs change. You need to either provision for peak demand or scale as demand changes. Redis (cluster mode disabled) supports scaling. You can scale read capacity by adding or deleting replica nodes, or you can scale capacity by scaling up to a larger node type. Both of these operations take time.
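
Scaling up (option B) is a single API call on the replication group; a minimal boto3 sketch with placeholder names:

    import boto3

    elasticache = boto3.client("elasticache")

    # With cluster mode disabled, write capacity scales vertically, so move
    # the whole replication group to a larger node type before the event.
    elasticache.modify_replication_group(
        ReplicationGroupId="leaderboard",
        CacheNodeType="cache.r6g.2xlarge",
        ApplyImmediately=True,
    )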

 

 

Question 45

 

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update. The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.
Which solution meets these requirements?

 

A. 

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

 

B. 

Provision a clone of the existing DB cluster for the new Application team.

 

C. 

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

 

D. 

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
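
A rough boto3 sketch of option D (cluster name and limits are placeholders): Aurora Replica auto scaling is configured through Application Auto Scaling, so readers are added under load without touching the writer:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    autoscaling.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:reporting-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )
    autoscaling.put_scaling_policy(
        PolicyName="reader-cpu-target",
        ServiceNamespace="rds",
        ResourceId="cluster:reporting-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,   # target average reader CPU (%)
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"},
        },
    )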

 

Question 46

 

A retail company is about to migrate its online and mobile store to AWS. The company's CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.
What should the Database Specialist do to meet these requirements?

 

A. 

Use Amazon DynamoDB global tables to synchronize transactions

 

https://aws.amazon.com/dynamodb/global-tables/

 

B. 

Use Amazon EMR to copy the orders table data across Regions

 

C. 

Use Amazon Aurora Global Database to synchronize all transactions

 

D. 

Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them

 

Question 47

 

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?

 

A. 

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.

 

B. 

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.

 

https://aws.amazon.com/blogs/database/new-aws-dms-and-aws-snowball-integration-enables-mass-database-migrations-and-migrations-of-large-databases/

 

C. 

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.

 

D. 

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

 

 

Question 48

 

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company's Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data, using the largest replication instance.
How should the Database Specialist optimize the database migration using AWS DMS?

 

A. 

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

 

B. 

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

 

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.LOBS
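
LOB handling lives in the task settings JSON. A hedged boto3 sketch of task1 from option B (ARNs, names, and the table filter are placeholders; note LobChunkSize is expressed in KB):

    import json
    import boto3

    dms = boto3.client("dms")

    task_settings = {"TargetMetadata": {
        "FullLobMode": True,
        "LobChunkSize": 512000,       # 500 MB expressed in KB
        "LimitedSizeLobMode": False,
    }}
    table_mappings = {"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "lob-tables",
        "object-locator": {"schema-name": "APP", "table-name": "LOB_%"},
        "rule-action": "include",
    }]}

    dms.create_replication_task(
        ReplicationTaskIdentifier="task1-lob-tables",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:BIG",
        MigrationType="full-load",
        TableMappings=json.dumps(table_mappings),
        ReplicationTaskSettings=json.dumps(task_settings),
    )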

 

C. 

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

 

D. 

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together

 

 

Question 49

 

A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.
To prepare the new table with identical settings, which steps should be performed? (Choose two.)

 

A. 

Re-create global secondary indexes in the new table

 

B. 

Define IAM policies for access to the new table

 

C. 

Define the TTL settings

 

D. 

Encrypt the table from the AWS Management Console or use the update-table command

 

E. 

Set the provisioned read and write capacity

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CreateBackup.html

 

Restores can be faster and more cost-efficient if you choose to exclude some or all secondary indexes from being created on the new restored table.

You must manually set up the following on the restored table:

·         Auto scaling policies

·         AWS Identity and Access Management (IAM) policies

·         Amazon CloudWatch metrics and alarms

·         Tags

·         Stream settings

·         Time to Live (TTL) settings

You can only restore the entire table data to a new table from a backup. You can write to the restored table only after it becomes active.
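
So for options C and E, the provisioned capacity and TTL are reapplied to the restored table; a minimal boto3 sketch with placeholder names and values:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.update_table(
        TableName="orders-restored",
        ProvisionedThroughput={"ReadCapacityUnits": 500,
                               "WriteCapacityUnits": 500},
    )
    dynamodb.update_time_to_live(
        TableName="orders-restored",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "ttl"},
    )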

 


Question 50

 

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?

 

A. 

Organize common and environmental-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.

 

https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/

 

B. 

Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.

 

C. 

Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.

 

D. 

Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.

 

Question 51

 

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?

 

A. 

Log in to the host and run the rm $PGDATA/pg_logs/* command

 

B. 

Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html

 

Setting the log retention period

The rds.log_retention_period parameter specifies how long your Aurora PostgreSQL DB cluster keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value to anywhere from 1 day (1,440 minutes) to 7 days (10,080 minutes). Be sure that your Aurora PostgreSQL DB cluster has sufficient storage to hold the log files for the period of time.
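
A minimal boto3 sketch of option B (the parameter group name is a placeholder); for RDS for PostgreSQL the change goes in the DB parameter group:

    import boto3

    rds = boto3.client("rds")

    # 1,440 minutes = 1 day; older log files are then removed automatically.
    rds.modify_db_parameter_group(
        DBParameterGroupName="postgres-custom-params",
        Parameters=[{"ParameterName": "rds.log_retention_period",
                     "ParameterValue": "1440",
                     "ApplyMethod": "immediate"}],
    )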

 

C. 

Create a ticket with AWS Support to have the logs deleted

 

D. 

Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs

 

 

Question 52

 

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.
What should a Database Specialist recommend for this user?

 

A. 

Create an Amazon DynamoDB table with provisioned capacity mode

 

B. 

Create an Amazon DocumentDB cluster

 

C. 

Create an Amazon DynamoDB table with on-demand capacity mode

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks

 

On-demand mode

Amazon DynamoDB on-demand is a flexible billing option capable of serving thousands of requests per second without capacity planning. DynamoDB on-demand offers pay-per-request pricing for read and write requests so that you pay only for what you use.

When you choose on-demand mode, DynamoDB instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak, DynamoDB adapts rapidly to accommodate the workload. Tables that use on-demand mode deliver the same single-digit millisecond latency, service-level agreement (SLA) commitment, and security that DynamoDB already offers. You can choose on-demand for both new and existing tables and you can continue using the existing DynamoDB APIs without changing code.

On-demand mode is a good option if any of the following are true:

·         You create new tables with unknown workloads.

·         You have unpredictable application traffic.

·         You prefer the ease of paying for only what you use.
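
Creating a table in on-demand mode is just a matter of the billing mode; a short boto3 sketch with placeholder names:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="kv-store",
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",   # on-demand: no capacity planning
    )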

 

D. 

Create an Amazon Aurora Serverless DB cluster

 

 

Question 53

 

A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.
Which solution meets these requirements?

 

A. 

Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling

 

https://aws.amazon.com/blogs/database/how-to-use-amazon-dynamodb-global-tables-to-power-multiregion-architectures/

 

B. 

Use Amazon Aurora for storage and enable cross-Region Aurora Replicas

 

C. 

Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache

 

D. Use Amazon Neptune for storage

 

Question 54

 

A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.
How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

 

A. 

Set the TCP keepalive parameters low

 

B. 

Call the AWS CLI failover-db-cluster command

 

C. 

Enable Enhanced Monitoring on the DB cluster

 

D. 

Start a database activity stream on the DB cluster

 

 

Question 55

 

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?

 

A. 

Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

 

B. 

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

 

C. 

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

 

D. 

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

 

 

Question 56

 

A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.
What should the Database Specialist do to automatically collect the database logs for the Administrator?

 

A. 

Enable DocumentDB to export the logs to Amazon CloudWatch Logs

 

https://docs.aws.amazon.com/documentdb/latest/developerguide/event-auditing.html

 

When auditing is enabled, Amazon DocumentDB records Data Definition Language (DDL), Data Manipulation Language (DML), authentication, authorization, and user management events to Amazon CloudWatch Logs. When auditing is enabled, Amazon DocumentDB exports your cluster’s auditing records (JSON documents) to Amazon CloudWatch Logs. You can use Amazon CloudWatch Logs to analyze, monitor, and archive your Amazon DocumentDB auditing events.
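
With audit_logs already enabled in the cluster parameter group, the export itself is a one-call change; a minimal boto3 sketch with a placeholder cluster name:

    import boto3

    docdb = boto3.client("docdb")

    # Export the audit log type to Amazon CloudWatch Logs.
    docdb.modify_db_cluster(
        DBClusterIdentifier="docdb-marketing",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    )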

 

B. 

Enable DocumentDB to export the logs to AWS CloudTrail

 

C. 

Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs

 

D. 

Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3

 

 

Question 57

 

A company is looking to move an on-premises IBM Db2 database running AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.
What is the quickest way for the company to gather data on the migration compatibility?

 

A. 

Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.

 

B. 

Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.

 

C. 

Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.

 

D. 

Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

 

Question 58

 

An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development efforts.
How should a Database Specialist address these requirements?

 

A. 

Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB

 

B. 

Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift

 

C. 

Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance

 

D. 

Use DynamoDB Accelerator to offload the reads

 

https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/DAX.html

 

 

Question 59

 

An IT consulting company wants to reduce costs when operating its development environment databases. The company's workflow creates multiple Amazon Aurora MySQL DB clusters for each development group. The Aurora DB clusters are only used for 8 hours a day. The DB clusters can then be deleted at the end of the development cycle, which lasts 2 weeks.
Which of the following provides the MOST cost-effective solution?

 

A. 

Use AWS CloudFormation templates. Deploy a stack with the DB cluster for each development group. Delete the stack at the end of the development cycle.

 

B. 

Use the Aurora DB cloning feature. Deploy a single development and test Aurora DB instance, and create clone instances for the development groups. Delete the clones at the end of the development cycle.

 

C. 

Use Aurora Replicas. From the master automatic pause compute capacity option, create replicas for each development group, and promote each replica to master. Delete the replicas at the end of the development cycle.

 

D. 

Use Aurora Serverless. Restore current Aurora snapshot and deploy to a serverless cluster for each development group. Enable the option to pause the compute capacity on the cluster and set an appropriate timeout.

 

Question 60

 

A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target, and leveraging the Advanced Auditing feature in Aurora.
Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

 

A. 

CONNECT

 

B. 

QUERY_DCL

 

C. 

QUERY_DDL

 

D. 

QUERY_DML

 

E. 

TABLE

 

F. 

QUERY
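
The requirements map onto CONNECT (logins, logouts, and failed logins), QUERY_DCL (permission changes), and QUERY_DDL (schema changes). As a rough boto3 sketch, Advanced Auditing is driven by cluster parameters (the group name is a placeholder):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="aurora-mysql-audit-params",
        Parameters=[
            {"ParameterName": "server_audit_logging",
             "ParameterValue": "1", "ApplyMethod": "immediate"},
            {"ParameterName": "server_audit_events",
             "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL",
             "ApplyMethod": "immediate"},
        ],
    )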

 

Question 61

 

A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.
Which solution will meet these requirements at the lowest cost?

 

A. 

DynamoDB Streams

 

B. 

DynamoDB with DynamoDB Accelerator

 

C. 

DynamoDB with on-demand capacity mode

 

D. 

DynamoDB with provisioned capacity mode with Auto Scaling

 

Question 62

 

A company's Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.
Which combination of actions should the Database Specialist take? (Choose three.)

 

A. 

Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.

 

B. 

Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.

 

C. 

Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.

 

D. 

Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.

 

E. 

Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.

 

F. 

Configure the AWS Managed Microsoft AD domain controller Security Group.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerWinAuth.html

To set up Windows authentication for a SQL Server DB instance, do the following steps, explained in greater detail in Setting up Windows Authentication for SQL Server DB instances:

1.       Use AWS Managed Microsoft AD, either from the AWS Management Console or AWS Directory Service API, to create an AWS Managed Microsoft AD directory.

2.       If you use the AWS CLI or Amazon RDS API to create your SQL Server DB instance, create an AWS Identity and Access Management (IAM) role. This role uses the managed IAM policy AmazonRDSDirectoryServiceAccess and allows Amazon RDS to make calls to your directory. If you use the console to create your SQL Server DB instance, AWS creates the IAM role for you.

For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated in the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS Regions, and you can use them without any further actions. For more information, see Managing AWS STS in an AWS Region in the IAM User Guide.

3.       Create and configure users and groups in the AWS Managed Microsoft AD directory using the Microsoft Active Directory tools. For more information about creating users and groups in your Active Directory, see Manage users and groups in AWS Managed Microsoft AD in the AWS Directory Service Administration Guide.

4.       If you plan to locate the directory and the DB instance in different VPCs, enable cross-VPC traffic.

5.       Use Amazon RDS to create a new SQL Server DB instance either from the console, AWS CLI, or Amazon RDS API. In the create request, you provide the domain identifier ("d-*" identifier) that was generated when you created your directory and the name of the role you created. You can also modify an existing SQL Server DB instance to use Windows Authentication by setting the domain and IAM role parameters for the DB instance.

6.       Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do any other DB instance. Because the DB instance is joined to the AWS Managed Microsoft AD domain, you can provision SQL Server logins and users from the Active Directory users and groups in their domain. (These are known as SQL Server "Windows" logins.) Database permissions are managed through standard SQL Server permissions granted and revoked to these Windows logins.
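
Step 5 above, applied to an existing instance, is a single modification call; a hedged boto3 sketch in which the directory ID and role name are placeholders from your own environment:

    import boto3

    rds = boto3.client("rds")

    # Join the RDS SQL Server instance to the AWS Managed Microsoft AD
    # domain so corporate AD credentials can be used for Windows logins.
    rds.modify_db_instance(
        DBInstanceIdentifier="sqlserver-prod",
        Domain="d-1234567890",                          # "d-*" directory ID
        DomainIAMRoleName="rds-directoryservice-access",
        ApplyImmediately=True,
    )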

 

 

Question 63

 

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:
ERROR: could not write block 7507718 of temporary file: No space left on device
What is the cause of this error and what should the Database Specialist do to resolve this issue?

 

A. 

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

 

B. 

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

 

C. 

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

 

D. 

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

 

Question 64

 

A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL- based connections should be disallowed access to the database.
Which solution addresses these requirements?

 

 

A. 

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.

 

B. 

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.

 

C. 

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.

 

D. 

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.

https://jdbc.postgresql.org/documentation/ssl/

 

Setting rds.force_ssl=1 rejects non-SSL connections on the server side, and sslmode=verify-full both encrypts the connection and validates the server identity against the CA certificate.
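
On the client side this looks like the following hedged Python sketch (psycopg2; host, credentials, and the CA bundle path are placeholders):

    import psycopg2

    # verify-full encrypts the session and also checks that the server
    # certificate chains to the RDS CA and matches the hostname.
    conn = psycopg2.connect(
        host="mydb.cluster-abc123.us-east-1.rds.amazonaws.com",
        port=5432,
        dbname="appdb",
        user="app_user",
        password="REPLACE_ME",                 # from a secret store in practice
        sslmode="verify-full",
        sslrootcert="/etc/ssl/rds-ca-bundle.pem",
    )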

 

Question 65

 

A company is using 5 TB Amazon RDS DB instances and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must provide Auditors with data within 24 hours.
Which solution will meet these requirements and is the MOST operationally efficient?

 

A. 

Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot. Move the snapshot to the company's Amazon S3 bucket.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html

 

B. 

Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.

 

C. 

Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.

 

D. 

Create an AWS Lambda function to run on the first day of every month to create an automated RDS snapshot.

 

 

Question 66

 

A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.
Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?

 

A. 

Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret's password value with a randomly generated string set by the GenerateSecretString property.

 

B. 

Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.

 

C. 

Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecretStringTemplate property. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.

 

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-secrettargetattachment.html

 

The AWS::SecretsManager::SecretTargetAttachment resource completes the final link between a Secrets Manager secret and the associated database by adding the database connection information to the secret JSON. If you want to turn on automatic rotation for a database credential secret, the secret must contain the database connection information.
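
Once the attachment resolves, clients and rotation functions read everything they need from the secret JSON; a minimal boto3 sketch with a placeholder secret name (the username/host/port/dbname keys are the standard RDS secret structure):

    import json
    import boto3

    sm = boto3.client("secretsmanager")

    secret = json.loads(
        sm.get_secret_value(SecretId="test-db-credentials")["SecretString"])
    # Connection details added by the SecretTargetAttachment:
    print(secret["username"], secret["host"], secret["port"], secret["dbname"])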

 

D. 

Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add a SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.

 

 

Question 67

 

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.
How can the Database Specialist meet these requirements?

 

A. 

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

 

B. 

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

 

C. 

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

 

https://aws.amazon.com/blogs/database/managing-postgresql-users-and-roles/
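
Table-level privileges are enforced inside PostgreSQL itself. A sketch using psycopg2 with placeholder role, table, and connection values:

    import psycopg2

    conn = psycopg2.connect(
        host="aurora-pg.cluster-abc123.us-east-1.rds.amazonaws.com",
        dbname="appdb", user="postgres", password="REPLACE_ME")
    with conn, conn.cursor() as cur:
        # Lock the sensitive table down, then grant back narrowly.
        cur.execute("REVOKE ALL ON customers_pii FROM PUBLIC;")
        cur.execute("GRANT SELECT ON customers_pii TO reporting_role;")
        cur.execute("GRANT SELECT, INSERT, UPDATE ON customers_pii TO app_role;")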

 

D. 

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

 

 

Question 68

 

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.
Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

 

A. 

Stop the DB cluster and analyze how the website responds

 

B. 

Use Aurora fault injection to crash the master DB instance

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.FaultInjectionQueries.html

 

You can test the fault tolerance of your Aurora MySQL DB cluster by using fault injection queries. Fault injection queries are issued as SQL commands to an Amazon Aurora instance. They let you schedule a simulated occurrence of one of the following events:

·         A crash of a writer or reader DB instance

·         A failure of an Aurora Replica

·         A disk failure

·         Disk congestion

When a fault injection query specifies a crash, it forces a crash of the Aurora MySQL DB instance. The other fault injection queries result in simulations of failure events, but don't cause the event to occur. When you submit a fault injection query, you also specify an amount of time for the failure event simulation to occur for.
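
Fault injection queries are issued as ordinary SQL; a hedged Python sketch (pymysql, with placeholder connection details) that crashes the writer to exercise the failover path:

    import pymysql

    conn = pymysql.connect(
        host="aurora.cluster-abc123.us-east-1.rds.amazonaws.com",
        user="admin", password="REPLACE_ME")
    try:
        with conn.cursor() as cur:
            cur.execute("ALTER SYSTEM CRASH INSTANCE;")
    except pymysql.err.OperationalError:
        # Expected: the instance crashes and the session drops, letting you
        # observe how the website handles reconnects and failover.
        pass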

 

C. 

Remove the DB cluster endpoint to simulate a master DB instance failure

 

D. 

Use Aurora Backtrack to crash the DB cluster

 

Question 69

 

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.
Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

 

A. 

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

 

B. 

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

 

C. 

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

 

D. 

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

https://aws.amazon.com/about-aws/whats-new/2021/10/rds-performance-insights-more-regions/
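
Beyond the console graphs, the same data can be pulled programmatically; a hedged boto3 sketch that groups database load by SQL statement around the slow window (the Identifier is a placeholder for your instance's DbiResourceId, visible in the RDS console):

    import boto3
    from datetime import datetime, timedelta

    pi = boto3.client("pi")

    resp = pi.get_resource_metrics(
        ServiceType="RDS",
        Identifier="db-ABCDEFGHIJKLMNOP",
        StartTime=datetime.utcnow() - timedelta(hours=2),
        EndTime=datetime.utcnow(),
        PeriodInSeconds=60,
        MetricQueries=[{"Metric": "db.load.avg",
                        "GroupBy": {"Group": "db.sql"}}],  # top SQL by load
    )
    for metric in resp["MetricList"]:
        peak = max((p.get("Value") or 0 for p in metric["DataPoints"]), default=0)
        print(metric["Key"], peak)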

 

Question 70

 

A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when survey responses are being collected, a Database Specialist sees the ProvisionedThroughputExceededException error.
What can the Database Specialist do to resolve this error? (Choose two.)

 

A. 

Change the table to use Amazon DynamoDB Streams

 

B. 

Purchase DynamoDB reserved capacity in the affected Region

 

C. 

Increase the write capacity units for the specific table

 

D. 

Change the table capacity mode to on-demand

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/switching.capacitymode.html
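
Switching an existing table to on-demand is a one-line change (the table name is a placeholder); note that a table can change billing mode only once per 24 hours:

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.update_table(TableName="survey-responses",
                          BillingMode="PAY_PER_REQUEST")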

 

E. 

Change the table type to throughput optimized

 

 

Question 71

 

A company is running a two-tier ecommerce application in one AWS account. The database tier is deployed using an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.
Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

 

A. 

Grant least privilege to groups, users, and roles

 

B. 

Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database

 

C. 

Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations

 

D. 

Use policy conditions to restrict access to selective IP addresses

 

E. 

Use AccessList Controls policy type to restrict users for database instance deletion

 

F. 

Enable AWS CloudTrail logging and Enhanced Monitoring

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/security_iam_id-based-policy-examples.html

 

Policy best practices

Identity-based policies determine whether someone can create, access, or delete Amazon RDS resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:

·         Get started with AWS managed policies and move toward least-privilege permissions – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see AWS managed policies or AWS managed policies for job functions in the IAM User Guide.

·         Apply least-privilege permissions – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions. For more information about using IAM to apply permissions, see Policies and permissions in IAM in the IAM User Guide.

·         Use conditions in IAM policies to further restrict access – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as AWS CloudFormation. For more information, see IAM JSON policy elements: Condition in the IAM User Guide.

·         Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see IAM Access Analyzer policy validation in the IAM User Guide.

·         Require multi-factor authentication (MFA) – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies.
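
As one concrete illustration of combining least privilege with MFA conditions, a hedged boto3 sketch that denies destructive RDS actions unless the caller authenticated with MFA (the policy name is a placeholder):

    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyDbDeleteWithoutMFA",
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }
    iam.create_policy(PolicyName="rds-delete-requires-mfa",
                      PolicyDocument=json.dumps(policy))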

 

 

Question 72

 

A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with fewer than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, Lambda functions are unable to connect to the DB cluster and receive "too many connections" errors.
Which of the following will resolve this issue?

 

A. 

Edit the my.cnf file for the DB cluster to increase max_connections

 

B. 

Increase the instance size of the DB cluster

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Performance.html

 

C. 

Change the DB cluster to Multi-AZ

 

D. 

Increase the number of Aurora Replicas

 

 

Question 73

 

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company's code repository. The company also needs to meet compliance requirement by routinely rotating its database master password for production.
What is most secure solution to store the master password?

 

A. 

Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.

 

B. 

Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.

 

C. 

Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.

 

D. 

Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.

 

Question 74

 

A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.
Which AWS services should the Database Specialist consider? (Choose two.)

 

A. 

Amazon DynamoDB

 

B. 

Amazon Redshift

 

C. 

Amazon Neptune

 

D. 

Amazon Elasticsearch Service

 

E. 

Amazon ElastiCache

 

https://aws.amazon.com/dynamodb/

https://aws.amazon.com/products/databases/real-time-apps-elasticache-for-redis/

 

 

Question 75

 

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.
Which migration approach will be the fastest and most cost-effective to implement?

 

A. 

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html

 

By using Aurora cloning, you can create a new cluster that initially shares the same data pages as the original, but is a separate and independent volume. The process is designed to be fast and cost-effective. The new cluster with its associated data volume is known as a clone. Creating a clone is faster and more space-efficient than physically copying the data using other techniques, such as restoring a snapshot.

Aurora supports many different types of cloning:

·         You can create an Aurora provisioned clone from a provisioned Aurora DB cluster.

·         You can create an Aurora Serverless v1 clone from an Aurora Serverless v1 DB cluster.

·         You can also create Aurora Serverless v1 clones from Aurora provisioned DB clusters, and you can create provisioned clones from Aurora Serverless v1 DB clusters.

·         Clusters with Aurora Serverless v2 instances follow the same rules as provisioned clusters.

When you create a clone using a different deployment configuration than the source, the clone is created using the latest minor version of the source's Aurora DB engine.
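
Under the hood a clone is a copy-on-write point-in-time restore; a rough boto3 sketch for one nightly clone (all identifiers are placeholders), which the refresh script would run 12 times:

    import boto3

    rds = boto3.client("rds")

    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="test-clone-01",
        SourceDBClusterIdentifier="prod-aurora-mysql",
        RestoreType="copy-on-write",   # clone: storage shared until pages diverge
        UseLatestRestorableTime=True,
    )
    # A clone cluster still needs at least one instance to accept connections.
    rds.create_db_instance(
        DBInstanceIdentifier="test-clone-01-writer",
        DBClusterIdentifier="test-clone-01",
        Engine="aurora-mysql",
        DBInstanceClass="db.t3.medium",
    )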

 

B. 

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

 

C. 

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

 

D. 

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

 

 

Question 76

 

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?

 

A. 

Ensure the table is always provisioned to meet peak needs

 

B. 

Allow burst capacity to handle the additional load

 

C. 

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

 

D. 

Preprovision additional capacity for the known peaks and then reduce the capacity after the event

 

Question 77

 

A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?

 

A. 

Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)

 

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html

 

B. 

Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)

 

C. 

Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)

 

D. 

Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)

 

 

Question 78

 

A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime.
Which solution would meet these requirements?

 

A. 

Create a snapshot of the old databases and restore the snapshot with the required storage

 

B. 

Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS

 

https://repost.aws/knowledge-center/rds-db-storage-size

 

Short description

After you create an Amazon RDS DB instance, you can't modify the allocated storage size of the DB instance to decrease the total storage space it uses. To decrease the storage size of your DB instance, create a new DB instance that has less provisioned storage size. Then, migrate your data into the new DB instance using one of the following methods:

·         Use the database engine's native dump and restore method. This method causes some downtime.

·         Use AWS Database Migration Service (AWS DMS) for minimal downtime.

 

C. 

Create a new database using native backup and restore

 

D. 

Create a new read replica and make it the primary by terminating the existing primary

 

 

Question 79

 

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?

 

A. 

Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine's IP address

 

B. 

Ensure that the RDS DB instance's subnet group includes a public subnet to allow the Developer to connect

 

C. 

Ensure that the RDS DB instance has not reached its maximum connections limit

 

D. 

Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

 

 

Question 80

 

A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?

 

A. 

Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB

 

B. 

Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down

 

C. 

Enable a read replica and direct read traffic to it when Amazon RDS is down

 

D. 

Enable an Amazon RDS for MySQL Multi-AZ configuration

 

https://repost.aws/knowledge-center/rds-required-maintenance

OS maintenance

After OS maintenance is scheduled for the next maintenance window, maintenance can be postponed by adjusting your preferred maintenance window. Maintenance can also be deferred by choosing Defer Upgrade from the Actions dropdown menu. To minimize downtime, modify the Amazon RDS DB instance to a Multi-AZ deployment. For Multi-AZ deployments, OS maintenance is applied to the secondary instance first, then the instance fails over, and then the primary instance is updated. The downtime is during failover.
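
A minimal boto3 sketch of option D, assuming a DB instance identifier of mydb (hypothetical):

import boto3

rds = boto3.client("rds")

# Converting to Multi-AZ means OS patches are applied to the standby first,
# so the downtime is limited to the failover itself.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",
    MultiAZ=True,
    ApplyImmediately=True,
)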

 

Question 81

 

A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.
What could be causing these slow response times?

 

A. 

New volumes created from snapshots load lazily in the background

 

https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ebs-fast-snapshot-restore-eliminates-need-for-prewarming-data-into-volumes-created-snapshots/

 

B. 

Long-running statements on the master

 

C. 

Insufficient resources on the master

 

D. 

Overload of a single replication thread by excessive writes on the master

 

 

Question 82

 

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.
Which solution will enable this change?

 

A. 

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack's mappings.

 

B.

 Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.

 

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html

 

C. 

Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.

 

D. 

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
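
A minimal sketch of option B, shown as a template fragment deployed with boto3. The parameter names rcuCount and wcuCount come from the question; the table and attribute names are placeholders:

import boto3

TEMPLATE = """
Parameters:
  rcuCount:
    Type: Number
    Default: 5
  wcuCount:
    Type: Number
    Default: 5
Resources:
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref rcuCount    # replaces the hard-coded value
        WriteCapacityUnits: !Ref wcuCount   # independently configurable
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="dynamodb-table",
    TemplateBody=TEMPLATE,
    Parameters=[
        {"ParameterKey": "rcuCount", "ParameterValue": "20"},
        {"ParameterKey": "wcuCount", "ParameterValue": "10"},
    ],
)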

 

 

Question 83

 

A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company's main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need to have access to the application data in less than 1 second.
Which solution meets these requirements?

 

A. 

Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.

 

B. 

Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.

 

C. Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.

D. 

Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica in the ap-northeast-1 Region.

https://aws.amazon.com/rds/aurora/global-database/

 

Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each Region, and provides disaster recovery from Region-wide outages.

 

 

Question 84

 

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connection logging.
Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

 

A. 

Update the log_connections parameter in the default parameter group

 

B. 

Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance

 

C. 

Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days

 

D. 

Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days

 

E. 

Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html
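
A minimal boto3 sketch of how options B and C would be combined, assuming a PostgreSQL DB instance named mydb; the parameter group name and family are illustrative and must match the engine version in use:

import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Custom parameter group with connection logging enabled (default groups
# cannot be modified).
rds.create_db_parameter_group(
    DBParameterGroupName="pg-conn-logging",
    DBParameterGroupFamily="postgres13",   # match the actual engine version
    Description="Enables log_connections",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-conn-logging",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",        # dynamic parameter
    }],
)
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",
    DBParameterGroupName="pg-conn-logging",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)

# Keep the published log events for 180 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/mydb/postgresql",
    retentionInDays=180,
)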

 

 

Question 85

 

A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
`Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.`
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

 

A. 

Check that Amazon S3 has an IAM role granting read access to Neptune

 

B. 

Check that an Amazon S3 VPC endpoint exists

 

C. 

Check that a Neptune VPC endpoint exists

D. 

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

 

E. 

Check that Neptune has an IAM role granting read access to Amazon S3

 

https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-data.html

 

 

Question 86

 

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The amount of data stored daily can vary from 0.1% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.
What is the MOST operationally efficient and cost-effective solution?

 

A. 

Configure RDS Storage Auto Scaling.

 

https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/

 

Previously, you had to manually provision storage capacity based on anticipated application demands. Under-provisioning could result in application downtime, and over-provisioning could result in underutilized resources and higher costs. With RDS Storage Auto Scaling, you simply set your desired maximum storage limit, and Auto Scaling takes care of the rest.

RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales capacity up automatically when actual utilization approaches provisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to run your applications.

 

B. 

Configure RDS instance Auto Scaling.

 

C. 

Modify the DB instance allocated storage to meet the forecasted requirements.

 

D. 

Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.
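
Enabling Storage Auto Scaling (option A) amounts to setting a maximum storage threshold on the instance. A minimal boto3 sketch (the instance name and ceiling are illustrative):

import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage turns on RDS Storage Auto Scaling; RDS then
# grows the allocated storage automatically, up to this ceiling, as free
# space runs low.
rds.modify_db_instance(
    DBInstanceIdentifier="critical-mysql",
    MaxAllocatedStorage=1000,   # ceiling in GiB
    ApplyImmediately=True,
)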

 

 

Question 87

 

A company's database license is due for renewal. The company wants to migrate its 80 TB transactional database system from on-premises to the AWS Cloud. The migration should incur the least possible downtime on the downstream database applications. The company's network infrastructure has limited network bandwidth that is shared with other applications.
Which solution should a database specialist use for a timely migration?

 

A. 

Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.

 

B. 

Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.

C. 

Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.

 

D. 

Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.

 

 

Question 88

 

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against a read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests.
What should the database specialist do to allow the database team to create the test tables?

 

A. 

Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.

 

B. 

Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.

 

C. 

Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.

 

D. 

Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
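
Option D reflects the fact that default parameter groups cannot be modified. A minimal boto3 sketch, assuming a MySQL 8.0 replica named mydb-replica (names are illustrative):

import boto3

rds = boto3.client("rds")

rds.create_db_parameter_group(
    DBParameterGroupName="replica-writable",
    DBParameterGroupFamily="mysql8.0",   # match the replica's engine version
    Description="Read replica with read_only disabled",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="replica-writable",
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "pending-reboot",
    }],
)
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-replica",
    DBParameterGroupName="replica-writable",
)
# Reboot so the new parameter group association takes effect.
rds.reboot_db_instance(DBInstanceIdentifier="mydb-replica")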

 

 

Question 89

 

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.
Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

 

A. 

Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

 

B. 

Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

 

C. 

Create additional readers to cater to the different scenarios.

 

D. 

Use custom endpoints to satisfy the different workloads.

https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-custom-endpoints/

 

For example, you may provision a set of Aurora Replicas to use an instance type with higher memory capacity in order to run an analytics workload. A custom endpoint can then help you route the analytics workload to these appropriately-configured instances, while keeping other instances in your cluster isolated from this workload. As you add or remove instances from the custom endpoint to match your workload, the endpoint helps spread the load around.

When you create a custom endpoint, you can specify which instances are covered by it, and you can configure whether to automatically add newly created instances in the cluster to your custom endpoint. You can also add or delete instances from the endpoint at any time. You can continue to use the reader endpoint to distribute your read workload to all Aurora Replicas in the cluster, the cluster endpoint to connect to the writer instance, and instance endpoints to connect to a specific instance in your cluster.  
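
A minimal boto3 sketch of option D, grouping the two small reporting instances behind their own endpoint (cluster and instance identifiers are placeholders):

import boto3

rds = boto3.client("rds")

rds.create_db_cluster_endpoint(
    DBClusterIdentifier="prod-aurora-cluster",
    DBClusterEndpointIdentifier="hr-reporting",
    EndpointType="READER",
    StaticMembers=["small-instance-1", "small-instance-2"],  # the two small nodes
)

# OLTP traffic keeps using the cluster (writer) endpoint; the HR reporting
# application connects to the hr-reporting custom endpoint instead.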

 

 

Question 90

 

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their COPY jobs fail with the following error message:
`Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.`
The developers need to load this data soon, so a database specialist must act quickly to solve this issue.
What is the MOST secure solution?

 

A. 

Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.

 

B. 

Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.

 

C. 

Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.

 

D. 

Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.
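
The least-privilege approach in option B attaches an IAM role to the cluster and lets COPY assume it, so no long-lived keys are handed to the developers. A minimal boto3 sketch (role ARN, cluster name, and bucket are placeholders):

import boto3

redshift = boto3.client("redshift")

ROLE_ARN = "arn:aws:iam::123456789012:role/redshift-s3-readonly"  # placeholder

# Associate the existing read-only role with the cluster.
redshift.modify_cluster_iam_roles(
    ClusterIdentifier="marketing-cluster",
    AddIamRoles=[ROLE_ARN],
)

# The developers' COPY job then authenticates with the role instead of keys:
# COPY marketing_data FROM 's3://bucket/prefix/'
# IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-s3-readonly';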

 

 

Question 91

 

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.
Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

 

A. 

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

 

B. 

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

 

C. 

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

 

D. 

Create an AWS Backup plan and assign the DynamoDB table as a resource.

 

 

Question 92

 

A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance.
Which strategy would allow for a successful migration with the LEAST amount of downtime?

 

A. 

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.

 

B. 

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.

 

C. 

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.

 

D. 

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises MySQL server and let the remaining transactions replicate over. Point the application to the DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html

 

 

Question 93

 

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.
Which solution meets these requirements?

 

A. 

Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.

 

B. 

Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.

 

C. 

Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.

 

D. 

Change the DB clusters to the burstable instance family.

https://aws.amazon.com/rds/aurora/serverless/

 

 

Question 94

 

A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application's cache at startup. The company needs to store this data in a way that provides the lowest cost with a low application startup time.
Which approach will meet these requirements?

 

A. 

Use an Amazon RDS DB instance. Shut down the instance once the data has been read.

 

B. 

Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.

 

C. 

Use Amazon DynamoDB in on-demand capacity mode.

 

D. 

Use Amazon S3 and load the data from flat files.

 

Key words: "static vendor dataset" and "lowest cost".

 

Question 95

 

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.
How can this solution be implemented?

 

A. 

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

 

https://repost.aws/knowledge-center/back-up-dynamodb-s3

 

B. 

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

 

C. 

Use the AWS CLI to update the DynamoDB table and modify the partition key.

 

D. 

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

 


Question 96

 

A company is going through a security audit. The audit team has identified a cleartext master user password in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.
What should a database specialist do to mitigate this risk?

 

A. 

Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.

 

B. 

Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.

 

https://aws.amazon.com/blogs/infrastructure-and-automation/securing-passwords-in-aws-quick-starts-using-aws-secrets-manager/

 

C. 

Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.

 

D. 

Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.
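
A minimal sketch of option B as a template fragment: Secrets Manager generates the password at deploy time and the DB instance resolves it through a dynamic reference, so no cleartext password ever appears in the template. The resource names are illustrative:

import boto3

TEMPLATE = """
Resources:
  DBSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 30
        ExcludeCharacters: '"@/\\'
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: "20"
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
"""

boto3.client("cloudformation").validate_template(TemplateBody=TEMPLATE)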

 

 

Question 97

 

A company's database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint.
What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?

 

A. 

Change the restored cluster's parameter group to the original cluster's custom parameter group.

 

B. 

Change the restored cluster's parameter group to the Amazon DocumentDB default parameter group.

 

C. 

Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.

 

D. 

Run the syncInstances command in AWS DataSync.

 

When an Amazon DocumentDB cluster is restored from a snapshot, it is associated with the default cluster parameter group, in which TLS is enabled. Because the client application was configured for the TLS-disabled custom parameter group, it cannot connect to the restored cluster until that custom parameter group is reattached and the cluster is rebooted. Checking the cluster's security group inbound rules is also worthwhile, but the parameter group mismatch is the cause here.

 

 

Question 98

 

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.
Which AWS solution meets these requirements?

 

A. 

Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.

 

B. 

Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.

 

C. 

Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.

 

D. 

Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.

 

Question 99

 

A company needs to migrate Oracle Database Standard Edition running on an Amazon EC2 instance to an Amazon RDS for Oracle DB instance with Multi-AZ. The database supports an ecommerce website that runs continuously. The company can only provide a maintenance window of up to 5 minutes.
Which solution will meet these requirements?

 

A. 

Configure Oracle Real Application Clusters (RAC) on the EC2 instance and the RDS DB instance. Update the connection string to point to the RAC cluster. Once the EC2 instance and RDS DB instance are in sync, fail over from Amazon EC2 to Amazon RDS.

 

B. 

Export the Oracle database from the EC2 instance using Oracle Data Pump and perform an import into Amazon RDS. Stop the application for the entire process. When the import is complete, change the database connection string and then restart the application.

 

C. 

Configure AWS DMS with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.

 

D. 

Configure AWS DataSync with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.

Question 100

 

A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. The Amazon RDS Performance Insights dashboard shows that the load for average active sessions is often above the line that denotes maximum CPU usage, and most wait events are IO:XactSync.
What should the company do to resolve these performance issues?

 

 

A. 

Add an Aurora Replica to scale the read traffic.

 

B. 

Scale up the DB instance class.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html

 

C. 

Modify applications to commit transactions in batches.

 

D. 

Modify applications to avoid conflicts by taking locks.

 

 

Question 101

 

A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.
What is likely causing the timeouts?

 

A. 

The database is deployed in a VPC that is in a different Region.

 

B. 

The database is deployed in a VPC that is in a different Availability Zone.

 

C. 

The database is deployed with misconfigured security groups.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html

 

D. 

The database is deployed with the wrong client connect timeout configuration.

 

 

Question 102

 

A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application is slow to respond to some reporting queries.
How should the company identify the source of the problem?

 

A. 

Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.

 

B. 

Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.

 

https://aws.amazon.com/rds/performance-insights/

 

C. 

Use AWS X-Ray deployed with Amazon RDS to track query system traces.

 

D. 

Create a support request and work with AWS Support to identify the source of the issue.

 

 

Question 103

 

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants, and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with millisecond precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.
Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

 

A. 

Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

 

B. 

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

 

C. 

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

 

D. 

Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

 

https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/

 

 

Question 104

 

A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users' devices read the latest statuses of their teammates from the table using the BatchGetItem operation.
Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation.
Which recommendation would resolve this issue?

 

A. 

Ensure the DynamoDB table is configured to be always consistent.

 

B. 

Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.

 

C. 

Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.

 

D. 

Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
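
A minimal boto3 sketch of option D, assuming a hypothetical table named TeamStatus with partition key teamId:

import boto3

dynamodb = boto3.client("dynamodb")

resp = dynamodb.batch_get_item(
    RequestItems={
        "TeamStatus": {
            "Keys": [{"teamId": {"S": "team-42"}}],
            # Strongly consistent reads return the latest committed writes,
            # at the cost of doubled read capacity consumption.
            "ConsistentRead": True,
        }
    }
)
items = resp["Responses"]["TeamStatus"]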

Question 105

 

A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.
Which process should the database specialist recommend?

 

A. 

Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.

 

B. 

Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.

 

C. 

Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations

 

D. 

Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.
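
A minimal boto3 sketch of option C (identifiers are placeholders, and the waits for each snapshot to become available are omitted for brevity); the application is repointed to the restored, encrypted instance afterward:

import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="prod-mysql-unencrypted",
)

# 2. Copy the snapshot with encryption enabled.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-unencrypted",
    TargetDBSnapshotIdentifier="prod-mysql-encrypted",
    KmsKeyId="alias/aws/rds",
)

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-encrypted-instance",
    DBSnapshotIdentifier="prod-mysql-encrypted",
)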

 

 

Question 106

 

A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.
Which actions would improve the data migration speed? (Choose three.)

 

A. 

Create multiple AWS DMS tasks to migrate the large table.

 

B. 

Configure the AWS DMS replication instance with Multi-AZ.

 

C. 

Increase the capacity of the AWS DMS replication server.

 

D. 

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

 

E. 

Enable an Amazon RDS Multi-AZ configuration.

 

F. 

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

 

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.LargeTables

 


Question 107

 

A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.
Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)

 

A. 

Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.

 

B. 

Use Oracle's Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.

 

C. 

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.

 

D. 

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an Amazon Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

 

E. 

Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

 

Question 108

 

A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company's development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.
How should this be accomplished?

 

A. 

Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.

 

B. 

Create a dump file from the DB cluster. Load the dump file into a new DB cluster.

 

C. 

Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.

 

D. 

Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.

 

Question 109

 

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.
Which action will allow AWS DMS to perform the replication?

 

A. 

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

 

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html

 

B. 

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

 

C. 

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

 

D. 

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

 

 

Question 110

 

A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.
Which approach meets these requirements with no negative performance impact?

 

A. 

Enable synchronous replication.

 

B. 

Enable asynchronous binlog replication.

 

C. 

Create an Aurora Global Database.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html

 

D. 

Copy Aurora incremental snapshots to the us-east-1 Region.

 

Question 111

 

A gaming company is developing a new mobile game and decides to store the data for each user in Amazon DynamoDB. To make the registration process as easy as possible, users can log in with their existing Facebook or Amazon accounts. The company expects more than 10,000 users.
How should a database specialist implement access control with the LEAST operational effort?

 

A. 

Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WIF.html

 

B. 

Use web identity federation on the mobile app and create individual IAM users with credentials to access DynamoDB.

 

C. 

Use a self-developed user management system on the mobile app that lets users access the data from DynamoDB through an API.

 

D. 

Use a single IAM user on the mobile app to access DynamoDB.
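
A minimal sketch of the STS exchange behind option A (the role ARN and token are placeholders; in practice the mobile SDK or Amazon Cognito performs this call on the device):

import boto3

sts = boto3.client("sts")

# Exchange the Facebook or Login with Amazon token for temporary AWS
# credentials scoped by the attached IAM role.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/GamePlayerRole",  # placeholder
    RoleSessionName="player-session",
    WebIdentityToken="<token from the identity provider>",    # placeholder
)
creds = resp["Credentials"]

dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)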

 

 

Question 112

 

A large retail company recently migrated its three-tier ecommerce applications to AWS. The company's backend database is hosted on Amazon Aurora PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL attached to the wait events are all single INSERT statements.
How should this issue be resolved?

 

A. 

Modify the application to commit transactions in batches

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/apg-waits.xactsync.html

 

B. 

Add a new Aurora Replica to the Aurora DB cluster.

 

C. 

Add an Amazon ElastiCache for Redis cluster and change the application to write through.

 

D. 

Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).

 

 

Question 113

 

A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours. The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.
What should a database specialist do to meet these requirements?

 

A. 

Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.

 

B. 

Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.

 

C. 

Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption. (cost-effective)

 

D. 

Use on-demand capacity.

 

Question 114

 

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.
Which action will meet these requirements?

 

A. 

Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

 

B. 

Modify the DB instance and enable encryption.

 

C. 

Restore a DB instance from the most recent automated snapshot and enable encryption.

 

D. 

Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.

 

Question 115

 

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance.
The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.
What will happen when the modification is submitted?

 

A. 

The request will fail because this storage capacity is too large.

 

B. 

The request will succeed only if the primary instance is in active status.

 

C. 

The request will succeed only if CPU utilization is less than 10%.

 

D. 

The request will fail as the most recent modification was too soon.

 

Question 116

 

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.
Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

 

A. 

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

 

B. 

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

 

C. 

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

 

D. 

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

 

E. 

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html

 

 

Question 117

 

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.
Which approach should the database specialist take to resolve this issue without changing the application?

 

A. 

Implement sharding to distribute the load to multiple RDS for MySQL databases.

 

B. 

Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.

 

C. 

Add an RDS for MySQL read replica.

 

D.

Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).

 

 

Question 118

 

A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.
Which of the following are possible reasons why the snapshot was not created? (Choose two.)

 

A. 

A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.

 

B. 

A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.

 

C. 

The RDS maintenance window is not configured.

 

D. 

The RDS DB instance is in the STORAGE_FULL state.

 

E. RDS event notifications have not been enabled.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

Question 119

 

An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company's Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.
What should the database specialist do to achieve this? (Choose two.)

 

A. 

Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.

 

B. 

Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.

 

C. 

Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.

 

D. 

Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.

 

E. 

Enable email notifications for AWS Trusted Advisor.

 

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
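
A minimal boto3 sketch combining options B and D, assuming an existing SNS topic and an instance named shop-db (names and the threshold are illustrative):

import boto3

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:db-alerts"  # placeholder

# B: notify on failure, availability, and configuration-change events.
rds.create_event_subscription(
    SubscriptionName="shop-db-events",
    SnsTopicArn=TOPIC_ARN,
    SourceType="db-instance",
    SourceIds=["shop-db"],
    EventCategories=["failure", "availability", "configuration change"],
)

# D: alarm when free storage drops below 5 GiB.
cloudwatch.put_metric_alarm(
    AlarmName="shop-db-free-storage",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "shop-db"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5 * 1024**3,   # bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[TOPIC_ARN],
)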

 

 

Question 120

 

A large company has a variety of Amazon RDS DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories. A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.
Which AWS service or feature will help automate and achieve this objective?

 

A. 

AWS Systems Manager Parameter Store

 

B. 

DB parameter group

 

C. 

AWS Config

 

D. 

AWS Secrets Manager

 

Question 121

 

A company is developing a new web application. An AWS CloudFormation template was created as a part of the build process. Recently, a change was made to an AWS::RDS::DBInstance resource in the template. The CharacterSetName property was changed to allow the application to process international text. A change set was generated using the new template, which indicated that the existing DB instance should be replaced during an upgrade.
What should a database specialist do to prevent data loss during the stack upgrade?

 

A. 

Create a snapshot of the DB instance. Modify the template to add the DBSnapshotIdentifier property with the ID of the DB snapshot. Update the stack.

 

B. 

Modify the stack policy using the aws cloudformation update-stack command and the set-stack-policy command, then make the DB resource protected.

 

C. 

Create a snapshot of the DB instance. Update the stack. Restore the database to a new instance.

 

D. 

Deactivate any applications that are using the DB instance. Create a snapshot of the DB instance. Modify the template to add the DBSnapshotIdentifier property with the ID of the DB snapshot. Update the stack and reactivate the applications.

 

Question 122

 

A company recently acquired a new business. A database specialist must migrate an unencrypted 12 TB Amazon RDS for MySQL DB instance to a new AWS account. The database specialist needs to minimize the amount of time required to migrate the database.
Which solution meets these requirements?

 

A. 

Create a snapshot of the source DB instance in the source account. Share the snapshot with the destination account. In the target account, create a DB instance from the snapshot.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html

 

B. 

Use AWS Resource Access Manager to share the source DB instance with the destination account. Create a DB instance in the destination account using the shared resource.

 

C. 

Create a read replica of the DB instance. Give the destination account access to the read replica. In the destination account, create a snapshot of the shared read replica and provision a new RDS for MySQL DB instance.

 

D. 

Use mysqldump to back up the source database. Create an RDS for MySQL DB instance in the destination account. Use the mysql command to restore the backup in the destination database.

 

Question 123

 

A company has applications running on Amazon EC2 instances in a private subnet with no internet connectivity. The company deployed a new application that uses Amazon DynamoDB, but the application cannot connect to the DynamoDB tables. A developer already checked that all permissions are set correctly.
What should a database specialist do to resolve this issue while minimizing access to external resources?

 

A. 

Add a route to an internet gateway in the subnet's route table.

 

B. 

Add a route to a NAT gateway in the subnet's route table.

 

C. 

Assign a new security group to the EC2 instances with an outbound rule to ports 80 and 443.

 

D. 

Create a VPC endpoint for DynamoDB and add a route to the endpoint in the subnet's route table.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html

Many customers have legitimate privacy and security concerns about sending and receiving data across the public internet. You can address these concerns by using a virtual private network (VPN) to route all DynamoDB network traffic through your own corporate network infrastructure. However, this approach can introduce bandwidth and availability challenges.

VPC endpoints for DynamoDB can alleviate these challenges. A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP addresses, and you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
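
A minimal boto3 sketch of option D (the VPC, Region, and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# DynamoDB is reached through a gateway endpoint; associating the private
# subnet's route table adds the required route automatically.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)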

 

Question 124

 

The Amazon CloudWatch metric for FreeLocalStorage on an Amazon Aurora MySQL DB instance shows that the amount of local storage is below 10 MB. A database engineer must increase the local storage available in the Aurora DB instance.
How should the database engineer meet this requirement?

 

A. 

Modify the DB instance to use an instance class that provides more local SSD storage.

 

https://repost.aws/knowledge-center/aurora-mysql-local-storage

·         Storage for persistent data (called the cluster volume). This storage type increases automatically when more space is required. For more information, see What the cluster volume contains.

·         Local storage for each Aurora instance in the cluster, based on the instance class. This storage type and size is bound to the instance class, and can be changed only by moving to a larger DB instance class. Aurora for MySQL uses local storage for storing error logs, general logs, slow query logs, audit logs, and non-InnoDB temporary tables.

 

B. 

Modify the Aurora DB cluster to enable automatic volume resizing.

 

C. 

Increase the local storage by upgrading the database engine version.

 

D. 

Modify the DB instance and configure the required storage volume in the configuration section.

 

Question 125

 

A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has noticed some unexpected updates to the product and pricing information on the website, which is impacting sales targets. The marketing team wants a database specialist to audit future database activity to help identify how and when the changes are being made.
What should the database specialist do to meet these requirements? (Choose two.)

 

A. 

Create an RDS event subscription to the audit event type.

 

B. 

Enable auditing of CONNECT and QUERY_DML events.

 

C. 

SSH to the DB instance and review the database logs.

 

D. 

Publish the database logs to Amazon CloudWatch Logs.

 

E. 

Enable Enhanced Monitoring on the DB instance.

 

Question 126

 

A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games' geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.
Which solution meets these requirements?

 

A. 

Amazon RDS for MySQL with multi-Region read replicas

 

B. 

Amazon Aurora global database

 

C. 

Amazon RDS for Oracle with GoldenGate

 

D. 

Amazon DynamoDB global tables

https://aws.amazon.com/dynamodb/

 

Question 127

 

A company is running an on-premises application comprised of a web tier, an application tier, and a MySQL database tier. The database is used primarily during business hours with random activity peaks throughout the day. A database specialist needs to improve the availability and reduce the cost of the MySQL database tier as part of the company's migration to AWS.
Which MySQL database option would meet these requirements?

 

A. 

Amazon RDS for MySQL with Multi-AZ

 

B. 

Amazon Aurora Serverless MySQL cluster

https://aws.amazon.com/rds/aurora/serverless/

 

C. 

Amazon Aurora MySQL cluster

 

D. 

Amazon RDS for MySQL with read replica

 

Question 128

 

A company wants to migrate its Microsoft SQL Server Enterprise Edition database instance from on-premises to AWS. A deep review is performed and the AWS Schema Conversion Tool (AWS SCT) provides options for running this workload on Amazon RDS for SQL Server Enterprise Edition, Amazon RDS for SQL Server Standard Edition, Amazon Aurora MySQL, and Amazon Aurora PostgreSQL. The company does not want to use its own SQL Server license and does not want to change from Microsoft SQL Server.
What is the MOST cost-effective and operationally efficient solution?

 

A. 

Run SQL Server Enterprise Edition on Amazon EC2.

 

B. 

Run SQL Server Standard Edition on Amazon RDS.

 

C. 

Run SQL Server Enterprise Edition on Amazon RDS.

 

D. 

Run Amazon Aurora MySQL leveraging SQL Server on Linux compatibility libraries.

 

Question 129

 

A company's ecommerce website uses Amazon DynamoDB for purchase orders. Each order is made up of a Customer ID and an Order ID. The DynamoDB table uses the Customer ID as the partition key and the Order ID as the sort key. To meet a new requirement, the company also wants the ability to query the table by using a third attribute named Invoice ID. Queries using the Invoice ID must be strongly consistent. A database specialist must provide this capability with optimal performance and minimal overhead.
What should the database specialist do to meet these requirements?

 

A. 

Add a global secondary index on Invoice ID to the existing table.

 

B. 

Add a local secondary index on Invoice ID to the existing table.

 

C. 

Recreate the table by using the latest snapshot while adding a local secondary index on Invoice ID.

 

D. 

Use the partition key and a FilterExpression parameter with a filter on Invoice ID for all queries.

 

Question 130

 

A company wants to migrate its on-premises MySQL databases to Amazon RDS for MySQL. To comply with the company's security policy, all databases must be encrypted at rest. RDS DB instance snapshots must also be shared across various accounts to provision testing and staging environments.
Which solution meets these requirements?

 

A. 

Create an RDS for MySQL DB instance with an AWS Key Management Service (AWS KMS) customer managed CMK. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html

 

B. 

Create an RDS for MySQL DB instance with an AWS managed CMK. Create a new key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

 

C. 

Create an RDS for MySQL DB instance with an AWS owned CMK. Create a new key policy to include the administrator user name of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

 

D. 

Create an RDS for MySQL DB instance with an AWS CloudHSM key. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

 

Question 131

 

A retail company manages a web application that stores data in an Amazon DynamoDB table. The company is undergoing account consolidation efforts. A database engineer needs to migrate the DynamoDB table from the current AWS account to a new AWS account.
Which strategy meets these requirements with the LEAST amount of administrative work?

 

A. 

Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.

 

B. 

Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items of a DynamoDB table in the new account.

 

C. 

Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.

 

https://repost.aws/knowledge-center/dynamodb-cross-account-migration

 

D. 

Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.

 

Question 132

 

A company uses the Amazon DynamoDB table contractDB in us-east-1 for its contract system with the following schema:
orderID (primary key)
timestamp (sort key)
contract (map)
createdBy (string)
customerEmail (string)
After a problem in production, the operations team has asked a database specialist to provide an IAM policy to read items from the database to debug the application. In addition, the developer is not allowed to access the value of the customerEmail field to stay compliant.
Which IAM policy should the database specialist use to achieve these requirements?


A.

B.

C.

D.

(The four candidate IAM policies appear as images in the original post and are not reproduced here.)

 

Question 133

 

A company has an application that uses an Amazon DynamoDB table to store user data. Every morning, a single-threaded process calls the DynamoDB API Scan operation to scan the entire table and generate a critical start-of-day report for management. A successful marketing campaign recently doubled the number of items in the table, and now the process takes too long to run and the report is not generated in time. A database specialist needs to improve the performance of the process. The database specialist notes that, when the process is running, 15% of the table's provisioned read capacity units (RCUs) are being used.
What should the database specialist do?

 

A. 

Enable auto scaling for the DynamoDB table.

 

B. 

Use four threads and parallel DynamoDB API Scan operations.

 

C. 

Double the table's provisioned RCUs.

 

D. 

Set the Limit and Offset parameters before every call to the API.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan

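A minimal sketch of the parallel scan in answer B, assuming a hypothetical table name and four segments; each worker scans only its own segment, so together the workers cover the table exactly once:

```python
import threading

import boto3

TOTAL_SEGMENTS = 4  # one worker per segment; tune to the table size
table = boto3.resource("dynamodb").Table("DailyReport")  # hypothetical table name
results = [[] for _ in range(TOTAL_SEGMENTS)]

def scan_segment(segment):
    # Page through this segment, following LastEvaluatedKey until exhausted.
    kwargs = {"Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
    while True:
        response = table.scan(**kwargs)
        results[segment].extend(response["Items"])
        if "LastEvaluatedKey" not in response:
            break
        kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

threads = [threading.Thread(target=scan_segment, args=(s,)) for s in range(TOTAL_SEGMENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```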
 

 

Question 134

 

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem. Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.
How should a database specialist fix this issue?

 

A. 

Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.

 

B. 

Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.

 

C. 

Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.

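A minimal sketch of answer C, assuming a hypothetical table name; note that boto3 also ships a built-in table_exists waiter that wraps the same DescribeTable polling:

```python
import time

import boto3

client = boto3.client("dynamodb")
TABLE_NAME = "UserProfiles"  # hypothetical table name

client.create_table(
    TableName=TABLE_NAME,
    AttributeDefinitions=[{"AttributeName": "userId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "userId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Poll DescribeTable until the table is ACTIVE before the first write.
# Equivalent shortcut: client.get_waiter("table_exists").wait(TableName=TABLE_NAME)
while True:
    status = client.describe_table(TableName=TABLE_NAME)["Table"]["TableStatus"]
    if status == "ACTIVE":
        break
    time.sleep(2)

client.put_item(TableName=TABLE_NAME, Item={"userId": {"S": "u-123"}})
```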
 

D. 

Add a ConditionExpression parameter in the PutItem request.

 

Question 135

 

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.
Which solution meets these requirements?

 

A. 

Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.

 

B. 

Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html

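The Lambda function in answer B might run SQL along these lines; a sketch assuming a PyMySQL connection, where the endpoint, credentials, table, and bucket are placeholders and the cluster needs an IAM role that can write to the bucket:

```python
import pymysql  # assumes a PyMySQL layer is attached to the Lambda function

def handler(event, context):
    # Placeholder endpoint and credentials; in practice these would come
    # from Secrets Manager or environment variables.
    conn = pymysql.connect(host="example-cluster.cluster-abc.us-east-1.rds.amazonaws.com",
                           user="admin", password="example", database="appdb")
    try:
        with conn.cursor() as cur:
            # Aurora MySQL native export to S3; requires the cluster's
            # aws_default_s3_role (or aurora_select_into_s3_role) to allow
            # writes to the bucket.
            cur.execute("""
                SELECT * FROM observations
                WHERE created_at < NOW() - INTERVAL 1 YEAR
                INTO OUTFILE S3 's3://example-archive-bucket/observations'
                FORMAT CSV
            """)
            cur.execute("DELETE FROM observations "
                        "WHERE created_at < NOW() - INTERVAL 1 YEAR")
        conn.commit()
    finally:
        conn.close()
```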
 

C. 

Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

 

D. 

Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.

 

Question 136

 

A company developed a new application that is deployed on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances use the security group named sg-application-servers. The company needs a database to store the data from the application and decides to use an Amazon RDS for MySQL DB instance. The DB instance is deployed in a private DB subnet.
What is the MOST restrictive configuration for the DB instance security group?

 

A. 

Only allow incoming traffic from the sg-application-servers security group on port 3306.

 

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html

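The rule in answer A can be expressed with boto3 roughly as follows; the group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder group IDs: the first is the DB instance's security group,
# the second is sg-application-servers from the question.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaa1111bbbb2222",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,  # MySQL port
        "ToPort": 3306,
        # Reference the application servers' group instead of a CIDR block,
        # so only instances in that group can reach the database.
        "UserIdGroupPairs": [{"GroupId": "sg-0cccc3333dddd4444"}],
    }],
)
```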
 

B. 

Only allow incoming traffic from the sg-application-servers security group on port 443.

 

C. 

Only allow incoming traffic from the subnet of the application servers on port 3306.

 

D. 

Only allow incoming traffic from the subnet of the application servers on port 443.

 

Question 137

 

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company's network bandwidth is available.
How should the company perform this data load?

 

A. 

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

 

B. 

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

 

C. 

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

 

https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html

https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html

 

AWS DataSync is an online data movement and discovery service that simplifies data migration and helps customers quickly, easily, and securely move their file or object data to, from, and between AWS storage services.

These are some of the main use cases for DataSync:

·         Data migration – Move active datasets rapidly over the network into Amazon S3, Amazon EFS, FSx for Windows File Server, FSx for Lustre, or FSx for OpenZFS. DataSync includes automatic encryption and data integrity validation to help make sure that your data arrives securely, intact, and ready to use.

·         Amazon Neptune provides a Loader command for loading data from external files directly into a Neptune DB cluster. You can use this command instead of executing a large number of INSERT statements, addV and addE steps, or other API calls.

·         The Neptune Loader command is faster, has less overhead, is optimized for large datasets, and supports both Gremlin data and the RDF (Resource Description Framework) data used by SPARQL.

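A rough sketch of the bulk load call described above, using the loader HTTP endpoint; the cluster endpoint, bucket, and IAM role ARN are placeholders:

```python
import requests  # runs from a host inside the VPC with access to the cluster

NEPTUNE = "https://example.cluster-abc.us-east-1.neptune.amazonaws.com:8182"  # placeholder

# Kick off a bulk load of the data staged in S3.
resp = requests.post(f"{NEPTUNE}/loader", json={
    "source": "s3://example-fraud-data/graph/",
    "format": "csv",  # Gremlin CSV; RDF formats such as "ntriples" also work
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "TRUE",
})
load_id = resp.json()["payload"]["loadId"]

# Poll the load status until the job reports LOAD_COMPLETED.
status = requests.get(f"{NEPTUNE}/loader/{load_id}").json()
print(status["payload"]["overallStatus"]["status"])
```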
 

D. 

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

 

 

Question 138

 

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.
Which approach meets these requirements?

 

A. 

Set the max_connections parameter to 16,000 in the instance-level parameter group.

 

B. 

Modify the client connection timeout to 300 seconds.

 

C. 

Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.

 

https://aws.amazon.com/rds/proxy/

 

D. 

Enable the query cache at the instance level.

 

Question 139

 

A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company's application team needs to encrypt the DB instance.
What should the team do to meet this requirement?

 

A. 

Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.

 

B. 

Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

 

C. 

Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

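Answer C maps to three RDS API calls, sketched here with boto3 (instance identifiers and the KMS key alias are placeholders):

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(DBInstanceIdentifier="appdb",
                       DBSnapshotIdentifier="appdb-unencrypted")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="appdb-unencrypted")

# 2. Copying the snapshot with a KMS key yields an encrypted snapshot.
rds.copy_db_snapshot(SourceDBSnapshotIdentifier="appdb-unencrypted",
                     TargetDBSnapshotIdentifier="appdb-encrypted",
                     KmsKeyId="alias/example-rds-key")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="appdb-encrypted")

# 3. Restore the encrypted snapshot to a new, encrypted DB instance.
rds.restore_db_instance_from_db_snapshot(DBInstanceIdentifier="appdb-encrypted",
                                         DBSnapshotIdentifier="appdb-encrypted")
```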
 

D. 

Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

 

Question 140

 

A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.
Which backup methodology should the database specialist use to MINIMIZE management overhead?

 

A. 

Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that runs the command on a nightly basis.

 

B. 

Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that runs the Lambda function on a nightly basis.

 

C. 

Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.

 

D. 

Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.

 

 

Question 141

 

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries run. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.
Which actions can a database specialist perform to resolve this issue? (Choose two.)

 

A. 

Restart the application tool used to run queries.

 

B. 

Change to a database instance class with higher throughput.

 

C. 

Convert from Single-AZ to Multi-AZ.

 

D. 

Increase the I/O parameter in Amazon RDS Enhanced Monitoring.

 

E. 

Convert from General Purpose to Provisioned IOPS (PIOPS).

 

https://aws.amazon.com/blogs/database/best-storage-practices-for-running-production-workloads-on-hosted-databases-with-amazon-rds-or-amazon-ec2/

 

Question 142

 

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.
What is the MOST operationally efficient solution to meet these requirements?

 

A. 

Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.

 

B. 

Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.

 

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-secretsmanager-rotationschedule.html

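The RotationSchedule resource wires a rotation Lambda function to the secret; the equivalent call through the Secrets Manager API looks roughly like this (ARNs are placeholders):

```python
import boto3

sm = boto3.client("secretsmanager")

# Placeholder ARNs; in the template these map to the RotationSchedule
# resource's SecretId, RotationLambdaARN, and RotationRules properties.
sm.rotate_secret(
    SecretId="arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-master-AbC123",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
    RotationRules={"AutomaticallyAfterDays": 30},  # rotate every 30 days
)
```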
 

C. 

Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to ***/30***.

 

D. 

Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.

 

Question 143

 

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:
The networks and routes affected if a particular component fails.
The networks that have redundant routes between them.
The networks that do not have redundant routes between them.
The fastest path between two networks.
Which database engine meets these requirements?

 

A. 

Amazon Aurora MySQL

 

B. 

Amazon Neptune

 

C. 

Amazon ElastiCache for Redis

 

D. 

Amazon DynamoDB

 

Question 144

 

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.
Which solution meets these requirements?

 

A. 

Amazon DynamoDB with on-demand capacity mode

 

B. 

Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled

 

C. 

Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)

 

D. 

Amazon Aurora with one writer node and two cross-Region Aurora Replicas

 

Question 145

 

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular, and the company expects a tenfold increase in the user base in the next few months. The application experiences more traffic during the morning and evening hours.
This application has two parts:
An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.
A database specialist needs to design a cost-effective database solution to handle this workload.
Which solution meets these requirements?

 

A. 

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

 

B. 

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

 

C. 

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

 

D. 

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

 

Question 146

 

An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster. During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric keeps growing in Amazon CloudWatch.
What is the MOST likely reason for this occurrence?

 

A. 

A VPC endpoint was not added to access DynamoDB.

 

B. 

Strongly consistent reads are always passed through DAX to DynamoDB.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html

 

C. 

DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.

 

D. 

A VPC endpoint was not added to access CloudWatch.

 

Question 147

 

A financial company recently launched a portfolio management solution. The backend of the application is powered by Amazon Aurora with MySQL compatibility. The company requires an RTO of 5 minutes and an RPO of 5 minutes. A database specialist must configure an efficient disaster recovery solution with minimal replication lag.
Which approach should the database specialist take to meet these requirements?

 

A. 

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

 

B. 

Configure an Amazon Aurora global database and add a different AWS Region.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html

Amazon Aurora global databases span multiple AWS Regions, enabling low latency global reads and providing fast recovery from the rare outage that might affect an entire AWS Region.

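Setting up the global database in answer B takes roughly two API calls, sketched here with boto3 (identifiers and Regions are placeholders):

```python
import boto3

# In the primary Region: promote the existing cluster to a global database.
primary = boto3.client("rds", region_name="us-east-1")
primary.create_global_cluster(
    GlobalClusterIdentifier="portfolio-global",  # placeholder identifiers
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:portfolio",
)

# In the secondary Region: attach a read-only secondary cluster.
secondary = boto3.client("rds", region_name="eu-west-1")
secondary.create_db_cluster(
    DBClusterIdentifier="portfolio-secondary",
    GlobalClusterIdentifier="portfolio-global",
    Engine="aurora-mysql",
)
```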
 

C. 

Configure a binlog and create a replica in a different AWS Region.

 

D. 

Configure a cross-Region read replica.

 

Question 148

 

A company hosts an internal file-sharing application running on Amazon EC2 instances in VPC_A. This application is backed by an Amazon ElastiCache cluster, which is in VPC_B and peered with VPC_A. The company migrates its application instances from VPC_A to VPC_B. Logs indicate that the file-sharing application no longer can connect to the ElastiCache cluster.
What should a database specialist do to resolve this issue?

 

A. 

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

 

B. 

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

 

C. 

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC_B's CIDR blocks from the ElastiCache cluster.

 

D. 

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances' security group to the ElastiCache cluster.

 

https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html

 
Question 149

 

A database specialist must load 25 GB of data files from a company's on-premises storage to an Amazon Neptune database.
Which approach to load the data is FASTEST?

 

A. 

Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.

 

https://docs.aws.amazon.com/neptune/latest/userguide/load-api-reference-load.html

 

B. 

Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.

 

C. 

Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.

 

D. 

Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.

 

 

Question 150

 

A finance company needs to make sure that its MySQL database backups are available for the most recent 90 days. All of the MySQL databases are hosted on Amazon RDS for MySQL DB instances. A database specialist must implement a solution that meets the backup retention requirement with the least possible development effort.
Which approach should the database specialist take?

 

A. 

Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.

 

https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/aws-backup.html

AWS Backup is a fully managed backup service centralizing and automating the backup of data across AWS services. AWS Backup provides an orchestration layer that integrates Amazon CloudWatch, AWS CloudTrail, AWS Identity and Access Management (IAM), AWS Organizations, and other services. This centralized, AWS Cloud native solution provides global backup capabilities that can help you achieve your disaster recovery and compliance requirements. Using AWS Backup, you can centrally configure backup policies and monitor backup activity for AWS resources.

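A minimal sketch of answer A with boto3 (names and ARNs are placeholders): the plan's lifecycle enforces the 90-day retention, and a selection assigns the DB instances.

```python
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "mysql-90-day-retention",  # placeholder name
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
        "Lifecycle": {"DeleteAfterDays": 90},       # the retention requirement
    }],
})

# Assign the RDS DB instances to the plan by ARN.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "rds-mysql-fleet",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:123456789012:db:finance-mysql-1"],  # placeholder
    },
)
```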
 

B. 

Modify the DB instances to enable the automated backup option. Select the required backup retention period.

 

C. 

Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.

 

D. 

Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.

 
Question 151

 

An online advertising company uses an Amazon DynamoDB table as its data store. The table has Amazon DynamoDB Streams enabled and has a global secondary index on one of the keys. The table is encrypted using an AWS Key Management Service (AWS KMS) customer managed key.
The company has decided to expand its operations globally and wants to replicate the database in a different AWS Region by using DynamoDB global tables.
Upon review, an administrator notices the following:
No role with the dynamodb:CreateGlobalTable permission exists in the account.
An empty table with the same name exists in the new Region where replication is desired.
A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.
Which configurations will block the creation of a global table or the creation of a replica in the new Region? (Choose two.)

 

A. 

A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.

 

B. 

An empty table with the same name exists in the Region where replication is desired.

 

C. 

No role with the dynamodb:CreateGlobalTable permission exists in the account.

 

D. 

DynamoDB Streams is enabled for the table.

 

E. 

The table is encrypted using a KMS customer managed key.

 

Question 152

 

A large automobile company is migrating the database of a critical financial application to Amazon DynamoDB. The company's risk and compliance policy requires that every change in the database be recorded as a log entry for audits. The system is anticipating more than 500,000 log entries each minute. Log entries should be stored in batches of at least 100,000 records in each file in Apache Parquet format.
How should a database specialist implement these requirements with DynamoDB?

 

A. 

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.

 

B. 

Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.

 

C. 

Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.

 

D. 

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.

 

https://aws.amazon.com/blogs/big-data/streaming-amazon-dynamodb-data-into-a-centralized-data-lake/

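The Lambda function in answer D might look roughly like this (the delivery stream name is a placeholder); Kinesis Data Firehose buffers the records and can convert them to Parquet before writing to S3:

```python
import json

import boto3

firehose = boto3.client("firehose")
STREAM_NAME = "ddb-audit-logs"  # hypothetical Firehose delivery stream

def handler(event, context):
    # Each DynamoDB Streams record describes one item-level change.
    entries = [{"Data": (json.dumps(r["dynamodb"], default=str) + "\n").encode()}
               for r in event["Records"]]
    # PutRecordBatch accepts at most 500 records per call, so chunk the batch.
    for i in range(0, len(entries), 500):
        firehose.put_record_batch(DeliveryStreamName=STREAM_NAME,
                                  Records=entries[i:i + 500])
```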
 

Question 153

 

A company released a mobile game that quickly grew to 10 million daily active users in North America. The game's backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL attribute. When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that are several hours past their TTL expiry.
How should a database specialist fix this issue?

 

A. 

Use a client library that supports the TTL functionality for DynamoDB.

 

B. 

Include a query filter expression to ignore items with an expired TTL.

 

C. 

Set the ConsistentRead parameter to true when querying the table.

 

D. 

Create a local secondary index on the TTL attribute.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html

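A sketch of the read-time filter in answer B (the table, key, and attribute names are assumptions); because TTL deletion is asynchronous and best-effort, expired items must be filtered out when reading:

```python
import time

import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("GameRewards")  # hypothetical table

now = int(time.time())
# Items can survive hours past their TTL expiry, so exclude anything whose
# expiry epoch has already passed.
response = table.query(
    KeyConditionExpression=Key("playerId").eq("p-42"),   # hypothetical key
    FilterExpression=Attr("expiresAt").gt(now),          # hypothetical TTL attribute
)
items = response["Items"]
```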
 

Question 154

 

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.
The lead developer created a single DynamoDB table for the events with the following schema:
Partition key: game name
Sort key: event identifier
Local secondary index: player identifier
Event time
The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.
Which design change should a database specialist recommend to the development team?

 

A. 

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

 

B. 

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

 

C. 

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

 

D. 

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

 
Question 155

 

An ecommerce company recently migrated one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition DB instance. The company expects a spike in read traffic due to an upcoming sale. A database specialist must create a read replica of the DB instance to serve the anticipated read traffic.
Which actions should the database specialist take before creating the read replica? (Choose two.)

 

A. 

Identify a potential downtime window and stop the application calls to the source DB instance.

 

B. 

Ensure that automatic backups are enabled for the source DB instance.

 

C. 

Ensure that the source DB instance is a Multi-AZ deployment with Always ON Availability Groups.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.ReadReplicas.html

Configuring read replicas for SQL Server

Before a DB instance can serve as a source instance for replication, you must enable automatic backups on the source DB instance. To do so, set the backup retention period to a value other than 0. The source DB instance must be a Multi-AZ deployment with Always On Availability Groups (AGs); this type of deployment also enforces that automatic backups are enabled.

 

D. 

Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).

 

E. 

Modify the read replica parameter group setting and set the value to 1.

 

Question 156

 

A company is running a two-tier ecommerce application in one AWS account. The application is backed by an Amazon RDS for MySQL Multi-AZ DB instance. A developer mistakenly deleted the DB instance in the production environment. The company restored the database, but this event resulted in hours of downtime and lost revenue.
Which combination of changes would minimize the risk of this mistake occurring in the future? (Choose three.)

 

A. 

Grant least privilege to groups, IAM users, and roles.

 

B. 

Allow all users to restore a database from a backup.

 

C. 

Enable deletion protection on existing production DB instances.

 

D. 

Use an ACL policy to restrict users from DB instance deletion.

 

E. 

Enable AWS CloudTrail logging and Enhanced Monitoring.

 
Question 157

 

A financial services company uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The company is required to encrypt its data at rest at all times. The key required to decrypt the data has to be highly available, and access to the key must be limited. As a regulatory requirement, the company must have the ability to rotate the encryption key on demand. The company must be able to make the key unusable if any potential security breaches are spotted. The company also needs to accomplish these tasks with minimum overhead.
What should the database administrator use to set up the encryption to meet these requirements?

 

A. 

AWS CloudHSM

 

B. 

AWS Key Management Service (AWS KMS) with an AWS managed key

 

C. 

AWS Key Management Service (AWS KMS) with server-side encryption

 

D. 

AWS Key Management Service (AWS KMS) CMK with customer-provided material

 

AWS KMS is replacing the term customer master key (CMK) with AWS KMS key and KMS key. The concept has not changed. To prevent breaking changes, AWS KMS is keeping some variations of this term.

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt

 

Question 158

 

A company is setting up a new Amazon RDS for SQL Server DB instance. The company wants to enable SQL Server auditing on the database.
Which combination of steps should a database specialist take to meet this requirement? (Choose two.)

 

A. 

Create a service-linked role for Amazon RDS that grants permissions for Amazon RDS to store audit logs on Amazon S3.

 

B. 

Set up a parameter group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the parameter group with the DB instance.

 

C. 

Disable Multi-AZ on the DB instance, and then enable auditing. Enable Multi-AZ after auditing is enabled.

 

D. 

Disable automated backup on the DB instance, and then enable auditing. Enable automated backup after auditing is enabled.

 

E. 

Set up an options group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the options group with the DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.Audit.html

 

Question 159

 

A database specialist is creating an AWS CloudFormation stack. The database specialist wants to prevent accidental deletion of an Amazon RDS ProductionDatabase resource in the stack.
Which solution will meet this requirement?

 

A. 

Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.

 

B. 

Create an AWS CloudFormation stack in XML format. Set xAttribute as false.

 

C. 

Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.

 

D. 

Create a stack policy to prevent updates. Include "Effect" : "Deny" and "Resource" : "ProductionDatabase" in the policy.

 

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html

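A stack policy along the lines of answer D can be applied with boto3 roughly as follows (the stack name is a placeholder):

```python
import json

import boto3

cfn = boto3.client("cloudformation")

# Allow all updates, but deny Delete/Replace on the ProductionDatabase
# logical resource so it cannot be removed accidentally during an update.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["Update:Delete", "Update:Replace"],
         "Principal": "*",
         "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}
cfn.set_stack_policy(StackName="prod-stack",  # placeholder stack name
                     StackPolicyBody=json.dumps(policy))
```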
 

Question 160

 

An ecommerce company migrates an on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility). After the migration, a database specialist realizes that encryption at rest has not been turned on for the Amazon DocumentDB cluster.
What should the database specialist do to enable encryption at rest for the Amazon DocumentDB cluster?

 

A. 

Take a snapshot of the Amazon DocumentDB cluster. Restore the unencrypted snapshot as a new cluster while specifying the encryption option, and provide an AWS Key Management Service (AWS KMS) key.

 

https://docs.aws.amazon.com/documentdb/latest/developerguide/encryption-at-rest.html

 

B. 

Enable encryption for the Amazon DocumentDB cluster on the AWS Management Console. Reboot the cluster.

 

C. 

Modify the Amazon DocumentDB cluster by using the modify-db-cluster command with the --storage-encrypted parameter set to true.

 

D. 

Add a new encrypted instance to the Amazon DocumentDB cluster, and then delete an unencrypted instance from the cluster. Repeat until all instances are encrypted.

 

 

Question 161

 

A company that analyzes the stock market has two offices: one in the us-east-1 Region and another in the eu-west-2 Region. The company wants to implement an AWS database solution that can provide fast and accurate updates. The office in eu-west-2 has dashboards with complex analytical queries to display the data. The company will use these dashboards to make buying decisions, so the dashboards must have access to the application data in less than 1 second.
Which solution meets these requirements and provides the MOST up-to-date dashboard?

 

A. 

Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.

 

B. 

Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.

 

C. 

Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html

 

D. 

Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.

 

Question 162

 

A company is running its customer feedback application on Amazon Aurora MySQL. The company runs a report every day to extract customer feedback, and a team reads the feedback to determine if the customer comments are positive or negative. It sometimes takes days before the company can contact unhappy customers and take corrective measures. The company wants to use machine learning to automate this workflow.
Which solution meets this requirement with the LEAST amount of effort?

 

A. 

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.

 

B. 

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.

 

C. 

Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.

 

https://aws.amazon.com/getting-started/hands-on/sentiment-analysis-amazon-aurora-ml-integration/

When you run an ML query, Aurora calls Amazon SageMaker for a wide variety of ML algorithms or Amazon Comprehend for sentiment analysis, so your application doesn't need to call these services directly. This makes Aurora machine learning suitable for low-latency, real-time use cases such as fraud detection, ad targeting, and product recommendations.

 

D. 

Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.

 

Question 163

 

A bank plans to use an Amazon RDS for MySQL DB instance. The database should support read-intensive traffic with very few repeated queries.
Which solution meets these requirements?

 

A. 

Create an Amazon ElastiCache cluster. Use a write-through strategy to populate the cache.

 

B. 

Create an Amazon ElastiCache cluster. Use a lazy loading strategy to populate the cache.

 

C. 

Change the DB instance to Multi-AZ with a standby instance in another AWS Region.

 

D. 

Create a read replica of the DB instance. Use the read replica to distribute the read traffic.

 

Question 164

 

A database specialist has a fleet of Amazon RDS DB instances that use the default DB parameter group. The database specialist needs to associate a custom parameter group with some of the DB instances.
After the database specialist makes this change, when will the instances be assigned to this new parameter group?

 

A. 

Instantaneously after the change is made to the parameter group

 

B. 

In the next scheduled maintenance window of the DB instances

 

C. 

After the DB instances are manually rebooted

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

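Sketched with boto3 (identifiers are placeholders): associating the custom group leaves the instance in the pending-reboot state, and the group takes effect only after the manual reboot.

```python
import boto3

rds = boto3.client("rds")

# Associate the custom parameter group; the instance then shows the
# group as "pending-reboot".
rds.modify_db_instance(DBInstanceIdentifier="appdb",
                       DBParameterGroupName="custom-mysql8")

# The new group takes effect only after a manual reboot.
rds.reboot_db_instance(DBInstanceIdentifier="appdb")
```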
 

D. 

Within 24 hours after the change is made to the parameter group

 

 

Question 165

 

A company is planning on migrating a 500-GB database from Oracle to Amazon Aurora PostgreSQL using the AWS Schema Conversion Tool (AWS SCT) and AWS DMS. The database does not have any stored procedures to migrate, but some tables are large or partitioned. The application is critical for the business, so a migration with minimal downtime is preferred.
Which combination of steps should a database specialist take to accelerate the migration process? (Choose three.)

 

A. 

Use the AWS SCT data extraction agent to migrate the schema from Oracle to Aurora PostgreSQL.

 

B. 

For the large tables, change the setting for the maximum number of tables to load in parallel and perform a full load using AWS DMS.

 

C. 

For the large tables, create a table settings rule with a parallel load option in AWS DMS, then perform a full load using DMS.

 

D. 

Use AWS DMS to set up change data capture (CDC) for continuous replication until the cutover date.

 

E. 

Use AWS SCT to convert the schema from Oracle to Aurora PostgreSQL.

 

F. 

Use AWS DMS to convert the schema from Oracle to Aurora PostgreSQL and for continuous replication.

 

Question 166

 

A company is migrating an IBM Informix database to a Multi-AZ deployment of Amazon RDS for SQL Server with Always On Availability Groups (AGs). SQL Server Agent jobs on the Always On AG listener run at 5-minute intervals to synchronize data between the Informix database and the SQL Server database. Users experience hours of stale data after a successful failover to the secondary node with minimal latency.
What should a database specialist do to ensure that users see recent data after a failover?

 

A. 

Set TTL to less than 30 seconds for cached DNS values on the Always On AG listener.

 

B. 

Break up large transactions into multiple smaller transactions that complete in less than 5 minutes.

 

C.

Set the databases on the secondary node to read-only mode.

 

D. 

Create the SQL Server Agent jobs on the secondary node from a script when the secondary node takes over after a failure.

 

Question 167

 

A database specialist needs to configure an Amazon RDS for MySQL DB instance to close non-interactive connections that have been inactive for 900 seconds.
What should the database specialist do to accomplish this task?

 

A. 

Create a custom DB parameter group and set the wait_timeout parameter value to 900. Associate the DB instance with the custom parameter group.

 

https://aws.amazon.com/fr/blogs/database/best-practices-for-configuring-parameters-for-amazon-rds-for-mysql-part-3-parameters-related-to-security-operational-manageability-and-connectivity-timeout/

 

wait_timeout

 

This parameter indicates the number of seconds the server waits for activity on a noninteractive connection before closing it (non-interactive timeout). The default value is 28,800. If a client is doing nothing for wait_timeout seconds, the MySQL server terminates the connection. The proper setting for this variable depends on the particular environment.

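Answer A sketched with boto3 (the group name, family, and instance identifier are placeholders):

```python
import boto3

rds = boto3.client("rds")

rds.create_db_parameter_group(
    DBParameterGroupName="mysql-timeouts",  # placeholder names throughout
    DBParameterGroupFamily="mysql8.0",
    Description="Close idle non-interactive connections after 900s",
)

# wait_timeout is a dynamic parameter, so it can be applied immediately
# within the group.
rds.modify_db_parameter_group(
    DBParameterGroupName="mysql-timeouts",
    Parameters=[{"ParameterName": "wait_timeout",
                 "ParameterValue": "900",
                 "ApplyMethod": "immediate"}],
)

# Associating a different parameter group with the instance still requires
# a reboot before the new group is in effect.
rds.modify_db_instance(DBInstanceIdentifier="appdb",
                       DBParameterGroupName="mysql-timeouts")
```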
 

B. 

Connect to the MySQL database and run the SET SESSION wait_timeout=900 command.

 

C. 

Edit the my.cnf file and set the wait_timeout parameter value to 900. Restart the DB instance.

 

D. 

Modify the default DB parameter group and set the wait_timeout parameter value to 900.

 

Question 168

 

A company is running its production databases in a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed to the us-east-1 Region. For disaster recovery (DR) purposes, the company's database specialist needs to make the DB cluster rapidly available in another AWS Region to cover the production load with an RTO of less than 2 hours.
What is the MOST operationally efficient solution to meet these requirements?

 
A. 

Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.

 

B. 

Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html

 

C. 

Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.

 

D. 

Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.

 

 

Question 169

 

A company has an on-premises SQL Server database. The users access the database using Active Directory authentication. The company successfully migrated its database to Amazon RDS for SQL Server. However, the company is concerned about user authentication in the AWS Cloud environment.
Which solution should a database specialist provide for the user to authenticate?

 

A. 

Deploy Active Directory Federation Services (AD FS) on premises and configure it with an on-premises Active Directory. Set up delegation between the on- premises AD FS and AWS Security Token Service (AWS STS) to map user identities to a role using theAmazonRDSDirectoryServiceAccess managed IAM policy.

 

B. 

Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Use AWS SSO to configure an Active Directory user delegated to access the databases in RDS for SQL Server.

 

C. 

Use Active Directory Connector to redirect directory requests to the company's on-premises Active Directory without caching any information in the cloud. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.

 

D. 

Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Ensure RDS for SQL Server is using mixed mode authentication. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.

 

Question 170

 

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.
How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

 

A. 

Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis.

 

B. 

Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot.

 

https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html#xregioncopy-kms-encrypted-snapshot

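Answer B sketched with boto3 (Regions, identifiers, and the key ARN are placeholders):

```python
import boto3

# In the destination Region: create a snapshot copy grant for the new
# customer managed key.
dest = boto3.client("redshift", region_name="us-west-2")  # placeholder DR Region
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/11111111-2222-3333-4444-555555555555",
)

# In the source Region: turn on cross-Region snapshot copy using the grant.
src = boto3.client("redshift", region_name="us-east-1")
src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                      # days to keep copied snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)
```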
 

C. 

Copy the AWS Key Management Service (AWS KMS) customer-managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination Region, specifying the source and destination KMS key IDs in the replication configuration.

 

D. 

Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot.

 

 

Question 171

 

A database specialist is launching a test graph database using Amazon Neptune for the first time. The database specialist needs to insert millions of rows of test observations from a .csv file that is stored in Amazon S3. The database specialist has been using a series of API calls to upload the data to the Neptune DB instance.
Which combination of steps would allow the database specialist to upload the data faster? (Choose three.)

 

A. 

Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.

 

B. 

Ensure the vertices and edges are specified in different .csv files with proper header column formatting.

 

C. 

Use AWS DMS to move data from Amazon S3 to the Neptune Loader.

 

D. 

Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.

 

E. 

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.

 

F. 

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.

 

Question 172

 

A company is using Amazon DynamoDB global tables for an online gaming application. The game has players around the world. As the game has become more popular, the volume of requests to DynamoDB has increased significantly. Recently, players have reported that the game state is inconsistent between players in different countries. A database specialist observes that the ReplicationLatency metric for some of the replica tables is too high.
Which approach will alleviate the problem?

 

A. 

Configure all replica tables to use DynamoDB auto scaling.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_reqs_bestpractices.html

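Enabling auto scaling on a replica table boils down to two Application Auto Scaling calls per Region; a sketch with placeholder values:

```python
import boto3

# One client per replica Region that needs auto scaling.
aas = boto3.client("application-autoscaling", region_name="eu-west-1")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",  # placeholder table name
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=40000,
)

# Target tracking keeps consumed write capacity near 70% of provisioned.
aas.put_scaling_policy(
    PolicyName="ddb-write-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameState",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```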
 

B. 

Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.

 

C. 

Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.

 

D. 

Configure the table-level write throughput limit service quota to a higher value.

 

 

Question 173

 

A company runs a MySQL database for its ecommerce application on a single Amazon RDS DB instance. Application purchases are automatically saved to the database, which causes intensive writes. Company employees frequently generate purchase reports. The company needs to improve database performance and reduce downtime due to patching for upgrades.
Which approach will meet these requirements with the LEAST amount of operational overhead?

 

A. 

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.

 

B. 

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.

 

C. 

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html

 

D. 

Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.

 

 

Question 174

 

An ecommerce company is migrating its core application database to Amazon Aurora MySQL. The company is currently performing online transaction processing (OLTP) stress testing with concurrent database sessions. During the first round of tests, a database specialist noticed slow performance for some specific write operations. Reviewing Amazon CloudWatch metrics for the Aurora DB cluster showed 90% CPU utilization.
Which steps should the database specialist take to MOST effectively identify the root cause of high CPU utilization and slow performance? (Choose two.)

 

A. 

Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.

 

B. 

Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.

 

C. 

Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.

 

D. 

Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.

 

E. 

Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

 

https://repost.aws/knowledge-center/rds-instance-high-cpu

 

Increases in CPU utilization can be caused by several factors, such as user-initiated heavy workloads, multiple concurrent queries, or long-running transactions.

To identify the source of the CPU usage in your Amazon RDS for MySQL instance, review the following approaches:

·         Enhanced Monitoring

·         Performance Insights

·         Queries that detect the cause of CPU utilization in the workload

·         Logs with activated monitoring

After you identify the source, you can analyze and optimize your workload to reduce CPU usage.

 

Question 175

 

An online advertising company is implementing an application that displays advertisements to its users. The application uses an Amazon DynamoDB table as a data store. The application also uses a DynamoDB Accelerator (DAX) cluster to cache its reads. Most of the reads are from the GetItem query and the BatchGetItem query. Consistency of reads is not a requirement for this application.
Upon deployment, the application cache is not performing as expected. Specific strongly consistent queries that run against the DAX cluster are taking many milliseconds to respond instead of microseconds.
How can the company improve the cache behavior to increase application performance?

 

A. 

Increase the size of the DAX cluster.

 

B. 

Configure DAX to be an item cache with no query cache

 

C. 

Use eventually consistent reads instead of strongly consistent reads.

 

D. 

Create a new DAX cluster with a higher TTL for the item cache.

 

Question 176

 

A company is running its critical production workload on a 500 GB Amazon Aurora MySQL DB cluster. A database engineer must move the workload to a new Amazon Aurora Serverless MySQL DB cluster without data loss.
Which solution will accomplish the move with the LEAST downtime and the LEAST application impact?

 

A. 

Modify the existing DB cluster and update the Aurora configuration to "Serverless."

 

B. 

Create a snapshot of the existing DB cluster and restore it to a new Aurora Serverless DB cluster.

 

C. 

Create an Aurora Serverless replica from the existing DB cluster and promote it to primary when the replica lag is minimal.

 

D. 

Replicate the data between the existing DB cluster and a new Aurora Serverless DB cluster by using AWS Database Migration Service (AWS DMS) with change data capture (CDC) enabled.

 

Question 177

 

A company is building a web application on AWS. The application requires the database to support read and write operations in multiple AWS Regions simultaneously. The database also needs to propagate data changes between Regions as the changes occur. The application must be highly available and must provide latency of single-digit milliseconds.
Which solution meets these requirements?

 

A. 

Amazon DynamoDB global tables

 

B. 

Amazon DynamoDB streams with AWS Lambda to replicate the data

 

C. 

An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards

 

D. 

An Amazon Aurora global database

 

Question 178

 

A company is using Amazon Neptune as the graph database for one of its products. The company's data science team accidentally created large amounts of temporary information during an ETL process. The Neptune DB cluster automatically increased the storage space to accommodate the new data, but the data science team deleted the unused information.
What should a database specialist do to avoid unnecessary charges for the unused cluster volume space?

 

A. 

Take a snapshot of the cluster volume. Restore the snapshot in another cluster with a smaller volume size.

 

B. 

Use the AWS CLI to turn on automatic resizing of the cluster volume.

 

C. 

Export the cluster data into a new Neptune DB cluster.

 

https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-storage.html#feature-overview-storage-best-practices

 

 

D. 

Add a Neptune read replica to the cluster. Promote this replica as a new primary DB instance. Reset the storage space of the cluster.

 

Question 179

 

A database specialist is responsible for designing a highly available solution for online transaction processing (OLTP) using Amazon RDS for MySQL production databases. Disaster recovery requirements include a cross-Region deployment along with an RPO of 5 minutes and RTO of 30 minutes.
What should the database specialist do to align to the high availability and disaster recovery requirements?

 

A. 

Use a Multi-AZ deployment in each Region.

 

B. 

Use read replica deployments in all Availability Zones of the secondary Region.

 

C. 

Use Multi-AZ and read replica deployments within a Region.

 

D. 

Use Multi-AZ and deploy a read replica in a secondary Region.

 

 

Question 180

 

A media company wants to use zero-downtime patching (ZDP) for its Amazon Aurora MySQL database. Multiple processing applications are using SSL certificates to connect to database endpoints and the read replicas.
Which factor will have the LEAST impact on the success of ZDP?

 

A. 

Binary logging is enabled, or binary log replication is in progress.

 

B. 

Current SSL connections are open to the database.

 

C. 

Temporary tables or table locks are in use.

 

D. 

The value of the lower_case_table_names server parameter was set to 0 when the tables were created.

 

Question 181

 

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.
Which solution should the database specialist recommend?

 

A. 

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

 

B. 

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

 

C. 

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

 

D. 

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.Overview.html#DBActivityStreams.Overview.sync-mode

 

Asynchronous mode – When a database session generates an activity stream event, the session returns to normal activities immediately. In the background, the activity stream event is made a durable record. If an error occurs in the background task, an RDS event is sent. This event indicates the beginning and end of any time windows where activity stream event records might have been lost.

Asynchronous mode favors database performance over the accuracy of the activity stream.

 
Note:

Asynchronous mode is available for both Aurora PostgreSQL and Aurora MySQL.

 

Synchronous mode – When a database session generates an activity stream event, the session blocks other activities until the event is made durable. If the event can't be made durable for some reason, the database session returns to normal activities. However, an RDS event is sent indicating that activity stream records might be lost for some time. A second RDS event is sent after the system is back to a healthy state.

The synchronous mode favors the accuracy of the activity stream over database performance.

 
Note:

Synchronous mode is available for Aurora PostgreSQL. You can't use synchronous mode with Aurora MySQL.
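To make the mode selection above concrete, here is a minimal boto3 sketch that starts a database activity stream in asynchronous mode; the cluster ARN and KMS key alias are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Start an activity stream in asynchronous mode (favors performance).
# The ResourceArn and KmsKeyId values are hypothetical placeholders.
response = rds.start_activity_stream(
    ResourceArn='arn:aws:rds:us-east-1:111122223333:cluster:aurora-pg-prod',
    Mode='async',
    KmsKeyId='alias/das-key',
    ApplyImmediately=True
)

# Activity records are pushed to the Amazon Kinesis data stream
# named in the response, which Kinesis Data Firehose can then read.
print(response['KinesisStreamName'])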

 

 

Question 182

 

A company uses a single-node Amazon RDS for MySQL DB instance for its production database. The DB instance runs in an AWS Region in the United States. A week before a big sales event, a new maintenance update is available for the DB instance. The maintenance update is marked as required. The company wants to minimize downtime for the DB instance and asks a database specialist to make the DB instance highly available until the sales event ends.
Which solution will meet these requirements?

 

A. 

Defer the maintenance update until the sales event is over.

 

B. 

Create a read replica with the latest update. Initiate a failover before the sales event.

 

C. 

Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.

 

D. 

Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

 

 

 

 

Question 183

 

A company is migrating a database in an Amazon RDS for SQL Server DB instance from one AWS Region to another. The company wants to minimize database downtime during the migration.
Which strategy should the company choose for this cross-Region migration?

 

A. 

Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.

 

B. 

Back up the source database using native backup to an Amazon S3 bucket in the same Region. Use Amazon S3 Cross-Region Replication to copy the backup to an S3 bucket in the target Region. Then restore the backup in the target Region.

 

C. 

Configure AWS Database Migration Service (AWS DMS) to replicate data between the source and the target databases. Once the replication is in sync, terminate the DMS task.

 

D. 

Add an RDS for SQL Server cross-Region read replica in the target Region. Once the replication is in sync, promote the read replica to master.

 

https://aws.amazon.com/blogs/database/cross-region-disaster-recovery-of-amazon-rds-for-sql-server/

 

Question 184

 

A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled. The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period.
How should a database specialist recover the database to the most recent point before corruption?

 

A. 

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

 

B. 

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html

 

You can restore a DB instance to a specific point in time, creating a new DB instance.

When you restore a DB instance to a point in time, you can choose the default virtual private cloud (VPC) security group. Or you can apply a custom VPC security group to your DB instance.

Restored DB instances are automatically associated with the default DB parameter and option groups. However, you can apply a custom parameter group and option group by specifying them during a restore.

If the source DB instance has resource tags, RDS adds the latest tags to the restored DB instance.
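As a rough illustration of the point-in-time restore described above, here is a minimal boto3 sketch; the instance identifiers and the restore timestamp are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Restore to the moment just before the corruption. The restore always
# creates a NEW DB instance, so the application connection string must
# be changed to point at the new endpoint.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier='prod-mysql',
    TargetDBInstanceIdentifier='prod-mysql-restored',
    RestoreTime='2023-04-28T02:15:00Z'
)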

 

C. 

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

 

D. 

Restore using the appropriate automated backup. No changes to the application connection string are required.

Question 185

 

A database specialist is designing an application to answer one-time queries. The application will query complex customer data and provide reports to end users. These reports can include many fields. The database specialist wants to give users the ability to query the database by using any of the provided fields. The database's traffic volume will be high but variable during peak times. However, the database will not have much traffic at other times during the day.
Which solution will meet these requirements MOST cost-effectively?

 

A. 

Amazon DynamoDB with provisioned capacity mode and auto scaling

 

B. 

Amazon DynamoDB with on-demand capacity mode

 

C. 

Amazon Aurora with auto scaling enabled

 

D. 

Amazon Aurora in a serverless mode

 

Question 186

 

A financial services company runs an on-premises MySQL database for a critical application. The company is dissatisfied with its current database disaster recovery (DR) solution. The application experiences a significant amount of downtime whenever the database fails over to its DR facility. The application also experiences slower response times when reports are processed on the same database. To minimize the downtime in DR situations, the company has decided to migrate the database to AWS. The company requires a solution that is highly available and the most cost-effective.
Which solution meets these requirements?

 

A. 

Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the replica instance endpoint and report queries to reference the primary DB instance endpoint.

 

B. 

Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

 

C. 

Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the cluster endpoint and report queries to reference the reader endpoint.

 

https://aws.amazon.com/fr/about-aws/whats-new/2016/09/reader-end-point-for-amazon-aurora/

 

You can now connect to all the read replicas on your Amazon Aurora cluster through a single reader endpoint. Until now, you could use the cluster endpoint to connect to the primary instance in the cluster or instance endpoints to direct queries to specific instances on your Aurora cluster.

 

D. 

Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

 

 

 

Question 187

 

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key. While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.
What should the database specialist do to address the issue?

 

A. 

Change the data model to avoid hot partitions in the global secondary index.

 

B. 

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

 

C. 

Modify the table to use on-demand capacity instead of provisioned capacity.

 

D. 

Increase the number of retries on the bulk loading application.

 

https://repost.aws/knowledge-center/dynamodb-table-throttled

 

Question 188

 

A company has an application that uses an Amazon DynamoDB table as its data store. During normal business days, the throughput requirements from the application are uniform and consist of 5 standard write calls per second to the DynamoDB table. Each write call has 2 KB of data.
For 1 hour each day, the company runs an additional automated job on the DynamoDB table that makes 20 write requests per second. No other application writes to the DynamoDB table. The DynamoDB table does not have to meet any additional capacity requirements.
How should a database specialist configure the DynamoDB table's capacity to meet these requirements MOST cost-effectively?

 

A. 

Use DynamoDB provisioned capacity with 5 WCUs and auto scaling.

 

B. 

Use DynamoDB provisioned capacity with 5 WCUs and a write-through cache that DynamoDB Accelerator (DAX) provides.

 

C. 

Use DynamoDB provisioned capacity with 10 WCUs and auto scaling.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html

One write request unit represents one write for an item up to 1 KB in size. If you need to write an item that is larger than 1 KB, DynamoDB needs to consume additional write request units. Transactional write requests require 2 write request units to perform one write for items up to 1 KB. The total number of write request units required depends on the item size. For example, if your item size is 2 KB, you require 2 write request units to sustain one write request or 4 write request units for a transactional write request.
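Applying that sizing rule to this question's numbers, as a sketch of the arithmetic rather than an API call:

import math

item_size_kb = 2
wcu_per_write = math.ceil(item_size_kb / 1.0)   # a 2 KB standard write consumes 2 WCUs

baseline_wcus = 5 * wcu_per_write    # 5 writes/s all day -> 10 WCUs
job_wcus = 20 * wcu_per_write        # the daily 1-hour job alone -> 40 WCUs

# Provisioning 10 WCUs covers the steady load; auto scaling can absorb
# the short daily spike without paying for 40+ WCUs around the clock.
print(baseline_wcus, job_wcus)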

 

D. 

Use DynamoDB provisioned capacity with 10 WCUs and no auto scaling.

 

 

 

 

Question 189

 

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond latency requirement. The database for the service will be the system of record for invoicing data.
Which database solution meets these requirements at the LOWEST cost?

 

A. 

Amazon Neptune

 

B. 

Amazon Aurora PostgreSQL Serverless

 

C. 

Amazon RDS for PostgreSQL

 

D. Amazon DynamoDB

 

https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-dynamodb-announces-a-monthly-service-level-agreement/

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. Today, AWS announced the release of a DynamoDB service level agreement (SLA), which promises a stronger availability commitment with no scheduled downtime. AWS will use commercially reasonable efforts to make DynamoDB available for each AWS Region, during any monthly billing cycle, of at least 99.99% (the “Service Commitment”) and as described on Amazon DynamoDB Service Level Agreement. If all of your DynamoDB tables in the applicable AWS region are part of Global Tables, the availability promise will be at least 99.999%.

 

Question 190

 

Application developers have reported that an application is running slower as more users are added. The application database is running on an Amazon Aurora DB cluster with an Aurora Replica. The application is written to take advantage of read scaling through reader endpoints. A database specialist looks at the performance metrics of the database and determines that, as new users were added to the database, the primary instance CPU utilization steadily increased while the Aurora Replica CPU utilization remained steady.
How can the database specialist improve database performance while ensuring minimal downtime?

 

A. 

Modify the Aurora DB cluster to add more replicas until the overall load stabilizes. Then, reduce the number of replicas once the application meets service level objectives.

 

B. 

Modify the primary instance to a larger instance size that offers more CPU capacity.

 

C. 

Modify a replica to a larger instance size that has more CPU capacity. Then, promote the modified replica.

 

D. 

Restore the Aurora DB cluster to one that has an instance size with more CPU capacity. Then, swap the names of the old and new DB clusters.

 

 

 

 

Question 191

 

A company's development team needs to have production data restored in a staging AWS account. The production database is running on an Amazon RDS for PostgreSQL Multi-AZ DB instance, which has AWS KMS encryption enabled using the default KMS key. A database specialist planned to share the most recent automated snapshot with the staging account, but discovered that the option to share snapshots is disabled in the AWS Management Console.
What should the database specialist do to resolve this?

 

A. 

Disable automated backups in the DB instance. Share both the automated snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account and enable automated backups.

 

B. 

Copy the automated snapshot specifying a custom KMS encryption key. Share both the copied snapshot and the custom KMS encryption key with the staging account. Restore the snapshot to the staging account within the same Region.

 

https://repost.aws/knowledge-center/rds-snapshots-share-account

You can share manual DB snapshots with up to 20 AWS accounts. You can start or stop sharing manual snapshots by using the Amazon RDS console, except for the following limitations:

·         You can't share automated Amazon RDS snapshots with other AWS accounts. To share an automated snapshot, copy the snapshot to make a manual version, and then share that copy.

·         You can't share manual snapshots of DB instances that use custom option groups with persistent or permanent options. For example, this includes Transparent Data Encryption (TDE) and time zone.

·         You can share encrypted manual snapshots that don't use the default Amazon RDS encryption key. But you must first share the AWS Key Management Service (AWS KMS) key with the account that you want to share the snapshot with. To share the key with another account, share the AWS Identity and Access Management (IAM) policy with the primary and secondary accounts. You can't restore shared encrypted snapshots directly from the destination account. First, copy the snapshot to the destination account by using an AWS KMS key in the destination account. Then, restore the copied snapshot.

·         To share snapshots that use the default AWS managed key for Amazon RDS (aws/rds), encrypt the snapshot by copying it with a customer managed key. Then, share the newly created snapshot.

·         You can share snapshots across AWS Regions. First share the snapshot, and then copy the snapshot to the same Region in the destination account. Then, copy the snapshot to another Region.
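As a rough illustration of the copy-and-share flow in the bullets above, here is a minimal boto3 sketch; the snapshot names, key alias, and account ID are hypothetical placeholders, and the customer managed KMS key must also be shared with the staging account through its key policy.

import boto3

rds = boto3.client('rds')

# 1. Copy the automated snapshot into a manual snapshot encrypted with a
#    customer managed KMS key (placeholder identifiers).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier='rds:prod-postgres-2023-04-28-06-00',
    TargetDBSnapshotIdentifier='prod-postgres-shareable',
    KmsKeyId='alias/shared-snapshot-key'
)

# 2. Share the manual copy with the staging account (placeholder ID).
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier='prod-postgres-shareable',
    AttributeName='restore',
    ValuesToAdd=['222233334444']
)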

C. 

Modify the DB instance to use a custom KMS encryption key. Share both the automated snapshot and the custom KMS encryption key with the staging account. Restore the snapshot in the staging account.

 

D. 

Copy the automated snapshot while keeping the default KMS key. Share both the snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account.

 

 

Question 192

 

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.
Which solution meets these requirements?

 

A. 

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

B. 

Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

 

C. 

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.

 

D. 

Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.

 

https://repost.aws/knowledge-center/aurora-serverless-logs-enable-view

 

Question 193

 

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message: error: Spectrum Scan Error: Access throttled
Which solution will resolve this error?

 

A. 

Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB

 

B. 

Reduce the number of queries that users can run in parallel.

 

C. 

Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size.

 

D. 

Review and optimize queries that submit a large aggregation step to Redshift Spectrum.

 

https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-troubleshooting.html

 

Question 194

 

A company's applications store data in Amazon Aurora MySQL DB clusters. The company has separate AWS accounts for its production, test, and development environments. To test new functionality in the test environment, the company's development team requires a copy of the production database four times a day.
Which solution meets this requirement with the MOST operational efficiency?

 

A. 

Take a manual snapshot in the production account. Share the snapshot with the test account. Restore the database from the snapshot.

 

B. 

Take a manual snapshot in the production account. Export the snapshot to Amazon S3. Copy the snapshot to an S3 bucket in the test account. Restore the database from the snapshot.

 

C. 

Share the Aurora DB cluster with the test account. Create a snapshot of the production database in the test account. Restore the database from the snapshot.

 

D. 

Share the Aurora DB cluster with the test account. Create a clone of the production database in the test account.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html#Aurora.Managing.Clone.Cross-Account

For example, you might need to regularly share a clone of your financial database with your organization's internal auditing team. In this case, your auditing team has its own AWS account for the applications that it uses. You can give the auditing team's AWS account the permission to access your Aurora DB cluster and clone it as needed.

On the other hand, if an outside vendor audits your financial data you might prefer to create the clone yourself. You then give the outside vendor access to the clone only.

You can also use cross-account cloning to support many of the same use cases for cloning within the same AWS account, such as development and testing. For example, your organization might use different AWS accounts for production, development, testing, and so on.
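Aurora cloning is performed as a copy-on-write point-in-time restore. The following boto3 sketch shows the call a test account could make after the production cluster has been shared with it through AWS RAM; the identifiers are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Clone the shared production cluster using copy-on-write storage.
# The source cluster ARN and target identifier are placeholders.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier='prod-clone-for-testing',
    SourceDBClusterIdentifier='arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora-mysql',
    RestoreType='copy-on-write',
    UseLatestRestorableTime=True
)
# A DB instance still needs to be added to the restored cluster
# (create_db_instance) before it can accept connections.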

 

Question 195

 

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.
What should the database specialist do to meet these requirements?

 

A. 

Create a read replica of the DB instance. Configure the reports to connect to the replication instance endpoint.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
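As a minimal boto3 sketch of this approach, the following creates a read replica whose endpoint the reporting dashboard can use; the instance identifiers are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Create a read replica of the production instance (placeholder names).
# Once available, point the reports at the replica's endpoint while the
# application keeps writing to the source instance.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier='app-mysql-replica',
    SourceDBInstanceIdentifier='app-mysql'
)
print(replica['DBInstance']['DBInstanceIdentifier'])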


 

B. 

Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.

 

C. 

Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.

 

D. 

Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

 

Question 196

 

A company is loading sensitive data into an Amazon Aurora MySQL database. To meet compliance requirements, the company needs to enable audit logging on the Aurora MySQL DB cluster to audit database activity. This logging will include events such as connections, disconnections, queries, and tables queried. The company also needs to publish the DB logs to Amazon CloudWatch to perform real-time data analysis.
Which solution meets these requirements?

 

 

 

A. 

Modify the default option group parameters to enable Advanced Auditing. Restart the database for the changes to take effect.

 

B. 

Create a custom DB cluster parameter group. Modify the parameters for Advanced Auditing. Modify the cluster to associate the new custom DB parameter group with the Aurora MySQL DB cluster.

 

C. 

Take a snapshot of the database. Create a new DB instance, and enable custom auditing and logging to CloudWatch. Deactivate the DB instance that has no logging.

 

D. 

Enable AWS CloudTrail for the DB instance. Create a filter that provides only connections, disconnections, queries, and tables queried.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Auditing.html
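As a rough sketch of this approach using boto3, the following creates a custom cluster parameter group, turns on Advanced Auditing, and publishes the audit log to CloudWatch; the group name, parameter group family, and cluster identifier are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Custom DB cluster parameter group with Advanced Auditing enabled
# (placeholder names; pick the family matching your engine version).
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName='aurora-audit-pg',
    DBParameterGroupFamily='aurora-mysql8.0',
    Description='Advanced Auditing enabled'
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName='aurora-audit-pg',
    Parameters=[
        {'ParameterName': 'server_audit_logging', 'ParameterValue': '1', 'ApplyMethod': 'immediate'},
        {'ParameterName': 'server_audit_events', 'ParameterValue': 'CONNECT,QUERY', 'ApplyMethod': 'immediate'},
    ]
)

# Associate the group with the cluster and export the audit log to
# Amazon CloudWatch Logs for real-time analysis.
rds.modify_db_cluster(
    DBClusterIdentifier='sensitive-data-cluster',
    DBClusterParameterGroupName='aurora-audit-pg',
    CloudwatchLogsExportConfiguration={'EnableLogTypes': ['audit']}
)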

 

Question 197

 

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.
How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

 

A. 

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.

 

B. 

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.

 

C. 

Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.

 

D. 

Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html

 

Amazon RDS supports native backup and restore for Microsoft SQL Server databases using full backup files (.bak files). When you use RDS, you access files stored in Amazon S3 rather than using the local file system on the database server.

For example, you can create a full backup from your local server, store it on S3, and then restore it onto an existing Amazon RDS DB instance. You can also make backups from RDS, store them on S3, and then restore them wherever you want.

Native backup and restore is available in all AWS Regions for Single-AZ and Multi-AZ DB instances, including Multi-AZ DB instances with read replicas. Native backup and restore is available for all editions of Microsoft SQL Server supported on Amazon RDS.
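To enable native backup and restore, the DB instance needs an option group with the SQLSERVER_BACKUP_RESTORE option and an IAM role that can read the backup from S3. A minimal boto3 sketch follows; the names, engine version, and role ARN are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Option group that enables native backup/restore from S3
# (placeholder names, version, and role ARN).
rds.create_option_group(
    OptionGroupName='sqlserver-native-restore',
    EngineName='sqlserver-se',
    MajorEngineVersion='15.00',
    OptionGroupDescription='Enables SQLSERVER_BACKUP_RESTORE'
)
rds.modify_option_group(
    OptionGroupName='sqlserver-native-restore',
    OptionsToInclude=[{
        'OptionName': 'SQLSERVER_BACKUP_RESTORE',
        'OptionSettings': [{'Name': 'IAM_ROLE_ARN',
                            'Value': 'arn:aws:iam::111122223333:role/rds-s3-backup-restore'}]
    }],
    ApplyImmediately=True
)
# After attaching the option group to the instance, the restore itself is
# started from a SQL client with msdb.dbo.rds_restore_database.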

 


 

Question 198

 

A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected.
What is the MOST likely reason that the items are not being deleted?

 

A. 

The TTL attribute's value is set as a Number data type.

 

B. 

The TTL attribute's value is set as a Binary data type.

 

C. 

The TTL attribute's value is a timestamp in the Unix epoch time format in seconds.

 

D. 

The TTL attribute's value is set with an expiration of 1 year.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-before-you-start.html

Formatting an item’s TTL attribute

When enabling TTL on a table, DynamoDB requires you to identify a specific attribute name that the service will look for when determining if an item is eligible for expiration. In addition, further requirements ensure that the background TTL processes use the value of the TTL attribute. If an item is to be eligible for expiration via TTL:

·         The item must contain the attribute specified when TTL was enabled on the table. For example, if you specify for a table to use the attribute name expdate as the TTL attribute, but an item does not have an attribute with that name, the TTL process ignores the item.

·         The TTL attribute’s value must be a top-level Number data type. For example, if you specify for a table to use the attribute name expdate as the TTL attribute, but the attribute on an item is a String data type, the TTL processes ignore the item.

·         The TTL attribute’s value must be a timestamp in Unix epoch time format in seconds. If you use any other format, the TTL processes ignore the item. For example, if you set the value of the attribute to 1645119622, that is Thursday, February 17, 2022 17:40:22 (GMT), the item will be expired after that time. For help formatting your epoch timestamps, you can use third-party tools such as Epoch Converter to get a visual web form.

·         The TTL attribute value must be a datetimestamp with an expiration of no more than five years in the past. For example, if you set the value of the attribute to 1171734022, that is February 17, 2007 17:40:22 (GMT) and older than five years. As a result, the TTL processes will not expire that item.
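Putting the formatting rules above together, here is a minimal boto3 sketch that writes an item whose TTL attribute satisfies them; the table, key, and attribute names are hypothetical placeholders.

import time

import boto3

ddb = boto3.client('dynamodb')

# TTL value: a Number in Unix epoch seconds, here one year in the future.
expire_at = int(time.time()) + 365 * 24 * 60 * 60

ddb.put_item(
    TableName='SensorData',
    Item={
        'DeviceId': {'S': 'sensor-42'},
        'ExpireAt': {'N': str(expire_at)}   # Number type, epoch seconds
    }
)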

 

 

Question 199

 

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances. Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary.
Which changes will reduce the lag? (Choose two.)

 

A. 

Deploy two additional read replicas matching the existing replica DB instance size.

 

B. 

Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.

 

C. 

Move the read replicas to the same Availability Zone as the primary DB instance.

 

D. 

Increase the instance size of the primary DB instance within the same instance class.

 

E. 

Increase the instance size of the read replicas to the same size and class as the primary DB instance.

 

Question 200

 

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.
Which step can be taken to ensure that the application is not interrupted?

 

A. 

Disable weekly maintenance on the DB cluster.

 

B. 

Clone the DB cluster and migrate it to a new copy of the database.

 

C. 

Choose to defer the upgrade and then find an appropriate down time for patching.

 

D. 

Set up an Aurora Replica and promote it to primary at the time of patching.

 

Question 201

 

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save logs to Amazon S3.
How can a database specialist activate logging on the database?

 

A. 

Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

 

B. 

Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

 

C. 

Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.

 

D. 

Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.

 

https://aws.amazon.com/about-aws/whats-new/2021/04/you-now-can-use-aws-cloudtrail-to-log-amazon-dynamodb-streams-da/?nc1=h_ls

 

You now can use AWS CloudTrail to log Amazon DynamoDB Streams data-plane APIs—GetRecords and GetShardIterator—to monitor and investigate item-level changes in your DynamoDB tables. Previously, you could use CloudTrail to log DynamoDB Streams control-plane activity (and not data-plane activity) on your DynamoDB tables.

With CloudTrail data-plane logging, you can record all API activity on DynamoDB, and receive detailed information such as the AWS Identity and Access Management (IAM) user or role that made a request, the time of the request, and the accessed table. To configure data-plane events for DynamoDB, in the CloudTrail console or with the AWS CLI or AWS API, specify DynamoDB as the data event type and then choose the DynamoDB tables for which you want CloudTrail to record data-plane API activity. When you enable data-plane logging on your DynamoDB table, the stream's data plane APIs are logged automatically in CloudTrail. You also can configure whether read-only, write-only, or both types of events are captured for the trail. All DynamoDB data events are delivered to an Amazon S3 bucket and Amazon CloudWatch Events, creating an audit log of data access so that you can respond to events recorded by CloudTrail.
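As a rough illustration of configuring data events for a table on an existing trail, here is a minimal boto3 sketch; the trail name and table ARN are hypothetical placeholders.

import boto3

cloudtrail = boto3.client('cloudtrail')

# Record both management (control-plane) and data-plane events for one
# DynamoDB table (placeholder trail name and table ARN).
cloudtrail.put_event_selectors(
    TrailName='payments-audit-trail',
    EventSelectors=[{
        'ReadWriteType': 'All',
        'IncludeManagementEvents': True,
        'DataResources': [{
            'Type': 'AWS::DynamoDB::Table',
            'Values': ['arn:aws:dynamodb:us-east-1:111122223333:table/Payments']
        }]
    }]
)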

 

Question 202

 

A vehicle insurance company needs to choose a highly available database to track vehicle owners and their insurance details. The persisted data should be immutable in the database, including the complete and sequenced history of changes over time with all the owners and insurance transfer details for a vehicle. The data should be easily verifiable for the data lineage of an insurance claim.
Which approach meets these requirements with MINIMAL effort?

 

A. 

Create a blockchain to store the insurance details. Validate the data using a hash function to verify the data lineage of an insurance claim.

 

B. 

Create an Amazon DynamoDB table to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.

 

C. 

Create an Amazon QLDB ledger to store the insurance details. Validate the data by choosing the ledger name in the digest request to verify the data lineage of an insurance claim.

 

D. 

Create an Amazon Aurora database to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.

 

https://aws.amazon.com/qldb/

Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log.
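Requesting a digest for verification is a single API call; the following boto3 sketch shows it, with a hypothetical ledger name as a placeholder.

import boto3

qldb = boto3.client('qldb')

# Request the ledger digest used to cryptographically verify the data
# lineage of a document revision (placeholder ledger name).
digest = qldb.get_digest(Name='vehicle-insurance-ledger')
print(digest['Digest'], digest['DigestTipAddress'])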

 

Question 203

 

A company stores session history for its users in an Amazon DynamoDB table. The company has a large user base and generates large amounts of session data. Teams analyze the session data for 1 week, and then the data is no longer needed. A database specialist needs to design an automated solution to purge session data that is more than 1 week old.
Which strategy meets these requirements with the MOST operational efficiency?

 

A. 

Create an AWS Step Functions state machine with a DynamoDB DeleteItem operation that uses the ConditionExpression parameter to delete items older than a week. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that runs the Step Functions state machine on a weekly basis.

 

B. 

Create an AWS Lambda function to delete items older than a week from the DynamoDB table. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that triggers the Lambda function on a weekly basis.

 

C. 

Enable Amazon DynamoDB Streams on the table. Use a stream to invoke an AWS Lambda function to delete items older than a week from the DynamoDB table

 

D. 

Enable TTL on the DynamoDB table and set a Number data type as the TTL attribute. DynamoDB will automatically delete items that have a TTL that is less than the current time.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html

Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.

TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:

·         Remove user or sensor data after one year of inactivity in an application.

·         Archive expired items to an Amazon S3 data lake via Amazon DynamoDB Streams and AWS Lambda.

·         Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
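For this question's session-purging scenario, a minimal boto3 sketch of the TTL approach follows; the table and attribute names are hypothetical placeholders.

import time

import boto3

ddb = boto3.client('dynamodb')

# One-time setup: tell DynamoDB which attribute holds the expiry time
# (placeholder table and attribute names).
ddb.update_time_to_live(
    TableName='SessionHistory',
    TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'ExpireAt'}
)

# Each session item is then written with an expiry one week in the future;
# DynamoDB deletes it automatically without consuming write throughput.
ddb.put_item(
    TableName='SessionHistory',
    Item={
        'SessionId': {'S': 'abc-123'},
        'ExpireAt': {'N': str(int(time.time()) + 7 * 24 * 60 * 60)}
    }
)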

 

 

Question 204

 

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS. After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.
Which solution will meet this requirement?

 

A. 

Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.

 

https://aws.amazon.com/about-aws/whats-new/2022/08/amazon-rds-mysql-supports-ssl-tls-connections/

 

Amazon RDS for MySQL supports encrypted SSL/TLS connections to the database instances. Starting today, you can enforce SSL/TLS client connections to your RDS for MySQL database instance for enhanced transport layer security. To enforce SSL/TLS, simply enable the require_secure_transport parameter (disabled by default) through the Amazon RDS Management Console, the AWS CLI or the API. When the require_secure_transport parameter is enabled, a database client will be able to connect to the RDS for MySQL instance only if it can establish an encrypted connection.
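As a minimal boto3 sketch of enforcing this, the following sets require_secure_transport on a custom parameter group attached to the instance; the group name is a hypothetical placeholder.

import boto3

rds = boto3.client('rds')

# Enforce TLS for all client connections (placeholder group name).
rds.modify_db_parameter_group(
    DBParameterGroupName='mysql-prod-secure',
    Parameters=[{
        'ParameterName': 'require_secure_transport',
        'ParameterValue': '1',        # dynamic parameter; no reboot needed
        'ApplyMethod': 'immediate'
    }]
)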

 

B. 

Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.

 

C. 

Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.

 

D. 

Update the DB instance, and enable the Require Transport Layer Security option.

 

Question 205

 

A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX). The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads.
What should the database specialist review first to improve the utility of DAX?

 

A. 

The DynamoDB ConsumedReadCapacityUnits metric

 

B. 

The trust relationship to perform the DynamoDB API calls

 

C. 

The DAX cluster's TTL setting

 

D. 

The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.cluster-management.html#DAX.cluster-management.custom-settings.ttl

 

Question 206

 

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.
Which solution will MOST improve the performance of the data migration?

 

A. 

Increase the number of tables that are loaded in parallel.

 

B. 

Drop all indexes on the source tables.

 

C. 

Change the processing mode from the batch optimized apply option to transactional mode.

 

D. 

Enable Multi-AZ on the target database while the full load task is in progress.

 

Question 207

 

A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime.
Which solution will meet these requirements?

 

A. 

Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.

 

B. 

Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

 

C. 

Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.

 

D. 

Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html

 

Limitations of Amazon Aurora encrypted DB clusters

The following limitations exist for Amazon Aurora encrypted DB clusters:

·         You can't turn off encryption on an encrypted DB cluster.

·         You can't create an encrypted snapshot of an unencrypted DB cluster.

·         A snapshot of an encrypted DB cluster must be encrypted using the same KMS key as the DB cluster.

·         You can't convert an unencrypted DB cluster to an encrypted one. However, you can restore an unencrypted snapshot to an encrypted Aurora DB cluster. To do this, specify a KMS key when you restore from the unencrypted snapshot.

·         You can't create an encrypted Aurora Replica from an unencrypted Aurora DB cluster. You can't create an unencrypted Aurora Replica from an encrypted Aurora DB cluster.

·         To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key in the destination AWS Region. This is because KMS keys are specific to the AWS Region that they are created in.

The source snapshot remains encrypted throughout the copy process. Amazon Aurora uses envelope encryption to protect data during the copy process. For more information about envelope encryption, see Envelope encryption in the AWS Key Management Service Developer Guide.

·         You can't unencrypt an encrypted DB cluster. However, you can export data from an encrypted DB cluster and import the data into an unencrypted DB cluster.
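Given those limitations, the snapshot-and-restore path is the supported route. A minimal boto3 sketch follows; the cluster and snapshot identifiers are hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Restore the unencrypted snapshot into a new, encrypted cluster by
# specifying a KMS key on the restore (placeholder identifiers).
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier='aurora-pg-encrypted',
    SnapshotIdentifier='aurora-pg-prod-snapshot',
    Engine='aurora-postgresql',
    KmsKeyId='alias/aws/rds'
)
# Connection strings are then updated to the new cluster endpoint, and the
# unencrypted cluster is deleted.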

 

 

Question 208

 

A company has a 4 TB on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.
Which solution meets these requirements?

 

A. 

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

 

B. 

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

 

C. 

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

 

D. 

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.

 

Question 209

 

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual.
What should the database specialist do to improve the performance of the application immediately?

 

A. 

Increase the Provisioned IOPS rate on the storage.

 

B. 

Increase the available storage space.

 

C. 

Use General Purpose SSD (gp2) storage with burst credits.

 

D. 

Create a read replica to offload Read IOPS from the DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html

 

Question 210

 

A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user.
What is the MOST operationally efficient way to restore the default permissions of the master user?

 

A. 

Modify the DB instance and set a new master user password.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.MasterAccounts.html

When you create a new DB instance, the default master user that you use gets certain privileges for that DB instance. You can't change the master user name after the DB instance is created.

Important

We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

Note

If you accidentally delete the permissions for the master user, you can restore them by modifying the DB instance and setting a new master user password.
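That note maps to a single API call; here is a minimal boto3 sketch, with the instance identifier and password as hypothetical placeholders.

import boto3

rds = boto3.client('rds')

# Setting a new master user password restores the master user's default
# privileges (placeholder identifier and password).
rds.modify_db_instance(
    DBInstanceIdentifier='prod-mysql',
    MasterUserPassword='REPLACE_WITH_A_STRONG_PASSWORD',
    ApplyImmediately=True
)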

 

B. 

Use AWS Secrets Manager to modify the master user password and restart the DB instance.

 

C. 

Create a new master user for the DB instance.

 

D. 

Review the IAM user that owns the DB instance, and add missing permissions.

 

Question 211

 

A company is launching a new Amazon RDS for MySQL Multi-AZ DB instance to be used as a data store for a custom-built application. After a series of tests with point-in-time recovery disabled, the company decides that it must have point-in-time recovery re-enabled before using the DB instance to store production data.
What should a database specialist do so that point-in-time recovery can be successful?

 

A. 

Enable binary logging in the DB parameter group used by the DB instance.

 

B. 

Modify the DB instance and enable audit logs to be pushed to Amazon CloudWatch Logs.

 

C. 

Modify the DB instance and configure a backup retention period

 

D. 

Set up a scheduled job to create manual DB instance snapshots.

 

Question 212

 

A company has a database fleet that includes an Amazon RDS for MySQL DB instance. During an audit, the company discovered that the data that is stored on the DB instance is unencrypted.
A database specialist must enable encryption for the DB instance. The database specialist also must encrypt all connections to the DB instance.
Which combination of actions should the database specialist take to meet these requirements? (Choose three.)

 

A. 

In the RDS console, choose "Enable encryption" to encrypt the DB instance by using an AWS Key Management Service (AWS KMS) key.

 

B. 

Encrypt the read replica of the unencrypted DB instance by using an AWS Key Management Service (AWS KMS) key. Fail over the read replica to the primary DB instance.

 

C. 

Create a snapshot of the unencrypted DB instance. Encrypt the snapshot by using an AWS Key Management Service (AWS KMS) key. Restore the DB instance from the encrypted snapshot. Delete the original DB instance.

 

D. 

Require SSL connections for applicable database user accounts.

 

E. Use SSL/TLS from the application to encrypt a connection to the DB instance.

 

F. Enable SSH encryption on the DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

 

Question 213

 

A company has an ecommerce website that runs on AWS. The website uses an Amazon RDS for MySQL database. A database specialist wants to enforce the use of temporary credentials to access the database.
Which solution will meet this requirement?

 

A. 

Use MySQL native database authentication.

 

B. 

Use AWS Secrets Manager to rotate the credentials.

 

C. 

Use AWS Identity and Access Management (IAM) database authentication.

 

D. 

Use AWS Systems Manager Parameter Store for authentication.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.html

 

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Amazon RDS resources. IAM is an AWS service that you can use with no additional charge.

 

Question 214

 

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.
Which action will improve query performance with the LEAST operational effort?

 

A. 

Migrate the database to a new Amazon Redshift data warehouse.

 

B. 

Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

 

C. 

Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

 

D. 

Add an Aurora read replica.

Question 215

 

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.
Which solution will meet these requirements?

 

A. 

Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

 

B. 

Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

 

https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/ec2-fci.html

To deploy an FCI on AWS, you can use FSx for Windows File Server, which provides fully managed shared file storage. This service automatically replicates the storage synchronously across two Availability Zones to provide high availability. Using FSx for Windows File Server for file storage helps simplify and optimize your SQL Server high availability deployments on Amazon EC2.

 

C. 

Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

 

D. 

Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

 

 

Question 216

 

A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that the users of the RDS for SQL Server DB instance are authenticated with on-premises Microsoft Active Directory credentials.
Which combination of steps should a database specialist take to meet this requirement? (Choose three.)

 

A. 

Extend the on-premises Active Directory to AWS by using AD Connector.

 

B. 

Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

 

C. 

Create a directory by using AWS Directory Service for Microsoft Active Directory.

 

D. 

Create an Active Directory domain controller on Amazon EC2.

 

E. 

Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

 

F. 

Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.

 

Question 217

 

A company is developing an application that performs intensive in-memory operations on advanced data structures such as sorted sets. The application requires sub-millisecond latency for reads and writes. The application occasionally must run a group of commands as an ACID-compliant operation. A database specialist is setting up the database for this application. The database specialist needs the ability to create a new database cluster from the latest backup of the production cluster.
Which type of cluster should the database specialist create to meet these requirements?

 

A. 

Amazon ElastiCache for Memcached

 

B. 

Amazon Neptune

 

C. 

Amazon ElastiCache for Redis

 

https://aws.amazon.com/elasticache/redis-vs-memcached/

 

D. 

Amazon DynamoDB Accelerator (DAX)

 

 

Question 218

 

A company uses Amazon Aurora MySQL as the primary database engine for many of its applications. A database specialist must create a dashboard to provide the company with information about user connections to databases. According to compliance requirements, the company must retain all connection logs for at least 7 years.
Which solution will meet these requirements MOST cost-effectively?

 

A. 

Enable advanced auditing on the Aurora cluster to log CONNECT events. Export audit logs from Amazon CloudWatch to Amazon S3 by using an AWS Lambda function that is invoked by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Build a dashboard by using Amazon QuickSight.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Auditing.html

 

B. 

Capture connection attempts to the Aurora cluster with AWS Cloud Trail by using the DescribeEvents API operation. Create a CloudTrail trail to export connection logs to Amazon S3. Build a dashboard by using Amazon QuickSight.

 

C. 

Start a database activity stream for the Aurora cluster. Push the activity records to an Amazon Kinesis data stream. Build a dynamic dashboard by using AWS Lambda.

 

D. 

Publish the DatabaseConnections metric for the Aurora DB instances to Amazon CloudWatch. Build a dashboard by using CloudWatch dashboards.

 

Question 219

 

A company requires near-real-time notifications when changes are made to Amazon RDS DB security groups.
Which solution will meet this requirement with the LEAST operational overhead?

 

A. Configure an RDS event notification subscription for DB security group events.

 

B. Create an AWS Lambda function that monitors DB security group changes. Create an Amazon Simple Notification Service (Amazon SNS) topic for notification.

 

C. Turn on AWS CloudTrail. Configure notifications for the detection of changes to DB security groups.

 

D. Configure an Amazon CloudWatch alarm for RDS metrics about changes to DB security groups.

 

Question 220

 

A development team asks a database specialist to create a copy of a production Amazon RDS for MySQL DB instance every morning. The development team will use the copied DB instance as a testing environment for development. The original DB instance and the copy will be hosted in different VPCs of the same AWS account. The development team wants the copy to be available by 6 AM each day and wants to use the same endpoint address each day.
Which combination of steps should the database specialist take to meet these requirements MOST cost-effectively? (Choose three.)

 

A. 

Create a snapshot of the production database each day before the 6 AM deadline.

 

B. 

Create an RDS for MySQL DB instance from the snapshot. Select the desired DB instance size.

 

C. 

Update a defined Amazon Route 53 CNAME record to point to the copied DB instance.

 

D. 

Set up an AWS Database Migration Service (AWS DMS) migration task to copy the snapshot to the copied DB instance.

 

E. 

Use the CopySnapshot action on the production DB instance to create a snapshot before 6 AM.

 

F. 

Update a defined Amazon Route 53 alias record to point to the copied DB instance.
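
A minimal boto3 sketch of how options A, B, and C fit together, using hypothetical identifiers and a hypothetical hosted zone. A CNAME record is used because Route 53 alias records cannot target RDS endpoints.

import boto3

rds = boto3.client("rds")
route53 = boto3.client("route53")

# Option A: daily snapshot of the production DB instance
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="prod-mysql-daily",
)
rds.get_waiter("db_snapshot_completed").wait(DBSnapshotIdentifier="prod-mysql-daily")

# Option B: restore the copy for the development team at the desired size
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="dev-mysql-copy",
    DBSnapshotIdentifier="prod-mysql-daily",
    DBInstanceClass="db.t3.medium",
)
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="dev-mysql-copy")
endpoint = rds.describe_db_instances(DBInstanceIdentifier="dev-mysql-copy")[
    "DBInstances"][0]["Endpoint"]["Address"]

# Option C: keep a stable endpoint by repointing a CNAME at the fresh copy
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "dev-db.example.internal",
            "Type": "CNAME",
            "TTL": 60,
            "ResourceRecords": [{"Value": endpoint}],
        },
    }]},
)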

 

Question 221

 

A software company is conducting a security audit of its three-node Amazon Aurora MySQL DB cluster.
Which finding is a security concern that needs to be addressed?

 

A. 

The AWS account root user does not have the minimum privileges required for client applications.

 

B. 

Encryption in transit is not configured for all Aurora native backup processes.

 

C. 

Each Aurora DB cluster node is not in a separate private VPC with restricted access.

 

D. 

The IAM credentials used by the application are not rotated regularly.

 

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html

 

 

 

Question 222

 

A company has an AWS CloudFormation stack that defines an Amazon RDS DB instance. The company accidentally deletes the stack and loses recent data from the DB instance. A database specialist must change the CloudFormation template for the RDS resource to reduce the chance of accidental data loss from the DB instance in the future.
Which combination of actions should the database specialist take to meet this requirement? (Choose three.)

 

A. 

Set the DeletionProtection property to True.

 

B. 

Set the MultiAZ property to True.

 

C. 

Set the TerminationProtection property to True.

 

D. 

Set the DeleteAutomatedBackups property to False.

 

E. 

Set the DeletionPolicy attribute to No.

 

F. 

Set the DeletionPolicy attribute to Retain.
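
A minimal sketch of options A, D, and F as a CloudFormation resource fragment, shown here as a Python dict (the same keys apply in YAML or JSON; all property values are hypothetical). Note that TerminationProtection is a stack-level setting rather than a resource property, which is why option C does not apply here.

template_fragment = {
    "Resources": {
        "MyDBInstance": {
            "Type": "AWS::RDS::DBInstance",
            # Option F: keep the resource when the stack is deleted
            # ("Snapshot" is a valid, stricter alternative).
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": 100,
                "MasterUsername": "admin",
                # Hypothetical dynamic reference to a stored secret
                "MasterUserPassword": "{{resolve:secretsmanager:mydb-secret}}",
                # Option A: block DeleteDBInstance calls against the instance
                "DeletionProtection": True,
                # Option D: keep automated backups after deletion
                "DeleteAutomatedBackups": False,
            },
        }
    }
}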

 

Question 223

 

A company has branch offices in the United States and Singapore. The company has a three-tier web application that uses a shared database. The database runs on an Amazon RDS for MySQL DB instance that is hosted in the us-west-2 Region. The application has a distributed front end that is deployed in us-west-2 and in the ap-southeast-1 Region. The company uses this front end as a dashboard that provides statistics to sales managers in each branch office.
The dashboard loads more slowly in the Singapore branch office than in the United States branch office. The company needs a solution so that the dashboard loads consistently for users in each location.
Which solution will meet these requirements in the MOST operationally efficient way?

 

A. 

Take a snapshot of the DB instance in us-west-2. Create a new DB instance in ap-southeast-1 from the snapshot. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.

 

B. 

Create an RDS read replica in ap-southeast-1 from the primary DB instance in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica.

 

C. 

Create a new DB instance in ap-southeast-1. Use AWS Database Migration Service (AWS DMS) and change data capture (CDC) to update the new DB instance in ap-southeast-1. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance.

 

D. 

Create an RDS read replica in us-west-2, where the primary DB instance resides. Create a read replica in ap-southeast-1 from the read replica in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica in ap-southeast-1.

 

 

 

 

Question 224

 

A company is using an Amazon ElastiCache for Redis cluster to host its online shopping website. Shoppers receive the following error when the website's application queries the cluster:
(Screenshot: the Redis out-of-memory error "OOM command not allowed when used memory > 'maxmemory'"; see the knowledge-center link below.)
Which solutions will resolve this memory issue with the LEAST amount of effort? (Choose three.)

 

A. 

Reduce the TTL value for keys on the node.

 

B. 

Choose a larger node type.

 

C. 

Test different values in the parameter group for the maxmemory-policy parameter to find the ideal value to use.

 

D. 

Increase the number of nodes.

 

E. 

Monitor the EngineCPUUtilization Amazon CloudWatch metric. Create an AWS Lambda function to delete keys on nodes when a threshold is reached.

 

F. Increase the TTL value for keys on the node.

 

https://repost.aws/knowledge-center/oom-command-not-allowed-redis

 

Short description

An OOM error occurs when an ElastiCache for Redis cluster can't free any additional memory.

ElastiCache for Redis implements the maxmemory-policy that's set for the cache node’s parameter group when out of memory. The default value (volatile-lru) frees up memory by evicting keys with a set expiration time (TTL value). When a cache node doesn't have any keys with a TTL value, it returns an error instead.

To resolve this error and to prevent clients from receiving OOM command not allowed error messages, do some combination of the following:

·         Set a TTL value for keys on your node.

·         Update the parameter group to use a different maxmemory-policy parameter.

·         Delete some existing keys manually to free up memory.

·         Choose a larger node type.

Note: The exact combination of the resolutions you use depends on your particular use case.
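
As an illustration of the parameter-group resolution (option C), the sketch below uses boto3 to switch a hypothetical custom parameter group to the allkeys-lru eviction policy, which can evict any key rather than only keys that carry a TTL.

import boto3

elasticache = boto3.client("elasticache")

# Try a different eviction policy, e.g. allkeys-lru, instead of the
# volatile-lru default that only evicts keys with an expiration set.
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="redis-custom-params",   # hypothetical group
    ParameterNameValues=[{
        "ParameterName": "maxmemory-policy",
        "ParameterValue": "allkeys-lru",
    }],
)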

 

 

Question 225

 

A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.
Which combination of steps should the database specialist take to rename the database? (Choose two.)

 

A. 

Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.

 

B. 

Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.

 

C. 

Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.

 

D. 

Update the application with the new database connection string.

 

E. 

Update the DNS record for the DB instance.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.RenamingDB.html

 

Question 226

 

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance.
Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

 

A. 

Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.

 

B. 

Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.

 

C. 

Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.

 

D. 

Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.

 

E. 

Encrypt the data with client-side encryption before transferring the data to Amazon RDS.

 

https://aws.amazon.com/blogs/database/migrate-tde-enabled-sql-server-databases-to-amazon-rds-for-sql-server/

 

 

Question 227

 

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made.
What might account for this? (Choose two.)

 

A. 

The new minor version has not yet been designated as preferred and requires a manual upgrade.

 

B. 

Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.

 

C. 

Applying minor version upgrades requires sufficient free space.

 

D. 

The AWS CLI command did not include an apply-immediately parameter.

 

E. Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html

 

Question 228

 

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location. A database specialist must use encryption to ensure that the credentials are not visible in the source code.
Which solution will meet these requirements?

 

A. 

Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption.

 

B. Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager.

 

C. 

Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager.

 

D. 

Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates.
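
A minimal boto3 sketch of option C, with a hypothetical parameter name and value: write the credentials once as a KMS-encrypted SecureString, then have the application fetch and decrypt them at run time instead of hardcoding them.

import boto3

ssm = boto3.client("ssm")

# One-time setup: store the credentials encrypted with AWS KMS.
ssm.put_parameter(
    Name="/prod/app/db-credentials",                       # hypothetical
    Value='{"username": "app_user", "password": "REPLACE_ME"}',
    Type="SecureString",
    Overwrite=True,
)

# In the application code, instead of a hardcoded string:
param = ssm.get_parameter(Name="/prod/app/db-credentials", WithDecryption=True)
credentials = param["Parameter"]["Value"]   # decrypted transparently via KMS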

 

Question 229

 

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.
Which combination of steps should the company take to meet these requirements? (Choose two.)

 

A. 

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

 

B. 

Deploy an ElastiCache for Memcached global datastore.

 

C. 

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

 

D. 

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

 

E. 

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

 

https://aws.amazon.com/blogs/database/configuring-amazon-elasticache-for-redis-for-higher-availability/

 

Question 230

 

A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC.
Which solutions will reduce this latency? (Choose two.)

 

A. 

Configure the DMS task to run in full large binary object (LOB) mode.

 

B. 

Configure the DMS task to run in limited large binary object (LOB) mode.

 

C. 

Create a Multi-AZ replication instance.

 

D. 

Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions.

 

E. 

Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions.

 

https://aws.amazon.com/premiumsupport/knowledge-center/dms-high-source-latency/

Short description

You can monitor your AWS DMS task using Amazon CloudWatch metrics. During migration, you might see source latency during the ongoing replication phase—change data capture (CDC)—of an AWS DMS task. You can use the CloudWatch service metric for CDCLatencySource to monitor the source latency for an AWS DMS task. You might see source latency on an AWS DMS task if:

·         The source database has limited resources.

·         The AWS DMS replication instance has limited resources.

·         The network speed between the source database and the AWS DMS replication instance is slow.

·         AWS DMS reads new changes from the transaction logs of the source database during ongoing replication.

·         AWS DMS task settings are inadequate or large objects (LOBs) are being migrated.

·         The Oracle source database used for the AWS DMS task is using LogMiner for ongoing replication.
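
Option E can be sketched with boto3 as two CDC tasks that run in parallel on the same replication instance, each owning a disjoint set of tables. The ARNs, schema, and table names below are hypothetical.

import json
import boto3

dms = boto3.client("dms")

def cdc_task(task_id, table_name):
    # One CDC task per set of tables that do not share transactions.
    return dms.create_replication_task(
        ReplicationTaskIdentifier=task_id,
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",
        MigrationType="cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "1",
                "object-locator": {"schema-name": "SALES", "table-name": table_name},
                "rule-action": "include",
            }]
        }),
    )

cdc_task("orders-cdc", "ORDERS")        # hypothetical table sets that do not
cdc_task("inventory-cdc", "INVENTORY")  # participate in common transactions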

 

Question 231

 

An ecommerce company is running AWS Database Migration Service (AWS DMS) to replicate an on-premises Microsoft SQL Server database to Amazon RDS for SQL Server. The company has set up an AWS Direct Connect connection from its on-premises data center to AWS. During the migration, the company's security team receives an alarm that is related to the migration. The security team mandates that the DMS replication instance must not be accessible from public IP addresses.
What should a database specialist do to meet this requirement?

 

A. 

Set up a VPN connection to encrypt the traffic over the Direct Connect connection.

 

B. 

Modify the DMS replication instance by disabling the publicly accessible option.

 

C. 

Delete the DMS replication instance. Recreate the DMS replication instance with the publicly accessible option disabled.

 

https://repost.aws/knowledge-center/dms-disable-public-access

Short description

An AWS DMS replication instance can have one public IP address and one private IP address, just like an Amazon Elastic Compute Cloud (Amazon EC2) instance that has a public IP address.

To use a public IP address, choose the Publicly accessible option when you create your replication instance. Or specify the --publicly-accessible option when you create the replication instance using the AWS Command Line Interface (AWS CLI).

If you uncheck (disable) the box for Publicly accessible, then the replication instance has only a private IP address. As a result, the replication instance can communicate with a host that is in the same Amazon Virtual Private Cloud (Amazon VPC) and that can communicate with the private IP address. Or the replication instance can communicate with a host that is connected privately, for example, by VPN, VPC peering, or AWS Direct Connect.

After you create the replication instance, you can't modify the Publicly accessible option.

 

D. 

Create a new replication VPC subnet group with private subnets. Modify the DMS replication instance by selecting the newly created VPC subnet group.
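
Because the Publicly accessible option cannot be modified after creation, option C amounts to a delete-and-recreate, sketched here with boto3 under hypothetical ARNs and names.

import boto3

dms = boto3.client("dms")

dms.delete_replication_instance(
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"
)

dms.create_replication_instance(
    ReplicationInstanceIdentifier="private-repl-instance",
    ReplicationInstanceClass="dms.c5.large",
    ReplicationSubnetGroupIdentifier="private-subnet-group",
    PubliclyAccessible=False,   # private IP only; traffic stays on Direct Connect
)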

 

 

Question 232

 

A company is using an Amazon Aurora MySQL database with Performance Insights enabled. A database specialist is checking Performance Insights and observes an alert message that starts with the following phrase: "Performance Insights is unable to collect SQL Digest statistics on new queries…"
Which action will resolve this alert message?

 

A. 

Truncate the events_statements_summary_by_digest table.

 

B. 

Change the AWS Key Management Service (AWS KMS) key that is used to enable Performance Insights.

 

C. 

Set the value for the performance_schema parameter in the parameter group to 1.

 

D. 

Disable and reenable Performance Insights to be effective in the next maintenance window.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PerfInsights.UsingDashboard.AnalyzeDBLoad.AdditionalMetrics.MySQL.html
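
A minimal boto3 sketch of option C, assuming a hypothetical custom DB parameter group attached to the Aurora MySQL instances. performance_schema is a static parameter, so the change takes effect only after a reboot.

import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="aurora-mysql-custom",   # hypothetical group name
    Parameters=[{
        "ParameterName": "performance_schema",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",          # static parameter
    }],
)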

 

Question 233

 

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app. The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.
Which solution will provide the MOST cost optimization of the DynamoDB database layer?

 

A. 

Change the DynamoDB tables to use on-demand capacity.

 

B. 

Use AWS Auto Scaling and configure time-based scaling.

 

C. 

Enable DynamoDB capacity-based auto scaling.

 

D. 

Enable DynamoDB Accelerator (DAX).

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
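
Because the daily bell curve is predictable, option B's time-based scaling can be sketched with the Application Auto Scaling API. The table name, cron expressions, and capacity numbers below are hypothetical.

import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/bike-tracking",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=1000,
    MaxCapacity=100000,
)

# Scale up ahead of the expected peak...
autoscaling.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ResourceId="table/bike-tracking",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    ScheduledActionName="scale-up-for-peak",
    Schedule="cron(0 10 * * ? *)",
    ScalableTargetAction={"MinCapacity": 50000, "MaxCapacity": 100000},
)

# ...and back down overnight.
autoscaling.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ResourceId="table/bike-tracking",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    ScheduledActionName="scale-down-overnight",
    Schedule="cron(0 22 * * ? *)",
    ScalableTargetAction={"MinCapacity": 1000, "MaxCapacity": 5000},
)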

 

Question 234

 

A company has a quarterly customer survey. The survey uses an Amazon EC2 instance that is hosted in a public subnet to host a customer survey website. The company uses an Amazon RDS DB instance that is hosted in a private subnet in the same VPC to store the survey results.
The company takes a snapshot of the DB instance after a survey is complete, deletes the DB instance, and then restores the DB instance from the snapshot when the survey needs to be conducted again. A database specialist discovers that the customer survey website times out when it attempts to establish a connection to the restored DB instance.
What is the root cause of this problem?

 

A. 

The VPC peering connection has not been configured properly for the EC2 instance to communicate with the DB instance.

 

B. 

The route table of the private subnet that hosts the DB instance does not have a NAT gateway configured for communication with the EC2 instance.

 

C. 

The public subnet that hosts the EC2 instance does not have an internet gateway configured for communication with the DB instance.

 

D. 

The wrong security group was associated with the new DB instance when it was restored from the snapshot.

 

Question 235

 

A company wants to improve its ecommerce website on AWS. A database specialist decides to add Amazon ElastiCache for Redis in the implementation stack to ease the workload off the database and shorten the website response times. The database specialist must also ensure the ecommerce website is highly available within the company's AWS Region.
How should the database specialist deploy ElastiCache to meet this requirement?

 

A. 

Launch an ElastiCache for Redis cluster using the AWS CLI with the --cluster-enabled switch.

 

B. 

Launch an ElastiCache for Redis cluster and select read replicas in different Availability Zones.

 

C. 

Launch two ElastiCache for Redis clusters in two different Availability Zones. Configure Redis streams to replicate the cache from the primary cluster to another.

 

D. 

Launch an ElastiCache cluster in the primary Availability Zone and restore the cluster's snapshot to a different Availability Zone during disaster recovery.

 

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html

 

Question 236

 

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM. The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered.
How should the database administrator remediate this issue at the lowest cost?

 

A. 

Enable auto scaling and set the target usage rate to 90%.

 

B. 

Switch the table to provisioned mode and enable auto scaling.

 

C. 

Switch the table to provisioned mode and set the throughput to the peak value.

 

D. 

Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.

 

https://repost.aws/knowledge-center/dynamodb-table-throttled

 

Short description

Here are some of the common throttling issues that you might face:

·         Your DynamoDB table has adequate provisioned capacity, but most of the requests are being throttled.

·         You activated AWS Application Auto Scaling for DynamoDB, but your DynamoDB table is being throttled.

·         Your DynamoDB table is in on-demand capacity mode, but the table is being throttled.

·         You have a hot partition in your table.

·         Your table's traffic is exceeding your account throughput quotas.
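
A minimal boto3 sketch of option B under hypothetical names and numbers: switch the table to provisioned mode, then attach a target-tracking policy. A target around 70% (rather than the 90% in option A) leaves headroom for the rapid 5 PM ramp.

import boto3

dynamodb = boto3.client("dynamodb")
autoscaling = boto3.client("application-autoscaling")

# Switch from on-demand (PAY_PER_REQUEST) to provisioned mode.
dynamodb.update_table(
    TableName="game-scores",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 5000, "WriteCapacityUnits": 5000},
)

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/game-scores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5000,
    MaxCapacity=50000,
)

autoscaling.put_scaling_policy(
    PolicyName="wcu-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/game-scores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,   # keep utilization near 70% to absorb spikes
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)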

 

 

Question 237

 

An IT company wants to reduce its database operation costs in its development environment. The company's workflow creates an Amazon Aurora MySQL DB cluster for each development group. The DB clusters are used for only 8 hours a day. The DB clusters can be deleted at the end of a development cycle, which lasts 2 weeks.
Which solution will meet these requirements MOST cost-effectively?

 

A. 

Use AWS CloudFormation templates. Deploy a stack with a DB cluster for each development group. Delete the stack at the end of each development cycle.

 

B. 

Use the Aurora cloning feature. Deploy a single development and test Aurora DB instance. Create clone instances for the development groups. Delete the clones at the end of each development cycle.

 

C. 

Use Aurora Replicas. From the primary writer instance, create read replicas for each development group. Promote each read replica to a standalone DB cluster. Delete the standalone DB cluster at the end of each development cycle.

 

D. 

Use Aurora Serverless. Restore a current Aurora snapshot to an Aurora Serverless cluster for each development group. Select the option to pause the compute capacity on the cluster after a specified amount of time with no activity. Delete the Aurora Serverless cluster at the end of each development cycle.

 

Question 238

 

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet.
What should the database specialist do to meet this requirement?

 

A. 

Modify the Aurora Serverless cluster by selecting a VPC with private subnets.

 

B. 

Modify the Aurora Serverless cluster by unchecking the publicly accessible option.

 

C. 

Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API.

 

D. 

Create a gateway VPC endpoint for RDS Data API.

 

https://aws.amazon.com/about-aws/whats-new/2020/02/amazon-rds-data-api-now-supports-aws-privatelink/

 

You can now use AWS PrivateLink to privately access Amazon RDS Data API for Aurora Serverless from your Amazon Virtual Private Cloud (Amazon VPC) without using public IPs, and without requiring the traffic to traverse across the Internet. You can now submit your SQL statements to Amazon RDS Data API without requiring an Internet Gateway in your VPC. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. Amazon RDS Data API customers can now use private IP connectivity and security groups to meet their specific compliance requirements.  
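
A minimal boto3 sketch of option C, using hypothetical VPC, subnet, and security group IDs. The service name follows the com.amazonaws.<region>.rds-data pattern.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.rds-data",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,   # the default Data API hostname resolves privately
)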

 

 

Question 239

 

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.
Which solution will meet these requirements MOST cost-effectively?

 

A. 

Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.

 

B. 

Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.

 

C. 

Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.

 

D. 

Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.

 

https://aws.amazon.com/blogs/aws/fine-grained-access-control-for-amazon-dynamodb/

 

Question 240

 

A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward.
Which solution meets these requirements?

 

A. 

Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies by taking a mysqldump backup from the RDS for MySQL DB instances and importing it into the new EC2 instances.

 

B. 

Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.

 

C. 

Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.

 

D. 

Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.

 

 

Question 241

 

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.
Which combination of actions must the application development team take to meet these requirements? (Choose two.)

 

A. 

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

 

B. 

Make a copy of the DB snapshot, and set the encryption option to disable.

 

C. 

Share the DB snapshot by setting the DB snapshot visibility option to public.

 

D. 

Make a copy of the DB snapshot, and set the encryption option to enable.

 

E. 

Share the DB snapshot by using the default AWS KMS encryption key.

 

https://repost.aws/knowledge-center/rds-snapshots-share-account
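
The sharing mechanics in the "WeShare" account can be sketched with boto3 as follows, using hypothetical identifiers and account IDs. An automated snapshot cannot be shared directly, so it is first copied to a manual snapshot that stays encrypted under the custom KMS key; the "WeReceive" account must also be granted use of that key in its key policy (option A).

import boto3

rds = boto3.client("rds")

# Copy the automated snapshot to a manual snapshot, keeping encryption
# under the custom KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:weshare-db-2023-04-28-06-00",  # hypothetical
    TargetDBSnapshotIdentifier="weshare-db-manual-copy",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/EXAMPLE",     # custom key
)

# Share the manual copy with the "WeReceive" account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="weshare-db-manual-copy",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],   # hypothetical "WeReceive" account ID
)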

Question 242

 

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:
Real-time inserts through Amazon Kinesis Data Firehose
Bulk inserts through COPY commands from Amazon S3
Analytics through SQL queries
Recently, the cluster has started to experience performance issues.
Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

 

A. 

Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.

 

B. 

Stream real-time data into Redshift temporary tables before loading the data into permanent tables.

 

C. 

For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.

 

D. 

For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.

 

E. 

Optimize analytics SQL queries to use sort keys.

 

F. 

Avoid using temporary tables in analytics SQL queries.

 

Question 243

 

An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS. The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration.
Which solution meets these requirements?

 

A. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility).

 

B. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2

 

C. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility).

 

D. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2.

 

https://docs.aws.amazon.com/documentdb/latest/developerguide/docdb-migration.html#docdb-migration-approaches

 

Question 244

 

A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot. Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully.
What could be the reason for this?

 

A. 

When stopping the DB instance, the option to create a snapshot should have been selected.

 

B. 

When stopping the DB instance, the duration for stopping the DB instance should have been selected.

 

C. 

Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.

 

D. 

Stopped DB instances will automatically restart if the instance is not manually started after 7 days.

 

https://repost.aws/knowledge-center/rds-stop-seven-days

 

 

Question 245

 

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads.
Which solution will meet these requirements in the MOST operationally efficient manner?

 

A. 

Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data

 

B. 

Use Amazon Redshift for relational data and JSON data.

 

C. 

Use Amazon RDS for relational data. Use Amazon Neptune for JSON data

 

D. 

Use Amazon Redshift for relational data. Use Amazon S3 for JSON data.

 

https://docs.aws.amazon.com/redshift/latest/dg/super-overview.html

 

Question 246

 

A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover.
Which solution on AWS will meet these requirements with the LEAST operational overhead?

 

A. Deploy an Amazon RDS DB instance with a read replica.

 

B. Deploy an Amazon RDS Multi-AZ DB instance.

 

C. Deploy Amazon DynamoDB global tables.

 

D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.

 

 

Question 247

 

A social media company is using Amazon DynamoDB to store user profile data and user activity data. Developers are reading and writing the data, causing the size of the tables to grow significantly. Developers have started to face performance bottlenecks with the tables.
Which solution should a database specialist recommend to read items the FASTEST without consuming all the provisioned throughput for the tables?

 

A. 

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

 

B. 

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

 

C. 

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

 

D. 

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

 

Question 248

 

A pharmaceutical company's drug search API is using an Amazon Neptune DB cluster. A bulk uploader process automatically updates the information in the database a few times each week. A few weeks ago during a bulk upload, a database specialist noticed that the database started to respond frequently with a ThrottlingException error. The problem also occurred with subsequent uploads. The database specialist must create a solution to prevent ThrottlingException errors for the database. The solution must minimize the downtime of the cluster.
Which solution meets these requirements?

 

A. Create a read replica that uses a larger instance size than the primary DB instance. Fail over the primary DB instance to the read replica.

 

B. Add a read replica to each Availability Zone. Use an instance for the read replica that is the same size as the primary DB instance. Keep the traffic between the API and the database within the Availability Zone.

 

C. Create a read replica that uses a larger instance size than the primary DB instance. Offload the reads from the primary DB instance.

 

D. Take the latest backup, and restore it in a DB cluster of a larger size. Point the application to the newly created DB cluster.

 

Question 249

 

A global company is developing an application across multiple AWS Regions. The company needs a database solution with low latency in each Region and automatic disaster recovery. The database must be deployed in an active-active configuration with automatic data synchronization between Regions.
Which solution will meet these requirements with the LOWEST latency?

 

A. 

Amazon RDS with cross-Region read replicas

 

B. 

Amazon DynamoDB global tables

 

C. 

Amazon Aurora global database

 

D. 

Amazon Athena and Amazon S3 with S3 Cross-Region Replication

 

Question 250

 

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC. The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.
Which security strategy should a database specialist implement to meet these requirements?

 

A. Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.

 

B. Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.

 

C. Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.

 

D. Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.

 

https://docs.aws.amazon.com/qldb/latest/developerguide/vpc-endpoints.html

 

Question 251

 

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.
The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.
Which solution will meet these requirements?

 

A. 

Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

 

B. 

Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

 

C. 

Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

 

D. 

Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html

VPC endpoints for DynamoDB can alleviate these challenges. A VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet. Your EC2 instances do not require public IP addresses, and you don't need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
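
A minimal boto3 sketch of option B, with hypothetical VPC and route table IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # the private subnet's route table
)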

 

Question 252

 

A company's database specialist is building an Amazon RDS for Microsoft SQL Server DB instance to store hundreds of records in CSV format. A customer service tool uploads the records to an Amazon S3 bucket. An employee who previously worked at the company already created a custom stored procedure to map the necessary CSV fields to the database tables. The database specialist needs to implement a solution that reuses this previous work and minimizes operational overhead.
Which solution will meet these requirements?

 

A. 

Create an Amazon S3 event to invoke an AWS Lambda function. Configure the Lambda function to parse the .csv file and use a SQL client library to run INSERT statements to load the data into the tables.

 

B. 

Write a custom .NET app that is hosted on Amazon EC2. Configure the .NET app to load the .csv file and call the custom stored procedure to insert the data into the tables.

 

C. 

Download the .csv file from Amazon S3 to the RDS D drive by using an AWS msdb stored procedure. Call the custom stored procedure to insert the data from the RDS D drive into the tables.

 

D. 

Create an Amazon S3 event to invoke AWS Step Functions to parse the .csv file and call the custom stored procedure to insert the data into the tables.

 

Question 253

 

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS. The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration.
What is the MOST operationally efficient solution that meets these requirements?

 

A. 

Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation.

 

B. 

Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase.

 

C. 

Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.

 

D. 

Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.

 

https://repost.aws/knowledge-center/dms-mapping-oracle-postgresql
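
Option B's mapping rule can be sketched as a DMS table-mapping document, built here as a Python dict with hypothetical wildcard locators. The convert-lowercase rule-action is applied to both table and column names.

import json

table_mappings = {
    "rules": [
        {
            "rule-type": "transformation",
            "rule-id": "1",
            "rule-name": "lowercase-tables",
            "rule-target": "table",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "convert-lowercase",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "lowercase-columns",
            "rule-target": "column",
            "object-locator": {"schema-name": "%", "table-name": "%",
                               "column-name": "%"},
            "rule-action": "convert-lowercase",
        },
    ]
}

print(json.dumps(table_mappings, indent=2))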

Question 254

 

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.
Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

 

A. 

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

 

B. 

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

 

C. 

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

 

D. 

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

 

E. 

Modify the system table to enable logging for each user.

https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html

Amazon Redshift logs information about connections and user activities in your database. These logs help you to monitor the database for security and troubleshooting purposes, a process called database auditing. The logs can be stored in:

·         Amazon S3 buckets - This provides access with data-security features for users who are responsible for monitoring activities in the database.

·         Amazon CloudWatch - You can view audit-logging data using the features built into CloudWatch, such as visualization features and setting actions.
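
A minimal boto3 sketch of options B and D together, under hypothetical cluster, bucket, and parameter group names:

import boto3

redshift = boto3.client("redshift")

# Option B: deliver connection logs, user logs, and user activity logs to S3.
redshift.enable_logging(
    ClusterIdentifier="finance-dw",
    BucketName="finance-dw-audit-logs",
    S3KeyPrefix="audit/",
)

# Option D: user activity logging also requires this parameter to be true.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="finance-dw-params",
    Parameters=[{
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "true",
    }],
)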

 

Question 255

 

A bank is using an Amazon RDS for MySQL DB instance in a proof of concept. A database specialist is evaluating automated database snapshots and cross-Region snapshot copies as part of this proof of concept. After validating three automated snapshots successfully, the database specialist realizes that the fourth snapshot was not created.
Which of the following are possible reasons why the snapshot was not created? (Choose two.)

 

A. 

A copy of the automated snapshot for this DB instance is in progress within the same AWS Region.

 

B. 

A copy of a manual snapshot for this DB instance is in progress for only certain databases within the DB instance.

 

C. 

The RDS maintenance window is not specified.

 

D. 

The DB instance is in the STORAGE_FULL state.

 

E. 

RDS event notifications have not been enabled.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

Automated backups follow these rules:

·         Your database must be in the available state for automated backups to occur. Automated backups don't occur while your database is in a state other than available, for example, storage_full.

·         Automated backups don't occur while a DB snapshot copy is running in the same AWS Region for the same database.

 

Question 256

 

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine. The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO.
Which solution will meet these requirements?

 

A. 

Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover.

 

B. 

Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server.

 

C. 

Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance in the disaster recovery Region.

 

D. 

Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region.

 

Question 257

 

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS.
Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

 

A. 

Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux replatforming assistant for Microsoft SQL Server Databases.

 

B. 

Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.

 

C. 

On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances.

 

D. 

On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an options group.

 

E. 

Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration.

F. On the AWS Management Console, set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration.

 

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/replatform-sql-server.html

 

The Windows to Linux replatforming assistant for Microsoft SQL Server Databases service is a scripting tool. It helps you move existing Microsoft SQL Server workloads from a Windows to a Linux operating system.

A Microsoft SQL Server database can be replatformed from an EC2 Windows instance to an EC2 Linux instance running Microsoft SQL Server. It can also be replatformed to the VMware Cloud running Microsoft SQL Server Linux on AWS.

 

Question 258

 

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days.
What should a database specialist do to meet this requirement with the LEAST amount of downtime?

 

A. 

Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.

 

B. 

Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.

 

C. 

Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.

 

D. 

Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.
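
Option B's snapshot-copy-restore sequence can be sketched with boto3 as follows; identifiers and the KMS key reference are hypothetical. Downtime is limited to the final endpoint cutover.

import boto3

rds = boto3.client("rds")

rds.create_db_snapshot(
    DBInstanceIdentifier="blog-db",
    DBSnapshotIdentifier="blog-db-unencrypted",
)
rds.get_waiter("db_snapshot_completed").wait(DBSnapshotIdentifier="blog-db-unencrypted")

# Copying is the only way to add encryption to an existing unencrypted snapshot.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="blog-db-unencrypted",
    TargetDBSnapshotIdentifier="blog-db-encrypted",
    KmsKeyId="alias/aws/rds",   # or a customer managed key
)

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="blog-db-v2",
    DBSnapshotIdentifier="blog-db-encrypted",
)
# Final cutover: update the application endpoint to blog-db-v2, then delete
# the unencrypted instance.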

 

Question 259

 

A database specialist is planning to migrate a MySQL database to Amazon Aurora. The database specialist wants to configure the primary DB cluster in the us-west-2 Region and the secondary DB cluster in the eu-west-1 Region. In the event of a disaster recovery scenario, the database must be available in eu-west-1 with an RPO of a few seconds. Which solution will meet these requirements?

 

A. 

Use Aurora MySQL with the primary DB cluster in us-west-2 and a cross-Region Aurora Replica in eu-west-1

 

B. 

Use Aurora MySQL with the primary DB cluster in us-west-2 and binlog-based external replication to eu-west-1

 

C. 

Use an Aurora MySQL global database with the primary DB cluster in us-west-2 and the secondary DB cluster in eu-west-1

 

D. 

Use Aurora MySQL with the primary DB cluster in us-west-2. Use AWS Database Migration Service (AWS DMS) change data capture (CDC) replication to the secondary DB cluster in eu-west-1

 

An Aurora global database provides more comprehensive failover capabilities than the failover provided by a default Aurora DB cluster. By using an Aurora global database, you can plan for and recover from disaster fairly quickly. Recovery from disaster is typically measured using values for RTO and RPO.

·         Recovery time objective (RTO) – The time it takes a system to return to a working state after a disaster. In other words, RTO measures downtime. For an Aurora global database, RTO can be in the order of minutes.

·         Recovery point objective (RPO) – The amount of data that can be lost (measured in time). For an Aurora global database, RPO is typically measured in seconds. With an Aurora PostgreSQL–based global database, you can use the rds.global_db_rpo parameter to set and track the upper bound on RPO, but doing so might affect transaction processing on the primary cluster's writer node.
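
A minimal boto3 sketch of option C, using hypothetical cluster identifiers: promote the existing us-west-2 cluster to a global database, then attach an empty secondary cluster in eu-west-1 that Aurora keeps in sync through storage-level replication.

import boto3

rds_primary = boto3.client("rds", region_name="us-west-2")
rds_secondary = boto3.client("rds", region_name="eu-west-1")

rds_primary.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-west-2:123456789012:cluster:app-primary"   # hypothetical
    ),
)

# The secondary cluster is created empty and attached to the global cluster;
# Aurora then replicates changes with typical lag measured in seconds or less.
rds_secondary.create_db_cluster(
    DBClusterIdentifier="app-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
)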

 

Question 260

 

An ecommerce company is planning to launch a custom customer relationship management (CRM) application on AWS. The development team selected Microsoft SQL Server as the database engine for this deployment. The CRM application will require operating system access because the application will manipulate files and packages on the server hosting the database. A senior database engineer must help the application team select a suitable deployment model for SQL Server. The deployment model should be optimized for the workload requirements.

Which deployment option should the database engineer choose that involves the LEAST operational overhead?

 

A. 

Run SQL Server on Amazon EC2 and grant elevated privileges for both the database instance and the host operating system.

 

B. 

Use Amazon RDS for SQL Server and grant elevated privileges for both the database instance and the host operating system.

 

C. 

Run SQL Server on Amazon EC2 and grant elevated privileges for the database instance.

 

D. 

Use Amazon RDS Custom for SQL Server and grant elevated privileges for both the database instance and the host operating system.

 

https://pages.awscloud.com/Amazon-RDS-Custom-for-SQL-Server-Technical-Overview_2022_0907-DAT_OD.html

 

Amazon RDS Custom is a managed database service that allows businesses with applications that need customization of the underlying operating system and databases that support them while getting all the automation, durability, and scalability benefits of a managed database service.

 

Question 261

 

A company uses Amazon DynamoDB to store its customer data. The DynamoDB table is designed with the user ID as the partition key value and multiple other non-key attributes. An external application needs to access data for specific user IDs. The external application must have access only to items with specific partition key values.

What should the database specialist do to meet these requirements?

 

A. 

Use the dynamodb:ReturnValues condition key in the external application's IAM policy to grant access.

B. 

Use a projection expression to select specific users from the DynamoDB table for the external application.

 

C. 

Use the ExecuteStatement API operation to select specific users from the DynamoDB table for the external application.

 

D. 

Use the dynamodb:LeadingKeys condition key in the external application's IAM policy to grant access.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
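
Option D can be sketched as an IAM policy, built here as a Python dict. The table ARN is hypothetical, and the policy variable assumes the external application's users federate through Amazon Cognito; other identity variables work the same way.

import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/CustomerData",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Only items whose partition key matches the caller are visible.
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}

print(json.dumps(policy, indent=2))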

 

Question 262

 

A city’s weather forecast team is using Amazon DynamoDB in the data tier for an application. The application has several components. The analysis component of the application requires repeated reads against a large dataset. The application has started to temporarily consume all the read capacity in the DynamoDB table and is negatively affecting other applications that need to access the same data.
Which solution will resolve this issue with the LEAST development effort?

 

A. 

Use DynamoDB Accelerator (DAX)

 

B. 

Use Amazon CloudFront in front of DynamoDB

 

C. 

Create a DynamoDB table with a local secondary index (LSI)

 

D. 

Use Amazon ElastiCache in front of DynamoDB

 

Question 263

 

A company is creating a serverless application that uses multiple AWS services and stores data on an Amazon RDS DB instance. The database credentials must be stored securely. An AWS Lambda function must be able to access the credentials. The company also must rotate the database password monthly by using an automated solution.
What should a database specialist do to meet those requirements in the MOST secure manner?

 

A. 

Store the database credentials by using AWS Systems Manager Parameter Store. Enable automatic rotation of the password. Use the AWS Cloud Development Kit (AWS CDK) in the Lambda function to retrieve the credentials from Parameter Store

 

B. 

Encrypt the database credentials by using AWS Key Management Service (AWS KMS). Store the credentials in Amazon S3. Use an S3 Lifecycle policy to rotate the password. Retrieve the credentials by using Python code in Lambda

 

C. 

Store the database credentials by using AWS Secrets Manager. Enable automatic rotation of the password. Configure the Lambda function to use the Secrets Manager API to retrieve the credentials

 

D. 

Store the database credentials in an Amazon DynamoDB table. Assign an IAM role to the Lambda function to grant the Lambda function read-only access to the DynamoDB table. Rotate the password by using another Lambda function that runs monthly
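
A minimal sketch of option C as it would look inside the Lambda function, with a hypothetical secret name that follows the standard RDS secret format:

import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Secrets Manager rotates the password automatically; the function always
    # reads the current version at run time.
    response = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
    creds = json.loads(response["SecretString"])
    # creds["username"] / creds["password"] / creds["host"] per the RDS format
    return {"host": creds["host"], "user": creds["username"]}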

 

Question 264

 

A company wants to migrate its on-premises Oracle database to Amazon RDS for PostgreSQL by using AWS Database Migration Service (AWS DMS). A database specialist needs to evaluate the migration task settings and data type conversion in an AWS DMS task.
What should the database specialist do to identify the optimal migration method?

 

A. 

Use the AWS Schema Conversion Tool (AWS SCT) database migration assessment report

 

https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_AssessmentReport.html

 

B. 

Use the AWS Schema Conversion Tool (AWS SCT) multiserver assessor.

 

C. 

Use an AWS DMS pre-migration assessment

 

D. 

Use the AWS DMS data validation tool

 

 

Question 265

 

An ecommerce company runs an application on Amazon RDS for SQL Server 2017 Enterprise edition. Due to the increase in read volume, the company’s application team is planning to offload the read transactions by adding a read replica to the RDS for SQL Server DB instance.
What architectural conditions should a database specialist set? (Choose two.)

 

A. 

Ensure that the automatic backups are turned on for the RDS DB instance

 

B. 

Ensure the backup retention value is set to 0 for the RDS DB instance

 

C. 

Ensure the RDS DB instance is set to Multi-AZ

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.ReadReplicas.html

Before a DB instance can serve as a source for replication, you must enable automatic backups on it by setting the backup retention period to a value other than 0. The source DB instance must also be a Multi-AZ deployment with Always On Availability Groups (AGs); choosing this deployment type also enforces that automatic backups are enabled. (A sketch of both settings follows the answer choices.)

 

D. 

Ensure the RDS DB instance is set to Single-AZ

 

E. 

Ensure the RDS DB instance is in a stopped state to turn on the read replica
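
As a rough illustration of the two prerequisites called out above (options A and C), the boto3 call below turns on automatic backups and Multi-AZ in one step; the instance identifier and retention value are hypothetical.

import boto3

rds = boto3.client("rds")

# Both settings are prerequisites for SQL Server read replicas:
# retention > 0 enables automatic backups, and the source must be
# a Multi-AZ deployment with Always On AGs.
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",   # hypothetical
    BackupRetentionPeriod=7,                 # any value other than 0
    MultiAZ=True,
    ApplyImmediately=True,
)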

 

Question 266

 

A company uses AWS Lambda functions in a private subnet in a VPC to run application logic. The Lambda functions must not have access to the public internet. Additionally, all data communication must remain within the private network. As part of a new requirement, the application logic needs access to an Amazon DynamoDB table.
What is the MOST secure way to meet this new requirement?

 

A. 

Provision the DynamoDB table inside the same VPC that contains the Lambda functions

 

B. 

Create a gateway VPC endpoint for DynamoDB to provide access to the table

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html

A gateway VPC endpoint for DynamoDB enables Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet (see the sketch after the answer choices).

 

C. 

Use a network ACL to only allow access to the DynamoDB table from the VPC

 

D. 

Use a security group to only allow access to the DynamoDB table from the VPC
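
For option B, a minimal sketch of creating the gateway endpoint with boto3; the VPC ID, route table ID, and the Region in the service name are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint adds a route to DynamoDB through the AWS
# network, so the Lambda functions never touch the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)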

 

 

Question 267

 

A startup company is developing electric vehicles. These vehicles are expected to send real-time data to the AWS Cloud for data analysis. This data will include trip metrics, trip duration, and engine temperature. The database team decides to store the data for 15 days using Amazon DynamoDB.
How can the database team achieve this with the LEAST operational overhead?

 

A. 

Implement Amazon DynamoDB Accelerator (DAX) on the DynamoDB table. Use Amazon EventBridge (Amazon CloudWatch Events) to poll the DynamoDB table and drop items after 15 days

 

B. 

Turn on DynamoDB Streams for the DynamoDB table to push the data from DynamoDB to another storage location. Use AWS Lambda to poll and terminate items older than 15 days.

 

C. 

Turn on the TTL feature for the DynamoDB table. Use the TTL attribute as a timestamp and set the expiration of items to 15 days

 

D. 

Create an AWS Lambda function to poll the list of DynamoDB tables every 15 days. Drop the existing table and create a new table
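
Option C is the lowest-overhead approach; the sketch below enables TTL and writes an item that expires after 15 days. The table and attribute names are hypothetical.

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table; DynamoDB deletes expired items
# automatically and at no extra cost.
dynamodb.update_time_to_live(
    TableName="VehicleTelemetry",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each item carries a Unix epoch timestamp 15 days in the future.
dynamodb.put_item(
    TableName="VehicleTelemetry",
    Item={
        "VehicleId": {"S": "EV-001"},
        "TripTimestamp": {"N": str(int(time.time()))},
        "expires_at": {"N": str(int(time.time()) + 15 * 24 * 60 * 60)},
    },
)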

 

Question 268

 

A company is using an Amazon RDS Multi-AZ DB instance in its development environment. The DB instance uses General Purpose SSD storage. The DB instance provides data to an application that has I/O constraints and high online transaction processing (OLTP) workloads. The users report that the application is slow. A database specialist finds a high degree of latency in the database writes. The database specialist must decrease the database latency by designing a solution that minimizes operational overhead.
Which solution will meet these requirements?

 

A. 

Eliminate the Multi-AZ deployment. Run the DB instance in only one Availability Zone

 

B. 

Recreate the DB instance. Use the default storage type. Reload the data from an automatic snapshot

 

C. 

Switch the storage to Provisioned IOPS SSD on the DB instance that is running

 

D. 

Recreate the DB instance. Use Provisioned IOPS SSD storage. Reload the data from an automatic snapshot

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html

 

Question 269

 

A company wants to migrate its on-premises Oracle database to a managed open-source database engine in Amazon RDS by using AWS Database Migration Service (AWS DMS). A database specialist needs to identify the target engine in Amazon RDS based on the conversion percentage of database code objects such as stored procedures, functions, views, and database storage objects. The company will select the engine that has the least manual conversion effort.
What should the database specialist do to identify the target engine?

 

A. 

Use the AWS Schema Conversion Tool (AWS SCT) database migration assessment report

 

B. 

Use the AWS Schema Conversion Tool (AWS SCT) multiserver assessor

 

C. 

Use an AWS DMS pre-migration assessment

 

D. 

Use the AWS DMS data validation tool

 

https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_AssessmentReport.Multiserver.html

 

Question 270

 

A retail company runs its production database on Amazon RDS for SQL Server. The company wants more flexibility in backing up and restoring the database. A database specialist needs to create a native backup and restore strategy. The solution must take native SQL Server backups and store them in a highly scalable manner.
Which combination of steps should the database specialist take to meet these requirements? (Choose three.)

 

A. 

Set up an Amazon S3 destination bucket. Establish a trust relationship with an IAM role that includes permissions for Amazon RDS.

 

B. 

Set up an Amazon FSx for Windows File Server destination file system. Establish a trust relationship with an IAM role that includes permissions for Amazon RDS.

 

C. 

Create an option group. Add the SQLSERVER_BACKUP_RESTORE option to the option group

 

D. 

Modify the existing default option group. Add the SQLSERVER_BACKUP_RESTORE option to the option group

 

E. 

Back up the database by using the native BACKUP DATABASE TSQL command. Restore the database by using the RESTORE DATABASE TSQL command.

 

F. Back up the database by using the rds_backup_database stored procedure. Restore the database by using the rds_restore_database stored procedure.

 

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.BackupRestore.html

https://repost.aws/knowledge-center/native-backup-rds-sql-server

By using native backup and restore for SQL Server databases, you can create a differential or full backup of your on-premises database and store the backup files on Amazon S3. You can then restore to an existing Amazon RDS DB instance running SQL Server. You can also back up an RDS for SQL Server database, store it on Amazon S3, and restore it in other locations.

The general process for adding the native backup and restore option to a DB instance is the following:

1.       Create a new option group, or copy or modify an existing option group.

2.       Add the SQLSERVER_BACKUP_RESTORE option to the option group.

3.       Associate an AWS Identity and Access Management (IAM) role with the option. The IAM role must have access to an S3 bucket to store the database backups.
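
A hedged boto3 sketch of steps 1 through 3; every name, version, and ARN below is a placeholder.

import boto3

rds = boto3.client("rds")

# Step 1: create a new option group for SQL Server 2017 EE.
rds.create_option_group(
    OptionGroupName="sqlserver-backup-restore",
    EngineName="sqlserver-ee",
    MajorEngineVersion="14.00",
    OptionGroupDescription="Native backup/restore to Amazon S3",
)

# Steps 2 and 3: add SQLSERVER_BACKUP_RESTORE and attach the IAM
# role that can reach the destination S3 bucket.
rds.modify_option_group(
    OptionGroupName="sqlserver-backup-restore",
    OptionsToInclude=[
        {
            "OptionName": "SQLSERVER_BACKUP_RESTORE",
            "OptionSettings": [
                {
                    "Name": "IAM_ROLE_ARN",
                    "Value": "arn:aws:iam::123456789012:role/rds-s3-backup-role",
                }
            ],
        }
    ],
    ApplyImmediately=True,
)

Once the option group is associated with the DB instance, the rds_backup_database and rds_restore_database stored procedures become available on the instance.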

 

Question 271

 

A company has a variety of Amazon Aurora DB clusters. Each of these DB clusters has various configurations that meet specific sets of requirements. Depending on the team and the use case, these configurations can be organized into broader categories. A database specialist wants to implement a solution to make the storage and modification of the configuration parameters more systematic.
Which AWS service or feature should the database specialist use to meet these requirements?

 

A. AWS Systems Manager Parameter Store

 

B. DB parameter group

 

C. AWS Config with the Amazon RDS managed rules

 

D. AWS Secrets Manager

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html

A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances. In an Aurora DB cluster, the settings in the DB cluster parameter group apply to all of the DB instances in the cluster; by default, the DB parameter group for the DB engine and DB engine version is used for each DB instance in the cluster.
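
As an illustration, one cluster parameter group per configuration category could be created and tuned with boto3; the names and values here are hypothetical.

import boto3

rds = boto3.client("rds")

# One group per category of clusters makes parameter changes
# systematic: edit the group once, and every attached cluster
# picks up the change.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="analytics-aurora-mysql",
    DBParameterGroupFamily="aurora-mysql8.0",
    Description="Shared settings for the analytics team's clusters",
)

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="analytics-aurora-mysql",
    Parameters=[
        {
            "ParameterName": "max_connections",
            "ParameterValue": "2000",
            "ApplyMethod": "pending-reboot",
        }
    ],
)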

 

Question 272

 

A company is using Amazon Redshift for its data warehouse. During review of the last few AWS monthly bills, a database specialist notices charges for Amazon Redshift backup storage. The database specialist needs to decrease the cost of these charges in the future and must create a solution that provides notification if the charges exceed a threshold.
Which combination of actions will meet these requirements with the LEAST operational overhead? (Choose two.)

 

A. 

Migrate all manual snapshots to the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class

 

B. 

Use an automated snapshot schedule to take a snapshot once each day

 

C. 

Create an Amazon CloudWatch billing alarm to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic if the threshold is exceeded

 

D. 

Create a serverless AWS Glue job to run every 4 hours to describe cluster snapshots and send an email message if the threshold is exceeded

 

E. 

Delete manual snapshots that are not required anymore

 

Question 273

 

An online bookstore recently migrated its database from on-premises Oracle to Amazon Aurora PostgreSQL 13. The bookstore uses scheduled jobs to run customized SQL scripts to administer the Oracle database, running hours-long maintenance tasks, such as partition maintenance and statistics gathering. The bookstore's application team has reached out to a database specialist seeking an ideal replacement for scheduling jobs with Aurora PostgreSQL.
What should the database specialist implement to meet these requirements with MINIMAL operational overhead?

 

A. 

Configure an Amazon EC2 instance to run on a schedule to initiate database maintenance jobs

 

B. 

Configure AWS Batch with AWS Step Functions to schedule long-running database maintenance tasks

 

C. 

Create an Amazon EventBridge (Amazon CloudWatch Events) rule with AWS Lambda that runs on a schedule to initiate database maintenance jobs

 

D. 

Turn on the pg_cron extension in the Aurora PostgreSQL database and schedule the database maintenance tasks by using the cron.schedule function

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/PostgreSQL_pg_cron.html

You can use the PostgreSQL pg_cron extension to schedule maintenance commands within a PostgreSQL database. 
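
A minimal sketch of option D, assuming pg_cron is already listed in shared_preload_libraries for the cluster; the connection details and the maintenance command are hypothetical.

import psycopg2

# Connect to the writer endpoint (all values are placeholders).
conn = psycopg2.connect(
    host="bookstore.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="admin_user",
    password="REPLACE_ME",
)
conn.autocommit = True

with conn.cursor() as cur:
    # The extension is created once, in the database named by the
    # cron.database_name parameter (postgres by default).
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_cron;")
    # cron.schedule(schedule, command): run nightly at 03:00 UTC.
    cur.execute(
        "SELECT cron.schedule('0 3 * * *', "
        "'VACUUM ANALYZE bookstore.orders');"
    )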

 

Question 274

 

A company is preparing to release a new application. During a load test on the application before launch, the company noticed that its Amazon RDS for MySQL database responded more slowly than expected. As a result, the application did not meet performance goals. A database specialist must determine which SQL statements are consuming the most load.
Which set of steps should the database specialist take to obtain this information?

 

A. 

Navigate to RDS Performance Insights. Select the database that is associated with the application. Update the counter metrics to show top_sql. Update the time range to when the load test occurred. Review the top SQL statements.

 

B. 

Navigate to RDS Performance Insights. Select the database that is associated with the application. Update the time range to when the load test occurred. Change the slice to SQL. Review the top SQL statements.

 

C. 

Navigate to Amazon CloudWatch. Select the metrics for the appropriate DB instance. Review the top SQL statements metric for the time range when the load test occurred. Create a CloudWatch dashboard to watch during future load tests.

 

D. 

Navigate to Amazon CloudWatch. Find the log group for the application's database. Review the top-sql-statements log file for the time range when the load test occurred.

 

Question 275

 

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.
How can the database specialist minimize the performance degradation after failover?

 

A. 

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0

 

B. 

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1

 

C. 

Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture

 

D. 

Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan

 

Question 276

 

A company wants to move its on-premises Oracle database to an Amazon Aurora PostgreSQL DB cluster. The source database includes 500 GB of data, 900 stored procedures and functions, and application source code with embedded SQL statements. The company understands there are some database code objects and custom features that may not be automatically converted and may need some manual intervention. Management would like to complete this migration as fast as possible with minimal downtime.
Which tools and approach should be used to meet these requirements?

 

A. 

Use AWS DMS to perform data migration and to automatically create all schemas with Aurora PostgreSQL

 

B. 

Use AWS DMS to perform data migration and use the AWS Schema Conversion Tool (AWS SCT) to automatically generate the converted code

 

C. 

Use the AWS Schema Conversion Tool (AWS SCT) to automatically convert all types of Oracle schemas to PostgreSQL and migrate the data to Aurora

 

D. 

Use the dump and pg_dump utilities for both data migration and schema conversion

 

Question 277

 

A company recently launched a mobile app that has grown in popularity during the last week. The company started development in the cloud and did not initially follow security best practices during development of the mobile app. The mobile app gives customers the ability to use the platform anonymously. Platform architects use Amazon ElastiCache for Redis in a VPC to manage session affinity (sticky sessions) and cookies for customers.
The company's security team now mandates encryption in transit and encryption at rest for all traffic. A database specialist is using the AWS CLI to comply with this mandate.

Which combination of steps should the database specialist take to meet these requirements? (Choose three.)

 

 

A. 

Create a manual backup of the existing Redis replication group by using the create-snapshot command. Restore from the backup by using the create-replication-group command

 

B. 

Use the --transit-encryption-enabled parameter on the new Redis replication group

 

C. 

Use the --at-rest-encryption-enabled parameter on the existing Redis replication group

 

D. 

Use the --transit-encryption-enabled parameter on the existing Redis replication group

 

E. 

Use the --at-rest-encryption-enabled parameter on the new Redis replication group

 

F. Create a manual backup of the existing Redis replication group by using the CreateBackupSelection command. Restore from the backup by using the StartRestoreJob command

 

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/in-transit-encryption.html
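
Putting the correct steps together (options A, B, and E): back up the existing replication group, then create a new group from that snapshot with both encryption flags set. A boto3 sketch with hypothetical names:

import boto3

elasticache = boto3.client("elasticache")

# Step 1: manual backup of the existing (unencrypted) group.
elasticache.create_snapshot(
    ReplicationGroupId="sessions-redis",
    SnapshotName="sessions-redis-pre-encryption",
)

# Step 2: restore into a new group with encryption in transit and
# at rest; both settings can only be chosen at creation time.
elasticache.create_replication_group(
    ReplicationGroupId="sessions-redis-encrypted",
    ReplicationGroupDescription="Session store with TLS and at-rest encryption",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    SnapshotName="sessions-redis-pre-encryption",
    TransitEncryptionEnabled=True,
    AtRestEncryptionEnabled=True,
)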

 

Question 278

 

A company is using Amazon DocumentDB (with MongoDB compatibility) to manage its complex documents. Users report that an Amazon DocumentDB cluster takes a long time to return query results. A database specialist must investigate and resolve this issue.
Which of the following can the database specialist use to investigate the query plan and analyze the query performance?

 

A. 

AWS X-Ray deep linking

 

B. 

Amazon CloudWatch Logs Insights

 

C. 

MongoDB explain() method

 

D. 

AWS CloudTrail with a custom filter

 

Question 279

 

A company's database specialist is migrating a production Amazon RDS for MySQL database to Amazon Aurora MySQL. The source database is configured for Multi-AZ. The company's production team wants to validate the target database before switching the associated application over to use the new database endpoint. The database specialist plans to use AWS Database Migration Service (AWS DMS) for the migration.
Which steps should the database specialist perform to meet the production team's requirement? (Choose three.)

 

A. 

Enable automatic backups on the source database

 

B. 

Disable automatic backups on the source database

 

C. 

Enable binary logging. Set the binlog_format parameter to ROW on the source database.

 

D. 

Enable binary logging. Set the binlog_format parameter to MIXED on the source database

 

E. 

Use the source primary database as the source endpoint for the DMS task. Configure the task as full load plus change data capture(CDC) to complete the migration

 

F. 

Use the source secondary database as the source endpoint for the DMS task. Configure the task as full load plus change data capture (CDC) to complete the migration
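
If the row-based binary logging prerequisite (option C) is applied through a custom parameter group, it might look like the following boto3 sketch; the group name is hypothetical.

import boto3

rds = boto3.client("rds")

# AWS DMS change data capture from RDS for MySQL requires
# binlog_format = ROW; automatic backups must also stay enabled.
rds.modify_db_parameter_group(
    DBParameterGroupName="mysql-source-params",
    Parameters=[
        {
            "ParameterName": "binlog_format",
            "ParameterValue": "ROW",
            "ApplyMethod": "immediate",
        }
    ],
)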

 

Question 280

 

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.
What is the recommended strategy for this use case?

 

A. 

Use ElastiCache for Memcached with write-through and long time to live (TTL)

 

B. 

Use ElastiCache for Redis with lazy loading and short time to live (TTL)

 

C. 

Use ElastiCache for Memcached with lazy loading and short time to live (TTL)

 

D. 

Use ElastiCache for Redis with write-through and long time to live (TTL)

 

Question 281

 

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.
Which solution will meet this requirement with the LEAST operational overhead?

 

A. 

Publish RDS Performance insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance

 

B. 

Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.

 

C. 

Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU

 

D. 

Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights

 

Question 282

 

A database administrator is working on transferring data from an on-premises Oracle instance to an Amazon RDS for Oracle DB instance through an AWS Database Migration Service (AWS DMS) task with ongoing replication only. The database administrator noticed that the migration task failed after running successfully for some time. The logs indicate that there was a generic error. The database administrator wants to know which data definition language (DDL) statement caused this issue.
What should the database administrator do to identify this issue in the MOST operationally efficient manner?

 

A. 

Export AWS DMS logs to Amazon CloudWatch and identify the DDL statement from the AWS Management Console

 

B. 

Turn on logging for the AWS DMS task by setting the TARGET_LOAD action with the level of severity set to LOGGER_SEVERITY_DETAILED_DEBUG

 

C. 

Turn on DDL activity tracing in the RDS for Oracle DB instance parameter group

 

D. 

Turn on logging for the AWS DMS task by setting the TARGET_APPLY action with the level of severity set to LOGGER_SEVERITY_DETAILED_DEBUG

 

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.html
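
For option D, the task's logging level can be raised through the task settings JSON; a boto3 sketch with a placeholder task ARN:

import json
import boto3

dms = boto3.client("dms")

# Raise only the TARGET_APPLY component to detailed debug so the
# failing DDL statement shows up in the task's CloudWatch logs.
settings = {
    "Logging": {
        "EnableLogging": True,
        "LogComponents": [
            {
                "Id": "TARGET_APPLY",
                "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG",
            }
        ],
    }
}

dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(settings),
)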

 

Question 283

 

A company is migrating its 200 GB on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The original database columns include NOT NULL and foreign key constraints. A database administrator needs to complete the migration while following best practices for database migrations.
Which option meets these requirements to migrate the database to AWS?

 

A. 

Use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to migrate the database to an Aurora PostgreSQL DB cluster.

 

B. 

Create an AWS Lambda function to connect to the source database and load the data into the target Aurora PostgreSQL DB cluster.

 

C. 

Use the PostgreSQL tools pg_dump and pg_restore to migrate to the Aurora PostgreSQL DB cluster.

 

D. 

Create an Aurora PostgreSQL read replica and promote the read replica to become primary once it is synchronized.

 

Option C is the best choice here. Option A would have been correct if it did not include the AWS Schema Conversion Tool, which is unnecessary for a homogeneous PostgreSQL-to-PostgreSQL migration. Option B is not the most efficient approach, and option D is not applicable because Aurora doesn't support an on-premises database as the source for a read replica.

 

Question 284

 

A company is working on migrating a large Oracle database schema with 3,500 stored procedures to Amazon Aurora PostgreSQL. An application developer is using the AWS Schema Conversion Tool (AWS SCT) to convert code from Oracle to Aurora PostgreSQL. However, the code conversion is taking a longer time with performance issues. The application team has reached out to a database specialist to improve the performance of the AWS SCT conversion.
What should the database specialist do to resolve the performance issues?

 

A. 

In AWS SCT, turn on the balance speed with memory consumption performance option with the optimal memory settings on local desktop.

 

B. 

Provision the target Aurora PostgreSQL database with a higher instance class. In AWS SCT. turn on the balance speed with memory consumption performance option.

 

C. 

In AWS SCT, turn on the fast conversion with large memory consumption performance option and set the JavaOptions section to the maximum memory available.

 

D. 

Provision a client Amazon EC2 machine with more CPU and memory resources in the same AWS Region as the Aurora PostgreSQL database.

 

https://repost.aws/knowledge-center/dms-optimize-aws-sct-performance

 

Question 285

 

A company has a 12-node Amazon Aurora MySQL DB cluster. The company wants to use three specific Aurora Replicas to handle the workload from one of its read-only applications.
Which solution will meet this requirement with the LEAST operational overhead?

 

A. 

Use CNAMEs to set up DNS aliases for the three Aurora Replicas.

 

B. 

Configure an Aurora custom endpoint for the three Aurora Replicas.

 

C. 

Use the cluster reader endpoint. Configure the failover priority of the three Aurora Replicas.

 

D. 

Use the specific instance endpoints for each of the three Aurora Replicas.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html#Aurora.Endpoints.Custom
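
A custom reader endpoint pinned to the three replicas could be created as follows; the cluster and instance identifiers are hypothetical.

import boto3

rds = boto3.client("rds")

# The custom endpoint load-balances connections across only the
# listed replicas, so the read-only application needs one DNS name.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="aurora-mysql-prod",
    DBClusterEndpointIdentifier="reporting-readers",
    EndpointType="READER",
    StaticMembers=[
        "aurora-replica-4",
        "aurora-replica-5",
        "aurora-replica-6",
    ],
)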

 

Question 286

 

A company uses an Amazon Aurora MySQL DB cluster with the most recent version of the MySQL database engine. The company wants all data that is transferred between clients and the DB cluster to be encrypted.

What should a database specialist do to meet this requirement?

 

A. 

Turn on data encryption when modifying the DB cluster by using the AWS Management Console or by using the AWS CLI to call the modify-db-cluster command.

 

B. 

Download the key pair for the DB instance. Reference that file from the --key-name option when connecting with a MySQL client.

 

C. 

Turn on data encryption by using AWS Key Management Service (AWS KMS). Use the AWS KMS key to encrypt the connections between a MySQL client and the Aurora DB cluster.

 

D. 

Turn on the require_secure_transport parameter in the DB cluster parameter group. Download the root certificate for the DB instance. Reference that file from the --ssl-ca option when connecting with a MySQL client.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Security.html

You can require that all user connections to your Aurora MySQL DB cluster use SSL/TLS by using the require_secure_transport DB cluster parameter. By default, the require_secure_transport parameter is set to OFF. You can set the require_secure_transport parameter to ON to require SSL/TLS for connections to your DB cluster.

You can set the require_secure_transport parameter value by updating the DB cluster parameter group for your DB cluster. You don't need to reboot your DB cluster for the change to take effect.
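
A boto3 sketch of option D's parameter change; the parameter group name is hypothetical, and because require_secure_transport is dynamic, no reboot is needed.

import boto3

rds = boto3.client("rds")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-mysql-prod-params",
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "ON",
            "ApplyMethod": "immediate",
        }
    ],
)

# Clients then connect with the downloaded root certificate, e.g.:
#   mysql -h <cluster-endpoint> -u <user> -p --ssl-ca=global-bundle.pem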

 

Question 287

 

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account.
Which solution will meet this requirement with the LEAST operational effort?

 

A. 

Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.

 

B. 

Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.

 

C. 

Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.

 

D. 

Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.

 

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-amazon-rds-db-instance-to-another-vpc-or-account.html
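
For option B, a two-step boto3 sketch: share the snapshot from the source account, then restore it in the destination account. The account IDs and identifiers are placeholders.

import boto3

rds = boto3.client("rds")

# In the source account: grant the destination account restore
# access to a manual snapshot.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="mydb-final-snapshot",
    AttributeName="restore",
    ValuesToAdd=["210987654321"],
)

# In the destination account: restore a new DB instance from the
# shared snapshot's ARN.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-copy",
    DBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:mydb-final-snapshot"
    ),
)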

 

Question 288

 

A company uses Amazon DynamoDB as a data store for multi-tenant data. Approximately 70% of the reads by the company's application are strongly consistent. The current key schema for the DynamoDB table is as follows:

Partition key: OrgID

Sort key: TenantIDVersion

Due to a change in design and access patterns, the company needs to support strongly consistent lookups based on the new schema below:

Partition key: OrgIDTenantID

Sort key: Version

How can the database specialist implement this change?

 

A. 

Create a global secondary index (GSI) on the existing table with the specified partition and sort key.

 

B. 

Create a local secondary index (LSI) on the existing table with the specified partition and sort key.

 

C. 

Create a new table with the specified partition and sort key. Create an AWS Glue ETL job to perform the transformation and write the transformed data to the new table.

 

D. 

Create a new table with the specified partition and sort key. Use AWS Database Migration Service (AWS DMS) to migrate the data to the new table.

 

Question 289

 

A company is using Amazon Aurora with Aurora Replicas. A database specialist needs to split up two read-only applications so that each application connects to a different set of DB instances. The database specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?

 

A. 

Use a different instance endpoint for each application.

 

B. 

Use the reader endpoint for both applications.

 

C. 

Use the reader endpoint for one application and an instance endpoint for the other application.

 

D. 

Use different custom endpoints for each application.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html#Aurora.Overview.Endpoints.Types

 

Question 290

 

A company uses an Amazon DynamoDB table to store data for an application. The application requires full access to the table. Some employees receive direct access to the table, but a security policy restricts their access to only certain fields. The company wants to begin using a DynamoDB Accelerator (DAX) cluster on top of the DynamoDB table.
How can the company ensure that the security policy is maintained after the implementation of the DAX cluster?

 

A. 

Modify the IAM policies for the employees. Implement user-level separation that allows the employees to access the DAX cluster.

 

B. 

Modify the IAM policies for the IAM service role of the DAX cluster. Implement user-level separation to allow access to DynamoDB.

 

C. 

Modify the IAM policies for the employees. Allow the employees to access the DAX cluster without allowing the employees to access the DynamoDB table.

 

D. 

Modify the IAM policies for the employees. Allow the employees to access the DynamoDB table without allowing the employees to access the DAX cluster.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.access-control.html

If you are currently using IAM roles and policies to restrict access to DynamoDB tables data, then the use of DAX can subvert those policies. For example, a user could have access to a DynamoDB table via DAX but not have explicit access to the same table accessing DynamoDB directly

 

Question 291

 

A company uses an Amazon Redshift cluster to support its business intelligence (BI) team. The cluster has a maintenance window that overlaps with some business report jobs that run long-running queries on the cluster. During a recent maintenance window, the cluster went offline and restarted for an update. The BI team wants to know which queries were terminated during the maintenance window.
What should a database specialist do to obtain this information?

 

A. 

Look for the terminated queries in the SVL_QLOG view.

 

B. 

Look for the terminated queries in the SVL_QUERY_REPORT view.

 

C. 

Write a scalar SQL user-defined function to find the terminated queries.

 

D. 

Use a federated query to find the terminated queries.

 

https://docs.aws.amazon.com/redshift/latest/dg/r_SVL_QLOG.html

 

Question 292

 

A database specialist observes several idle connections in an Amazon RDS for MySQL DB instance. The DB instance is using RDS Proxy. An application is configured to connect to the proxy endpoint.
What should the database specialist do to control the idle connections in the database?

 

A. 

Modify the MaxConnectionsPercent parameter through the RDS Proxy console.

 

B. 

Use CALL mysql.rds_kill(thread-id) for the IDLE threads that are returned from the SHOW FULL PROCESSLIST command.

 

C. 

Modify the MaxIdleConnectionsPercent parameter for the RDS proxy.

 

D. 

Modify the max_connections configuration setting for the DB instance. Modify the ConnectionBorrowTimeout parameter for the RDS proxy.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy-managing.html#rds-proxy-connection-pooling-tuning.maxidleconnectionspercent

You can control the number of idle database connections that RDS Proxy can keep in the connection pool. RDS Proxy considers a database connection in its pool to be idle when there's been no activity on the connection for five minutes.

You specify the limit as a percentage of the maximum connections available for your database. The default value is 50 percent of MaxConnectionsPercent, and the upper limit is the value of MaxConnectionsPercent. With a high value, the proxy leaves a high percentage of idle database connections open. With a low value, the proxy closes a high percentage of idle database connections. If your workloads are unpredictable, consider setting a high value for MaxIdleConnectionsPercent. Doing so means that RDS Proxy can accommodate surges in activity without opening a lot of new database connections.
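
A sketch of option C with boto3; the proxy name is hypothetical, and the setting lives on the proxy's default target group.

import boto3

rds = boto3.client("rds")

# A lower value makes RDS Proxy close a larger share of idle
# database connections in its pool.
rds.modify_db_proxy_target_group(
    DBProxyName="mysql-app-proxy",
    TargetGroupName="default",
    ConnectionPoolConfig={
        "MaxIdleConnectionsPercent": 10,
    },
)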

 

 

Question 293

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.
What should a database specialist do to accommodate the changing requirements for DAX?

 

A. Increase the number of nodes in the existing DAX cluster.

 

B. Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.

 

C. Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.

 

D. Modify the node type in the existing DAX cluster.

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.cluster-management.html#DAX.cluster-management.scaling

Vertical scaling

If you have a large working set of data, your application might benefit from using larger node types. Larger nodes enable the cluster to store more data in memory, reducing cache misses and improving overall application performance. (All of the nodes in a DAX cluster must be of the same type.)

If your DAX cluster has a high rate of write operations or cache misses, your application might also benefit from using larger node types. Write operations and cache misses consume resources on the cluster's primary node. Therefore, using larger node types might increase the performance of the primary node and thereby allow a higher throughput for these types of operations.
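
Because DAX node types cannot be changed in place, option C stands up a new cluster on a larger node type; a boto3 sketch with hypothetical names and ARNs:

import boto3

dax = boto3.client("dax")

dax.create_cluster(
    ClusterName="catalog-dax-large",
    NodeType="dax.r5.2xlarge",      # larger node type
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::123456789012:role/DaxToDynamoDBRole",
)

# The application's DAX endpoint is then switched to the new
# cluster's endpoint once it becomes available.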

 

Question 294

 

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing.
What could be the root causes for this high target latency? (Choose two.)

 

A. 

There was ongoing maintenance on the replication instance.

 

B. 

The source endpoint was changed by modifying the task.

 

C. 

Loopback changes had affected the source and target instances.

 

D. 

There was no primary key or index in the target database.

 

E. 

There were resource bottlenecks in the replication instance.

 

https://repost.aws/knowledge-center/dms-high-target-latency

 

Question 295

 

A database specialist needs to set up an Amazon DynamoDB table. The table must exist in multiple AWS Regions and must provide point-in-time recovery of data.

Which combination of steps should the database specialist take to meet these requirements with the LEAST operational overhead? (Choose three.)

 

A. 

Enable DynamoDB Streams for a global table. Set the view type to new and old images.

 

B. 

Enable DynamoDB Streams for all replica tables.

 

C. 

Add a replica table for each Region. Ensure that table names are not already in use in each replica Region.

 

D. 

Add a replica table for each Region with a random suffix added to each table name.

 

E. 

Enable point-in-time recovery for the global table.

 

F. 

Enable point-in-time recovery for all replica tables.

 

https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_StreamSpecification.html

 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/pointintimerecovery_beforeyoubegin.html
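
Putting the three correct steps together, a boto3 sketch for a version 2019.11.21 global table; the table name and Regions are hypothetical.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Streams with new-and-old images must be on before adding replicas.
dynamodb.update_table(
    TableName="Orders",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# Add a replica table in a second Region; the name must be free there.
dynamodb.update_table(
    TableName="Orders",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)

# Point-in-time recovery is a per-replica, per-Region setting.
for region in ("us-east-1", "eu-west-1"):
    boto3.client("dynamodb", region_name=region).update_continuous_backups(
        TableName="Orders",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )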

 

Question 296

 

A company uses an Amazon Aurora MySQL DB cluster. A database specialist has configured the DB cluster to use the automated backup feature with a 10-day retention period. The company wants the database specialist to reduce the cost of the database backup storage as much as possible without causing downtime. It is more important for the company to optimize costs than it is to retain a large set of database backups.
Which set of actions should the database specialist take on the DB cluster to meet these requirements?

 

A. 

Disable the automated backup feature by changing the backup retention period to 0 days. Perform manual snapshots daily. Delete old snapshots.

 

B. 

Change the backup retention period to 1 day. Remove old manual snapshots if they are no longer required.

 

C. 

Keep the backup retention period at 10 days. Remove old manual snapshots if they are no longer required.

 

D. 

Disable the automated backup feature by changing the backup retention period to 0 days. Create a backup plan in AWS Backup to perform daily backups.

 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
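
Option B in boto3 form; the cluster identifier is hypothetical. Note that Aurora does not allow a retention period of 0, which rules out options A and D.

import boto3

rds = boto3.client("rds")

# Shrinking the retention window is an online change on Aurora.
rds.modify_db_cluster(
    DBClusterIdentifier="aurora-mysql-prod",
    BackupRetentionPeriod=1,    # the Aurora minimum
    ApplyImmediately=True,
)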

 

Question 297

 

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company’s operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.
Which solution will solve the throttling issue without requiring changes to the app?

 

A. 

Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

 

B. 

Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

 

C. 

Use on-demand capacity mode for the DynamoDB table.

 

D. 

Use DynamoDB Accelerator (DAX).
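
For option C, switching the table's capacity mode is a single API call and requires no application change; the table name is hypothetical.

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand mode bills per request and absorbs sudden spikes
# without throttling against provisioned limits.
dynamodb.update_table(
    TableName="MobileAppBackend",
    BillingMode="PAY_PER_REQUEST",
)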

 

Question 298

 

A global company needs to migrate from an on-premises Microsoft SQL Server database to a highly available database solution on AWS. The company wants to modernize its application and keep operational costs low. The current database includes secondary indexes and stored procedures that need to be included in the migration. The company has limited availability of database specialists to support the migration and wants to automate the process.
Which solution will meet these requirements?

 

A. 

Use AWS Database Migration Service (AWS DMS) to migrate all database objects from the on-premises SQL Server database to a Multi-AZ deployment of Amazon Aurora MySQL.

 

B. 

Use AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool (AWS SCT) to migrate all database objects from the on-premises SQL Server database to a Multi-AZ deployment of Amazon Aurora MySQL.

 

C. 

Rehost the on-premises SQL Server as a SQL Server Always On availability group. Host members of the availability group on Amazon EC2 instances. Use AWS Database Migration Service (AWS DMS) to migrate all database objects.

 

D. 

Rehost the on-premises SQL Server as a SQL Server Always On availability group. Host members of the availability group on Amazon EC2 instances in a single subnet that extends across multiple Availability Zones. Use SQL Server tools to migrate the data.

 

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-microsoft-sql-server-database-to-aurora-mysql-by-using-aws-dms-and-aws-sct.html

 

Question 299

 

A company is using an Amazon Aurora PostgreSQL DB cluster for a project. A database specialist must ensure that the database is encrypted at rest. The database size is 500 GB.
What is the FASTEST way to secure the data through encryption at rest in the DB cluster?

 

A. 

Take a manual snapshot of the unencrypted DB cluster. Create an encrypted copy of that snapshot in the same AWS Region as the unencrypted snapshot. Restore a DB cluster from the encrypted snapshot.

 

B. 

Create an AWS Key Management Service (AWS KMS) key in the same AWS Region and create a new encrypted Aurora cluster using this key.

 

C. 

Take a manual snapshot of the unencrypted DB cluster. Restore the unencrypted snapshot to a new encrypted Aurora PostgreSQL DB cluster.

 

D. 

Create a new encrypted Aurora PostgreSQL DB cluster. Use AWS Database Migration Service (AWS DMS) to migrate the data from the unencrypted DB cluster to the encrypted DB cluster.

 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations
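
A boto3 sketch of option A for an Aurora cluster; all identifiers are placeholders, and DB instances still have to be added to the restored cluster afterward.

import boto3

rds = boto3.client("rds")

# Encrypt a copy of the manual cluster snapshot with a KMS key.
rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier="aurora-pg-manual-snap",
    TargetDBClusterSnapshotIdentifier="aurora-pg-manual-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)

# Restore a new, encrypted cluster from the encrypted copy.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="aurora-pg-encrypted",
    SnapshotIdentifier="aurora-pg-manual-snap-encrypted",
    Engine="aurora-postgresql",
)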

 

Question 300

 

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records. This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table. The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.
Which solution will meet these requirements?

 

A. 

Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.

 

B. 

Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service (Amazon Elasticsearch Service) domain for queries.

 

C. 

Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.

 

D. 

Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

 

Question 301

 

A company is using Amazon Redshift. A database specialist needs to allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)

 

A. 

Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.

 

B. 

Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.

 

C. 

Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.

 

D. 

Use federated queries to access data in Amazon RDS.

 

E. 

Use data sharing to access data from the other Redshift clusters.

 

F. 

Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.

 

In addition to external tables created using the CREATE EXTERNAL TABLE command, Amazon Redshift can reference external tables defined in an AWS Glue or AWS Lake Formation catalog or an Apache Hive metastore. Use the CREATE EXTERNAL SCHEMA command to register an external database defined in the external catalog and make the external tables available for use in Amazon Redshift. If the external table exists in an AWS Glue or AWS Lake Formation catalog or Hive metastore, you don't need to create the table using CREATE EXTERNAL TABLE. To view external tables, query the SVV_EXTERNAL_TABLES system view.

By using federated queries in Amazon Redshift, you can query and analyze data across operational databases, data warehouses, and data lakes. With the Federated Query feature, you can integrate queries from Amazon Redshift on live data in external databases with queries across your Amazon Redshift and Amazon S3 environments. Federated queries can work with external databases in Amazon RDS for PostgreSQL, Amazon Aurora PostgreSQL-Compatible Edition, Amazon RDS for MySQL, and Amazon Aurora MySQL-Compatible Edition.

 

With data sharing, you can share live data with relative security and ease across Amazon Redshift clusters, AWS accounts, or AWS Regions for read purposes.
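
As one illustration, the external schemas for the Glue Data Catalog and for federated queries can be registered with SQL submitted through the Redshift Data API; every name, URI, and ARN below is a placeholder.

import boto3

redshift_data = boto3.client("redshift-data")

def run_sql(sql):
    # Submit a statement to the existing cluster asynchronously.
    return redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )

# Option B: expose Glue Data Catalog tables as an external schema.
run_sql(
    "CREATE EXTERNAL SCHEMA glue_catalog "
    "FROM DATA CATALOG DATABASE 'sales_db' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';"
)

# Option D: register the RDS for PostgreSQL database for federated queries.
run_sql(
    "CREATE EXTERNAL SCHEMA pg_federated "
    "FROM POSTGRES DATABASE 'appdb' SCHEMA 'public' "
    "URI 'apps.abc123.us-east-1.rds.amazonaws.com' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole' "
    "SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:pg-creds';"
)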

 

 

Question 302

 

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete. The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.
Which combination of changes will meet these requirements? (Choose two.)

 

A. 

Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.

 

B. 

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.

 

C. 

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.

 

D. 

Use parallel load with different data boundaries for larger tables.

 

E. 

Run the DMS tasks on a larger instance class. Increase local storage on the instance.

 

 

 

Question 303

 

A financial services company is running a MySQL database on premises. The database holds details about all customer interactions and the financial advice that the company provided. The write traffic to the database is well known and consistent. However, the read traffic is subject to significant and sudden increases for end-of-month reporting. The database is becoming overloaded during these periods of heavy read activity. The company decides to move the database to AWS. A database specialist needs to propose a solution in the AWS Cloud that will scale to meet the variable read traffic requirements without affecting the performance of write traffic. Scaling events must not require any downtime.
What is the MOST operationally efficient solution that meets these requirements?

 

A. 

Deploy a MySQL primary node on Amazon EC2 in one Availability Zone. Deploy a MySQL read replica on Amazon EC2 in a different Availability Zone. Configure a scheduled scaling event to increase the CPU capacity and RAM capacity within the MySQL read replica the day before each known traffic surge. Configure a scheduled scaling event to reduce the CPU capacity and RAM capacity within the MySQL read replica the day after each known traffic surge.

 

B. 

Deploy an Amazon Aurora MySQL DB cluster. Select a Cross-AZ configuration with an Aurora Replica. Create an Aurora Auto Scaling policy to adjust the number of Aurora Replicas based on CPU utilization. Direct all read-only reporting traffic to the reader endpoint for the DB cluster.

 

C. 

Deploy an Amazon RDS for MySQL Multi-AZ database as a write database. Deploy a second RDS for MySQL Multi-AZ database that is configured as an auto scaling read-only database. Use AWS Database Migration Service (AWS DMS) to continuously replicate data from the write database to the read-only database. Direct all read-only reporting traffic to the reader endpoint for the read-only database.

 

D. 

Deploy an Amazon DynamoDB database. Create a DynamoDB auto scaling policy to adjust the read capacity of the database based on target utilization. Direct all read traffic and write traffic to the DynamoDB database.

 

Question 304

 

A company has a Microsoft SQL Server 2017 Enterprise edition on Amazon RDS database with the Multi-AZ option turned on. Automatic backups are turned on and the retention period is set to 7 days. The company needs to add a read replica to the RDS DB instance.
How should a database specialist achieve this task?

 

A. 

Turn off the Multi-AZ feature, add the read replica, and turn Multi-AZ back on again.

 

B. 

Set the backup retention period to 0, add the read replica, and set the backup retention period to 7 days again.

 

C. 

Restore a snapshot to a new RDS DB instance and add the DB instance as a replica to the original database.

 

D. 

Add the new read replica without making any other changes to the RDS database.

 

https://aws.amazon.com/blogs/database/using-in-region-read-replicas-in-amazon-rds-for-sql-server/

When you create a read replica, Amazon RDS takes a snapshot of the primary DB instance and creates a new read-only instance from the snapshot. Creating or deleting the read replica doesn’t require any downtime on the primary DB instance. You can create up to five read replicas.
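
Since the prerequisites (Multi-AZ and automatic backups) are already met, option D is a single call; the identifiers are hypothetical.

import boto3

rds = boto3.client("rds")

# No change to Multi-AZ or the retention period is needed first.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-prod-replica-1",
    SourceDBInstanceIdentifier="sqlserver-prod",
)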

 

Question 305

 

A company is using a 1 TB Amazon RDS for PostgreSQL DB instance to store user data. During a security review, a security engineer sees that the DB instance is not encrypted at rest.
How should a database specialist correct this issue with the LEAST amount of downtime and no data loss?

 

A. 

Modify the DB instance by using the RDS management console, and enable encryption. Apply the changes immediately.

 

B. 

Create a manual DB instance snapshot and then create an encrypted copy of that snapshot. Use this snapshot to create a new encrypted DB instance. Modify the application to connect to the new DB instance.

 

C. 

Create a new encrypted DB instance and use AWS Database Migration Service (AWS DMS) to migrate the existing database to the encrypted DB instance. Once the instances are in sync, modify the application to connect to the new DB instance.

 

D. 

Create an encrypted read replica. Once the read replica is in sync, promote it to primary. Modify the application to connect to the new primary instance.

 

Question 306

 

Developers are stress testing an Amazon RDS DB instance. When they increase the number of simultaneous users to the expected level, the DB instance becomes unresponsive. A database specialist needs to monitor the DB instance to determine which SQL statements are causing load issues. The solution must minimize the effort that is required to filter the load by SQL statements.

Which solution will meet these requirements?

 

A. 

Monitor the DB instance's activity by using RDS events in the RDS console.

Incorrect. RDS events in the RDS console will provide information about starts and stops or similar events in the database. However, this solution will not identify the SQL statements that are overloading the database.

 

B. 

Monitor the DB instance's RDS Enhanced Monitoring metrics in the Amazon CloudWatch Logs console.

 

C. 

Monitor the DB instance's performance by using the RDS Performance Insights dashboard.

 

Performance Insights provides a prebuilt dashboard that gives you the ability to filter by various metrics and events that occur inside the RDS environment.

For more information about Performance Insights on Amazon RDS, see Monitoring with Performance Insights on Amazon RDS.

 

D. 

Monitor the DB instance's concurrent connections metrics in the Amazon CloudWatch metrics console.

 

Question 307

 

A financial company needs to migrate a highly available 4 TB production Oracle database to AWS. The average I/O usage is around 8,000 IOPS. At the end of every month, the company runs write-intensive batch jobs that consume up to 18,000 IOPS. The company has a service level agreement (SLA) to complete the batch jobs within a specified window.

Which solution will meet these requirements MOST cost-effectively?

 

A.

Create an Oracle database on an Amazon EC2 instance with 5 TB of General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) storage. Build an automation script to perform a storage scaling operation to increase IOPS to 18,000 before the start of the scheduled month-end activity. Perform another storage scaling operation to decrease IOPS to 8,000 after the batch jobs are complete.

 

B. 

Create an Oracle database on an Amazon EC2 instance with 4 TB of Provisioned IOPS (PIOPS) SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) storage. Build an automation script to perform a storage scaling operation to increase PIOPS to 18,000 before the start of the scheduled month-end activity. Perform another storage scaling operation to decrease PIOPS to 8,000 after the batch jobs are complete.

 

C. 

Create an Amazon RDS for Oracle DB instance with a Multi-AZ deployment and 5 TB of General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) storage. Build an automation script to perform a storage scaling operation to increase IOPS to 18,000 before the start of the scheduled month-end activity. Perform another storage scaling operation to decrease IOPS to 8,000 after the batch jobs are complete.

 

D. 

Create an Amazon RDS for Oracle DB instance with a Multi-AZ deployment and 4 TB of Provisioned IOPS (PIOPS) SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) storage. Build an automation script to perform a storage scaling operation to increase PIOPS to 18,000 before the start of the scheduled month-end activity. Perform another storage scaling operation to decrease PIOPS to 8,000 after the batch jobs are complete.

 

This solution lets the DB instance scale Provisioned IOPS independently of storage, raising it to the required 18,000 IOPS for the month-end batch window while keeping the provisioned storage at 4 TB. This solution is also highly available.

For more information about RDS storage options and features, see Amazon RDS DB instance storage.

For more information about RDS storage scaling, see Increasing DB instance storage capacity.
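
The automation script in option D could be as small as two scheduled boto3 calls; the instance identifier is hypothetical.

import boto3

rds = boto3.client("rds")

def set_piops(iops):
    # Scale Provisioned IOPS independently of the 4 TB storage size.
    rds.modify_db_instance(
        DBInstanceIdentifier="oracle-prod",
        Iops=iops,
        ApplyImmediately=True,
    )

# Before the month-end batch window:
set_piops(18000)
# After the batch jobs complete:
set_piops(8000)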

 

Question 308

 

A company is using a custom application to manage worker pay. This application is using an Amazon Aurora MySQL-Compatible Edition database. The application users are associated with groups and are stored in a table that is named user_groups. This table has an automatically incremented ID (AUTO_INCREMENT) field as the primary key. The company's human resources (HR) director wants to receive email notification immediately every time a user is added to the administrators group.

Which solution will meet these requirements with the LEAST effort?

 

A. 

Create a new_admins table that contains the ID of new administrative users. Create an AFTER INSERT event on the user_groups table that inserts a row in the new_admins table when administrative users are added to the user_groups table. Create an AWS Lambda function that reads the new_admins table. If the table is not empty, the function sends an email to the HR director through Amazon Simple Notification Service (Amazon SNS) and clears the new_admins table. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke the Lambda function every 5 minutes.

 

B. 

Activate Database Activity Streams in Aurora. Create an AWS Lambda function that polls Amazon Kinesis Data Streams for ROLE events for the administrator group. Configure the function to send an email message to the HR director through Amazon Simple Email Service (Amazon SES) when the function finds a ROLE event. Create an IAM role that allows the function to invoke the SES API. Grant the INVOKE SES API role to the Lambda function.

 

This solution uses a single Lambda function that notifies the HR director immediately. The built-in Kinesis event source starts the Lambda function automatically when the data stream is populated.

For more information about Database Activity Streams, see Using Database Activity Streams with Amazon Aurora.

For more information about the integration of Lambda with Kinesis, see Using AWS Lambda with Amazon Kinesis.

 

C. 

Move the user and group entities from Aurora to Amazon Neptune. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the HR director's email address to the topic. Set a Neptune EventSubscription resource with an EventCategory filter on added edges to the administrators group node. Set the SnsTopicArn property to the topic ARN.

 

D. 

Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the HR director's email address to the topic. Create a last_increment table that contains the current AUTO_INCREMENT value of the user_groups table. Create an AWS Lambda function that compares the values and updates the last_increment value. If a difference is detected, Lambda publishes a message to the SNS topic. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke the Lambda function every 5 minutes.

 

Question 309

 

A company has a 15 TB PostgreSQL operational database for its online user application. The database is running on a single on-premises server that is unstable.

The company wants to move the database to Amazon RDS. A database specialist must choose a migration solution that can be completed with minimum downtime. The solution also must not result in data loss.

Which solution meets these requirements?

 

A.

Back up the database by using the pg_dump utility. Upload the dump file to Amazon S3. Use the pg_restore utility to restore the database to a new Amazon RDS for PostgreSQL DB instance. Configure the appropriate security groups to allow database access. Modify the application's database connection pool to reference the new endpoint.

 

B. 

Back up the database by using the pg_dump utility. Compress the dump file, and upload it to Amazon S3. Use the pg_restore utility to restore the database to a new Amazon RDS for PostgreSQL DB instance. Begin a replication job to apply data changes from the point of the initial backup. Configure the appropriate security groups to allow database access. Modify the application's database connection pool to reference the new endpoint when the replication lag is zero.

 

This solution overcomes the shortcoming of using pg_dump and pg_restore alone: on their own, those utilities do not keep the data synchronized before the cutover. If you establish a replication process after the initial copy and setup of the target database, you can send the data that changed since the initial copy to the target database and keep the data in sync until the cutover time. At cutover, you can take the source application offline, and any remaining data will synchronize to the target. Then you can take down the source database and repoint the application to the target database.

For more information about PostgreSQL logical replication, see Migrating PostgreSQL from on-premises or Amazon EC2 to Amazon RDS using logical replication.

For more information about the migration of PostgreSQL databases from on premises to AWS, see Migrate an on-premises PostgreSQL database to Amazon RDS for PostgreSQL.

For more information about how to import data into Amazon RDS for PostgreSQL, see Importing a PostgreSQL database from an Amazon EC2 instance.

 

C

Use AWS Database Migration Service (AWS DMS) to initiate a migration of the data from the on-premises database to a new Amazon RDS for PostgreSQL DB instance. Modify the application’s database connection pool to reference the new endpoint. When the migration is complete, configure the appropriate security groups to allow database access.

 

D

Use AWS Database Migration Service (AWS DMS) to initiate a migration of the data from the on-premises database to a new Amazon Aurora PostgreSQL DB cluster. Modify the application's database connection pool to reference the new endpoint. When the migration is complete, configure the appropriate security groups to allow database access.

 

Question 310

 

A company is using Amazon RDS for MySQL DB instances for development. The company wants to save money on infrequently used DB instances. After the company stops some DB instances, the company's bill is lower the next month. However, the bill is still higher than the company expected.

Which actions should the company take to make additional reductions in the costs of its DB instances? (Select TWO.)

 

A

Delete databases from the DB instances to reduce storage space.

 

B

Take a snapshot, and terminate the unused DB instances.

 

If you use a database only periodically, you can reduce costs if you store a snapshot on Amazon S3, terminate the DB instance, and restore from the snapshot when the database is needed.

For more information about how to restore a database from a snapshot, see Tutorial: Restore a DB instance from a DB snapshot.

For more information about the deletion of DB instances, see Deleting a DB instance.

 

C

Schedule an AWS Lambda function to stop the DB instances after business hours and to start the DB instances at the beginning of the workday.

 

You can reduce costs by shutting down databases when you do not need them. You can develop automated microservices through Lambda to shut down databases that should not be running.

For more information about how to stop a database temporarily, see Stopping an Amazon RDS DB instance temporarily.

For more information about how to stop and start a database automatically to help with cost savings, see Implementing DB Instance Stop and Start in Amazon RDS.

For more information about how to invoke Lambda functions with EventBridge (CloudWatch Events), see Using AWS Lambda with Amazon CloudWatch Events.
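To make this concrete, here is a minimal sketch of such a Lambda function, assuming an illustrative tagging scheme (a Schedule tag with the value office-hours) that marks the development instances to stop; these names are assumptions, not part of the question:

```python
import boto3

rds = boto3.client("rds")

def lambda_handler(event, context):
    """Stop tagged development DB instances after business hours.
    The Schedule/office-hours tag is an illustrative assumption."""
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
        if {"Key": "Schedule", "Value": "office-hours"} in tags \
                and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```

An EventBridge (CloudWatch Events) schedule rule would invoke this function in the evening, and a matching function would start the instances at the beginning of the workday.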

D

Create a MySQL stored procedure to stop the DB instances that have low usage. Use Amazon EventBridge (Amazon CloudWatch Events) to periodically run the stored procedure.

 

E

Configure Amazon RDS for MySQL to automatically scale down the DB instance size when usage is low.

 

Question 311

 

A company has contracted with a third-party vendor to monitor the logs of an Amazon Aurora MySQL-Compatible Edition database to provide performance recommendations. The company's policy restricts vendors from directly accessing the database. The company's database specialist must provide the logs to the vendor in near-real time without providing access to the AWS account or the database itself.

Which solution will meet these requirements with the LEAST operational overhead?

 

A

Provide the vendor with read-only access to the underlying operating system of the database so that the vendor can extract the logs.

 

B

Extract the logs by using an AWS Lambda function. Compress the logs. Send the logs through email to the vendor every hour.

 

C

Enable logging in the database. Copy the logs directly from the Aurora log storage location to an Amazon S3 bucket in the vendor's AWS account.

 

D

Enable logging in the database. Publish the logs to Amazon CloudWatch Logs. Create a subscription filter to send the logs to an Amazon Kinesis data stream in the vendor's AWS account.

 

The vendor can collaborate with an owner of a different AWS account, the database specialist, to send the log events to the vendor's AWS resources for monitoring and analysis.

For more information about cross-account data sharing, see Cross-account log data sharing with subscriptions.

For more information about how to publish Aurora logs to CloudWatch Logs, see Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs.
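As a rough sketch, once the vendor has created a cross-account CloudWatch Logs destination backed by its Kinesis data stream and granted this account permission to subscribe, the subscription filter might be created as follows (the log group name and destination ARN are placeholders):

```python
import boto3

logs = boto3.client("logs")

# Forward every event from the published Aurora log group to the vendor's
# cross-account destination; names and ARNs below are placeholders.
logs.put_subscription_filter(
    logGroupName="/aws/rds/cluster/my-aurora-cluster/error",
    filterName="vendor-log-feed",
    filterPattern="",  # an empty pattern matches all log events
    destinationArn="arn:aws:logs:us-east-1:999999999999:destination:vendorLogs",
)
```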

 

Question 312

 

A company manages and runs large-scale conferences for multinational associations. A presenter wants to conduct live surveys to improve audience interaction. The survey data is anonymous and is needed only for the duration of a presentation. A total of 3,000 people will attend the presenter's session. The company needs to build a robust platform to deliver the survey and collect the results. The survey responses must be processed in near-real time. The results must be made available to the presenter as soon as the poll concludes. The solution must scale automatically when the surveys begin.

Which database solution will meet these requirements MOST cost-effectively?

 

A

Amazon DynamoDB with on-demand capacity mode

 

The description of the environment does not imply that a rigid relational database schema is required. The flexible nature of a DynamoDB table will work well for this use case. In addition, DynamoDB scales quickly and can handle millions of transactions each second.

For more information about DynamoDB features, see Amazon DynamoDB features.

For more information about DynamoDB use cases, see How to determine if Amazon DynamoDB is appropriate for your needs, and then plan your migration.

 

B

Amazon ElastiCache for Redis in cluster mode

 

C

Amazon Neptune on burstable instance types

 

D

Amazon DynamoDB with provisioned capacity mode with RCU set to 300 and WCU set to 100

 

Question 313

 

A large company requires that all of its Amazon RDS DB instances run in subnets with no routes to the internet. The company must constantly audit this requirement for violations and must maintain the desired state for compliance.

Which solution will meet these requirements with the LEAST effort?

 

A

Use an AWS Config managed rule for public access. Automatically perform remediation by using an AWS Systems Manager Automation runbook.

 

AWS Config has multiple managed rules to monitor best practices in architecture. The rds-instance-public-access-check managed rule meets the requirements in this scenario. Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources.

For more information about AWS Config managed rules, see AWS Config Managed Rules.

For more information about Systems Manager Automation runbooks, see AWS Systems Manager Automation.
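For illustration, enabling the managed rule from code could look like the following sketch; attaching the Systems Manager Automation remediation (for example, with put_remediation_configurations) would be a separate step:

```python
import boto3

config = boto3.client("config")

# Enable the AWS managed rule that flags publicly accessible RDS instances.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "rds-instance-public-access-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "RDS_INSTANCE_PUBLIC_ACCESS_CHECK",
        },
    }
)
```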

 

B

Create VPC flow logs for the subnets to monitor internet traffic on each network interface. Publish the flow logs to Amazon CloudWatch Logs. Configure an alarm from a metric filter. Use Amazon Simple Notification Service (Amazon SNS) to initiate an AWS Lambda function for remediation.

 

C

Publish the RDS DB instance audit and error logs to Amazon CloudWatch Logs to monitor each subnet's route table for compliance. Configure an alarm from a metric filter. Automatically perform remediation by using an AWS Systems Manager Automation runbook.

 

D

Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor each subnet's route table for compliance. Use Amazon Simple Notification Service (Amazon SNS) to initiate an AWS Lambda function for remediation.

 

Question 314

 

A company is using Amazon RDS for MySQL to store its application data. The company is growing quickly and wants to migrate the database to Amazon Aurora MySQL-Compatible Edition with the least possible effort.

Which solution will meet these requirements?

 

A

Create a dump file of the RDS for MySQL database. Create a new Aurora MySQL-Compatible DB cluster. Use the mysqldump utility to restore the dump file. Point the application to the new Aurora endpoint.

 

B

Use AWS Database Migration Service (AWS DMS) to migrate the data from the RDS for MySQL database to a new Aurora MySQL-Compatible DB cluster. When the migration is complete, point the application to the new Aurora endpoint.

 

C

Set up external replication between the RDS for MySQL DB instance and a new Aurora MySQL-Compatible DB cluster. When the replication is complete, point the application to the new Aurora endpoint.

 

D

Create an Aurora Replica from the existing RDS for MySQL DB instance. When the replication is complete, promote the replica and point the application to the new Aurora endpoint.

 

This solution is the recommended way to migrate from RDS for MySQL to Aurora MySQL-Compatible.

For more information about migration in this manner, see Migrating data from a MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica.

 

 

 

Question 315

 

A company's database specialist is supporting a new analytics solution that uses a property graph and the Apache TinkerPop Gremlin query language on an Amazon Neptune DB cluster. Every day, the company's applications generate and store more than 1 million new records in an Amazon S3 bucket in Apache Parquet format. The database specialist must import these records into the database. The analytics solution runs queries continuously, so the database specialist must import the data while minimizing the performance impact on the Neptune DB instance.

How should the database specialist load the data to meet these requirements?

 

A

Convert the data files into the Neptune .csv file format. Start the Neptune loader by sending a request through HTTP to the Neptune DB instance.

 

The Neptune bulk loader uses the Gremlin .csv format to import property-graph data into Neptune.

For more information about data formats in Neptune, see Load Data Formats.
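As a sketch, starting the bulk loader is an HTTP POST to the cluster's /loader endpoint. The endpoint, bucket, and role ARN below are placeholders, IAM authentication is assumed to be off, and a low parallelism setting is chosen to limit the impact on the running queries:

```python
import requests

response = requests.post(
    "https://my-neptune-cluster:8182/loader",  # placeholder endpoint
    json={
        "source": "s3://my-bucket/gremlin-csv/",
        "format": "csv",                       # Gremlin property-graph CSV
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "FALSE",
        "parallelism": "LOW",                  # minimize impact on live queries
    },
    timeout=30,
)
print(response.json())  # contains a loadId that can be polled for status
```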

 

B

Convert the data files into the .orc file format. Run parallel INSERT statements from the Gremlin Console.

 

C

Convert the data files into the .rdf/.xml file format. Start the Neptune loader by sending a request through HTTP to the Neptune DB instance.

 

D

Create a custom AWS Glue ETL script to import the data with addVertex and addEdge steps into the Neptune DB instance.

 

Question 316

 

A company's database specialist wants to create a copy of an encrypted Amazon RDS DB instance. The company will use this copy to create test DB instances for developers. All the test DB instances must be encrypted, must use a custom parameter group and a custom option group, and must have a backup retention period of 2 days. The database specialist must choose a solution that fulfills these obligations and minimizes the need for future maintenance.

Which solution meets these requirements?

 

A

Move an automated DB snapshot into a new Amazon S3 bucket. Create an AWS CloudFormation template that uses the snapshot to launch a new DB instance with the required parameter group, option group, encryption, and backup retention period. Grant the developers permission to use the template to create a stack.

 

B

Create a dump file of the DB instance. When the developers request a DB instance, use the dump file to create a new encrypted test DB instance with the required parameter group, option group, and backup retention period.

 

C

Create a manual snapshot of the DB instance. Create an AWS CloudFormation template that uses the snapshot to launch a new DB instance with the required parameter group, option group, encryption, and backup retention period. Grant the developers permission to use the template to create a stack.

 

To help alleviate errors from manual processes, the database specialist should use infrastructure as code (IaC) as the guiding design principle. The database specialist can meet all the requirements with a CloudFormation template that uses a database snapshot to create the test database, along with the required encryption, parameter group, option group, and backup retention period.

For more information about how to copy a snapshot, see Copying a snapshot.

For more information about CloudFormation, see What is AWS CloudFormation?

 

D

Create a custom SQL script to export the schema and data from the DB instance. When the developers request a DB instance, use the SQL script to create a new encrypted test DB instance with the required parameter group, option group, and backup retention period.

 

Question 317

 

A company has used PostgreSQL on premises for the past 10 years. The size of the company's database has grown to more than 40 TB, including one table that is larger than 6 TB. Performance issues are affecting the company's ability to meet service level agreements (SLAs). The company also faces challenges with its backup and restore capabilities. The company has a 1 Gbps AWS Direct Connect connection in its data center with more than half of the bandwidth available. The company has an upcoming 48-hour maintenance window and wants to complete a migration from its on-premises PostgreSQL server to Amazon Aurora PostgreSQL-Compatible Edition by the end of the maintenance window.

What should the company do to meet these requirements?

 

A

Use the PostgreSQL COPY command to write each table to a flat file on Amazon S3 when the maintenance window begins. Create a new Aurora PostgreSQL-Compatible database, and duplicate the schema from the on-premises server. Load each table from the S3 bucket. Cut over to the new database.

 

B

Use Amazon RDS to take a snapshot of the database. Save the snapshot to an Amazon S3 bucket during the maintenance window. Restore the snapshot to a new Aurora PostgreSQL-Compatible DB instance. Cut over to the new database.

 

C

During the maintenance window, use the pg_dump utility with multi-threading and compression enabled to export the data from PostgreSQL to an AWS Snowball Edge Storage Optimized device. Return the Snowball Edge device to AWS. Use the pg_restore utility to import the data from Amazon S3 into a new Aurora PostgreSQL-Compatible DB instance.

 

D

Before the maintenance window, use the AWS Schema Conversion Tool to extract the data locally and ship the data to AWS in an AWS Snowball Edge Storage Optimized device. Configure an AWS Database Migration Service (AWS DMS) task to move the data into a new Aurora PostgreSQL-Compatible DB instance. Migrate ongoing changes by using change data capture (CDC). Cut over to the new database during the maintenance window.

 

This solution is appropriate for the 48-hour maintenance window, storage requirements, and bandwidth constraints.

For more information about how to use AWS DMS to migrate PostgreSQL databases, see Using a PostgreSQL database as an AWS DMS source.

 

Question 318

 

A database specialist is reviewing the design of a two-tier web application for a company. The web tier runs on Amazon EC2 instances in public subnets and connects to an Amazon RDS DB instance that is publicly accessible. The company wants to follow AWS security best practices. The company also has new security guidelines. The new guidelines prohibit public access to databases and allow connections from instances in the web tier only. The company's current design does not meet these new security guidelines.

The database specialist must create a solution that meets the new security guidelines and minimizes downtime for the database.

Which combination of steps should the database specialist take to meet these requirements? (Select TWO.)

 

A

Terminate the DB instance. Create a new DB instance without public access.

 

B

Modify the DB instance by removing public access.

 

You can remove public access from RDS DB instances with zero downtime.

For more information about the modification of RDS DB instances, see Modifying an Amazon RDS DB instance.
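A minimal sketch of that modification, with a placeholder instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Removing public accessibility is an online change; no instance restart
# is required for this attribute.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # placeholder
    PubliclyAccessible=False,
    ApplyImmediately=True,
)
```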

 

C

Modify the RDS VPC security group rules to allow access from the web server subnets.

 

D

Modify the RDS VPC security group rules to allow access from the elastic network interfaces of the web servers.

 

This solution will limit database access to the IP addresses and ports that are specified in the security group rules.

For more information about use cases for security groups, see Security group rules for different use cases.

 

E

Modify the RDS VPC security group rules to deny access to public IP address ranges.

 

Question 319

 

A software development company is planning a product launch. Before the launch, the company is scaling up its production Amazon Aurora DB cluster to a much larger instance. The new instance inherits the custom parameters that were previously set in the DB parameter group.

On the day of the launch, the application shows error messages that indicate that the database is unavailable. Amazon CloudWatch reveals that the maximum number of connections is being limited to 500.

Which solution will solve this problem with the LEAST amount of downtime?

 

A

Scale up the instance to the largest instance size possible. Deploy multiple read replicas.

 

B

Set the value of the max_connections parameter in the default DB parameter group to 1,000. Apply the change immediately.

 

C

Set the value of the max_connections parameter in the custom DB parameter group to a formula that uses the DBInstanceClassMemory variable. Apply the change immediately.

 

When the company modified the parameters in preparation for its launch, the company set the connections to a fixed value. When the company scaled up the instance, the max_connections parameter did not scale because it was set to a fixed size rather than to a formula. If the company sets this parameter to a formula, the company solves the problem and ensures that the value scales if the company increases the instance size again.

For more information about how to work with DB parameter groups and DB cluster parameter groups, see Working with DB parameter groups and DB cluster parameter groups.

For more information about DB parameter formulas, see DB parameter formulas.

For more information about best practices for Aurora database configuration, see Best practices for Amazon Aurora MySQL database configuration.

For more information about how to manage performance and scaling of Aurora, see Managing performance and scaling for Amazon Aurora MySQL.
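For illustration, setting the parameter to a memory-based formula in the custom parameter group might look like this sketch; the group name and the divisor are assumptions, so check the default formula for your engine and version:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-custom-aurora-params",  # placeholder
    Parameters=[
        {
            "ParameterName": "max_connections",
            # The formula scales with the instance class instead of being fixed.
            "ParameterValue": "{DBInstanceClassMemory/12582880}",
            "ApplyMethod": "immediate",
        }
    ],
)
```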

 

D

Deploy multiple read replicas. Increase the application's connection pooling settings to handle the additional read traffic.

 

Question 320

 

A user is running an Amazon RDS for MySQL DB instance in the us-east-1 Region with a read replica in the eu-west-1 Region. The ReplicaLag metric in Amazon CloudWatch indicates a replication lag of up to 5 hours.

What are the likely causes of this replication lag? (Select TWO.)

 

A

There is a network outage, and replication is not active.

 

B

There is a long-running query on the primary DB instance.

 

This issue might be temporary. The replication lag might return to normal after the long-running job is complete.

For more information about how to monitor replication lag, see Working with read replicas.

For more information about how to troubleshoot replication lag, see How can I troubleshoot high replica lag with Amazon RDS for MySQL?

 

C

Queries on the primary DB instance are running in a serial sequence instead of with parallel processing.

 

D

The long_query_time value is too high on the primary DB instance.

 

E

The read replica DB instance is a smaller size than the primary DB instance.

 

A smaller read replica might not have sufficient resources to keep up with a larger primary DB instance.

For more information about how to monitor replication lag, see Working with read replicas.

For more information about how to troubleshoot replication lag, see How can I troubleshoot high replica lag with Amazon RDS for MySQL?

 

 

Question 321

 

A company is developing an ecommerce application on AWS. The database backend is an Amazon RDS for MySQL Multi-AZ DB instance. During development, the company has turned off automatic scaling of storage. The company's database specialist receives a notification that the FreeStorageSpace metric has reached a defined threshold. The database specialist must increase the available DB instance storage.

Which solution will meet this requirement with the LEAST downtime?

 

A

Create a manual snapshot of the DB instance. From the snapshot, create a new DB instance that has more storage capacity.

 

B

Create a new DB instance that has more storage capacity. Copy all data and objects to the new DB instance by using AWS Database Migration Service (AWS DMS).

 

C

Increase the DB instance storage capacity by using the AWS Management Console, Amazon RDS API, or AWS CLI.

 

In most cases, the scaling of storage does not require any outage and does not degrade performance of the DB instance.

For more information about how to increase DB instance storage capacity, see Increasing DB instance storage capacity.
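A minimal sketch of the storage increase through the API (the identifier and size are placeholders; the new size must be larger than the current allocation):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # placeholder
    AllocatedStorage=500,                   # new size in GiB
    ApplyImmediately=True,
)
```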

 

D

Change the DB instance status to storage-optimization by using the AWS CLI to automatically scale the storage capacity.

 

Question 322

 

A database administrator needs to enable IAM authentication on an existing Amazon Neptune cluster. The database administrator has created new IAM users that have the proper IAM policies to access the Neptune cluster. The database administrator has also distributed IAM credentials to the database users.

What should the database administrator do so that the users can access the Neptune cluster with the IAM credentials?

 

A

Add the users' IAM credentials to the Gremlin Console configuration file. Instruct the users to access the cluster by using the Gremlin Console.

 

B

Create an IAM role with the relevant permissions. Assign the role to the users. Instruct the users to sign in to the cluster by using the role.

 

C

Add the users' IAM credentials to their default credential profiles file. Instruct the users to access the cluster by using the Gremlin Console.

 

You can authenticate to a Neptune DB instance or DB cluster by using IAM database authentication. When IAM database authentication is enabled, each request must be signed with AWS Signature Version 4. AWS Signature Version 4 is the process to add authentication information to AWS requests.

For security, all requests to Neptune DB clusters with IAM authentication enabled must be signed with an access key. This key consists of an access key ID and secret access key. The authentication is managed externally through IAM policies. Neptune authenticates on connection. For WebSocket connections, Neptune verifies the permissions periodically to ensure that the user still has access.

All HTTP requests (even the HTTP request that is used to create a WebSocket session) must have Signature Version 4 signing when IAM authentication is enabled on a Neptune cluster.

For more information about IAM credentials in Neptune, see Identity and Access Management in Amazon Neptune.

For more information about the integration of IAM credentials with the Gremlin Console, see Connecting to Neptune Using Java and Gremlin with Signature Version 4 Signing.
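As a sketch of what Signature Version 4 signing looks like in practice, the following signs a plain HTTP request to a Neptune endpoint with the caller's IAM credentials (the endpoint is a placeholder; neptune-db is the service name used when signing Neptune requests):

```python
import botocore.session
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

endpoint = "https://my-cluster.cluster-abc.us-east-1.neptune.amazonaws.com:8182/status"

# Sign the request with the caller's IAM credentials.
creds = botocore.session.get_session().get_credentials().get_frozen_credentials()
request = AWSRequest(method="GET", url=endpoint)
SigV4Auth(creds, "neptune-db", "us-east-1").add_auth(request)

# Send the request with the generated SigV4 headers attached.
response = requests.get(endpoint, headers=dict(request.headers))
print(response.text)
```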

 

D

Configure Neptune so that users obtain an AWS Security Token Service (AWS STS) token when they send the IAM access key and secret key as headers to the Neptune API endpoint. Instruct the users to sign in to the cluster by using the token.

 

Question 323

 

A company in a highly regulated industry is evaluating Amazon RDS for a mission-critical database deployment. The database must be encrypted and must receive patches during a maintenance window that the company agrees upon with a regulator.

Which combination of configurations will meet these requirements? (Select TWO.)

 

A

An Amazon RDS multi-Region deployment

 

B

An Amazon RDS Multi-AZ deployment

 

In the event of a planned or unplanned outage of your DB instance, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have enabled Multi-AZ.

For more information about Multi-AZ maintenance, see Maintaining a DB instance.

 

C

A default maintenance window

 

D

A user-defined maintenance window

 

The company can consult with the regulator about the maintenance window and then select the maintenance window during the creation of the RDS DB instance. The company can manage that window as needs change.

For more information about the RDS maintenance window, see The Amazon RDS maintenance window.
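A minimal sketch of setting the agreed-upon window on an existing instance (the identifier and window are placeholders; the window is expressed in UTC as ddd:hh24:mi-ddd:hh24:mi):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="regulated-db",              # placeholder
    PreferredMaintenanceWindow="sun:02:00-sun:03:00",
)
```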

 

E

A maintenance window that is automatically chosen based on resource utilization

 

Question 324

 

An operations team wants to identify and reduce its number of inactive Amazon RDS DB instances. The operations team defines a DB instance as inactive if the DB instance has not received connections for a prolonged period. The operations team needs a solution that runs daily with automatic notifications whenever a DB instance becomes inactive.

Which solution will meet these requirements?

 

A

Use AWS Config. Apply the Operational Best Practices for Databases Services conformance pack. Configure alerts from AWS Config to send messages to an Amazon Simple Notification Service (Amazon SNS) topic.

 

B

Schedule an AWS Lambda function to refresh the AWS Trusted Advisor check for idle RDS DB instances daily. Create an Amazon EventBridge (Amazon CloudWatch Events) rule by using Trusted Advisor as the event source and by using an Amazon Simple Notification Service (Amazon SNS) topic as the target.

 

The Trusted Advisor idle instance report will show idle instances. This identification of underutilized instances will show which DB instances are candidates for removal and will show potential cost savings.

For more information about integration of Trusted Advisor events with EventBridge (CloudWatch Events), see Events from AWS services.

For more information about Trusted Advisor check results within EventBridge (CloudWatch Events), see Monitoring Trusted Advisor check results with Amazon CloudWatch Events.

For more information about Trusted Advisor best practices, see AWS Trusted Advisor best practice checklist.
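As a rough sketch, the EventBridge side of this solution could be wired up as follows; the rule name, check-name filter, and topic ARN are placeholders:

```python
import json
import boto3

events = boto3.client("events")

# Match Trusted Advisor refresh notifications for the idle RDS check.
events.put_rule(
    Name="idle-rds-notifications",
    EventPattern=json.dumps({
        "source": ["aws.trustedadvisor"],
        "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
        "detail": {"check-name": ["Amazon RDS Idle DB Instances"]},
    }),
)

# Deliver matching events to the operations team's SNS topic.
events.put_targets(
    Rule="idle-rds-notifications",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}],
)
```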

 

C

Use native tools to write platform-appropriate queries to gather activity statistics from each DB instance and output the results to a file in an Amazon S3 bucket. Provide the operations team with a direct link to the output file in Amazon S3.

 

D

Subscribe the operations team to the AWS Trusted Advisor email notifications.

 

Question 325

 

A database specialist is building an order shipment application that uses an Amazon DynamoDB table. The database specialist selected OrderID as the table's partition key. Every new item that is added to this table will have a sequential and unique OrderID attribute and a ShipmentStatus attribute that is set to OPEN. When orders are shipped, the item's ShipmentStatus changes to CLOSED.

Periodically, the application must retrieve all OPEN orders and send them to another application for further processing. The database specialist created a global secondary index (GSI) with ShipmentStatus as the partition key for this retrieval pattern. As the business grows, the application layer increasingly receives a ProvisionedThroughputExceededException error during the creation and modification of orders.

How should the database specialist resolve this error?

 

A

Change the application to generate random OrderID attributes instead of sequential numbers. Make order updates to the GSI.

 

B

Increase the provisioned write capacity of the GSI. Use a Boolean data type for the ShipmentStatus attribute.

 

C

Increase the provisioned write capacity on the DynamoDB table. Reduce the write capacity of the GSI.

 

D

Delete the GSI. Use a table scan with a filter expression to return the OPEN orders. Periodically move CLOSED orders to a separate table.

 

This question tests your ability to recognize a hot key that exceeds the maximum write throughput of a DynamoDB partition. Use of such a low-cardinality attribute for the partition key of a GSI is a poor design choice that will lead to a hot partition and poor performance. Periodic movement of the old data to a separate table reduces the impact of scanning the table, which should resolve the error.

For more information about how to distribute workloads efficiently, see Sharding Using Calculated Suffixes.

For more information about how to choose the right DynamoDB partition key, see Choosing the Right DynamoDB Partition Key.

For more information about best practices for good sort key design, see Best Practices for Using Sort Keys to Organize Data.
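To illustrate the write-sharding idea from the linked guidance (an alternative way to relieve a hot GSI partition key), a calculated suffix spreads the low-cardinality status value across several partitions; the shard count here is an arbitrary assumption:

```python
import hashlib

NUM_SHARDS = 10  # assumption; size this to your write volume

def gsi_partition_key(order_id: str, status: str) -> str:
    """Append a calculated suffix so writes spread across NUM_SHARDS
    GSI partitions instead of one hot "OPEN" partition."""
    suffix = int(hashlib.md5(order_id.encode()).hexdigest(), 16) % NUM_SHARDS
    return f"{status}#{suffix}"

print(gsi_partition_key("100042", "OPEN"))  # for example, "OPEN#7"
```

Reading all OPEN orders then means querying each of the shard keys and merging the results.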

 

 

Question 326

 

You are the database lead on a medical application for a large healthcare system. This application is mission-critical and related to patients' well-being. The main requirement is that the RDS database be in an active-active configuration with no downtime. Which database can meet these requirements?

 

A.

Aurora Global Database

 

B.

RDS Multi-AZ

 

C.

DynamoDB with global tables.

 

D.

Aurora Multi-Master cluster

 

With an active-active workload, you perform read and write operations to all the DB instances at the same time. In this configuration, you typically segment the workload so that the different DB instances don't modify the same underlying data at the same time; doing so minimizes the chance of write conflicts. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html

 

Question 327

 

You have a DynamoDB database that stores user profiling information for a popular online game. This database is crucial to operations, and you configure continuous backups and create a copy of this database in your dev environment. After testing a new script, you realize that you have run the script against the prod environment rather than the dev environment. This action will have an immediate and adverse effect on the prod environment. What steps can you take to quickly restore the prod database?

 

A.   

Use point-in-time recovery to restore the prod database to the desired point prior to the script running.

 

Amazon DynamoDB point-in-time recovery (PITR) provides continuous backups of your DynamoDB table data. You can restore a table to a point in time using the DynamoDB console or the AWS Command Line Interface (AWS CLI).

 https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.Tutorial.html
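A minimal sketch of the restore call; PITR always restores to a new table, so the table names and timestamp below are placeholders, and the application (or the affected items) must be repointed or copied back afterward:

```python
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.restore_table_to_point_in_time(
    SourceTableName="prod-user-profiles",           # placeholder
    TargetTableName="prod-user-profiles-restored",  # placeholder
    # A moment just before the script was run.
    RestoreDateTime=datetime(2023, 4, 28, 9, 55, tzinfo=timezone.utc),
)
```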

 

B.     

Restore the prod database from the latest snapshot.

 

C.     

Use point-in-time recovery to restore the prod database to the previous day.

 

D.    

Contact Amazon as DynamoDB is a managed service.

 

 

Question 328

 

A development team with little AWS experience has deployed their application using Elastic Beanstalk. This included configuring an RDS MySQL database within Elastic Beanstalk. Initially, this is a test environment, and they decided to delete the environment, make changes to the application, and create a new environment. But in deleting the environment, they lost the data from their database. What best practice step can they take the next time they create the environment?

 

A.     

Decouple the database from Elastic Beanstalk

 

It is best practice to decouple a database from the Elastic Beanstalk environment. The loosely coupled database will remain even after the Elastic Beanstalk environment is deleted. https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

 

B.     

Configure Multi-AZ RDS

 

C.     

Turn on continuous backups.

 

D.    

Create a Read Replica

 

 

Question 329

 

A company is developing an application for consumer surveys. Survey information from handheld devices will be uploaded into an RDS MySQL database. Depending on the turnout and the size of the cities where the surveys take place, there is concern about an unpredictable volume of data exceeding the allocated storage of the database. You have decided to enable storage autoscaling for an Amazon RDS DB instance. When Amazon RDS starts a storage modification for an autoscaling-enabled DB instance, how will the storage increments be allocated?

 

A.

The additional storage is in increments of whichever of the following is greater:

·         10 GiB

·         10 percent of currently allocated storage

·         Storage growth prediction for 7 hours based on the FreeStorageSpace metrics change in the past hour.

 

B.

An additional 5 GiB will be allocated.

 

C.

The additional storage is in increments of whichever of the following is greater:

·         5 GiB

·         10% of currently allocated storage

·         Storage growth prediction for 7 hours based on the FreeStorageSpace metrics change in the past hour.

 

With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space it automatically scales up your storage. Amazon RDS starts a storage modification for an autoscaling-enabled DB instance when these factors apply:

 

·         Free available space is less than 10% of the allocated storage.

·         The low-storage condition lasts at least five minutes.

·         At least six hours have passed since the last storage modification.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling

 

D.

The additional storage is in increments of whichever of the following is greater:

·         1 GiB

·         5% of currently allocated storage

·         Storage growth prediction for 7 hours based on the FreeStorageSpace metrics change in the past hour.

 

Question 330

 

You are designing a DynamoDB table and have decided to configure provisioned throughput yourself using strict guidelines. You configure the table in provisioned throughput mode with 50 RCU and 50 WCU. How much data will be read from and written to the table each second?

 

A.

200 KB for strongly consistent read operations, 400 KB for eventually consistent read operations, and 100 KB for write operations.

 

B.

200 KB for strongly consistent read operations, 400 KB for eventually consistent read operations, and 50 KB for write operations.

 

One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB. If you need to read an item that is larger than 4 KB, DynamoDB must consume additional read capacity units. The total number of read capacity units required depends on the item size, and whether you want an eventually consistent or strongly consistent read. For example, if your item size is 8 KB, you require 2 read capacity units to sustain one strongly consistent read per second, 1 read capacity unit if you choose eventually consistent reads, or 4 read capacity units for a transactional read request.

One write capacity unit represents one write per second for an item up to 1 KB in size. If you need to write an item that is larger than 1 KB, DynamoDB must consume additional write capacity units. Transactional write requests require 2 write capacity units to perform one write per second for items up to 1 KB. The total number of write capacity units required depends on the item size. For example, if your item size is 2 KB, you require 2 write capacity units to sustain one write request per second or 4 write capacity units for a transactional write request.
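The arithmetic behind answer B, worked as a quick sketch:

```python
RCU = 50
WCU = 50

strongly_consistent = RCU * 4        # one 4 KB read per RCU per second  -> 200 KB
eventually_consistent = RCU * 2 * 4  # two 4 KB reads per RCU per second -> 400 KB
writes = WCU * 1                     # one 1 KB write per WCU per second -> 50 KB

print(strongly_consistent, eventually_consistent, writes)  # 200 400 50 (KB/s)
```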

 

C.

100 KB for strongly consistent read operations, 200 KB for eventually consistent read operations, and 50 KB for write operations.

 

D.

400 KB for strongly consistent read operations, 800 KB for eventually consistent read operations, and 50 KB for write operations.

 

 

Question 331

 

You have been assigned to create a DynamoDB table for a new gamified application for an online education company. Before creating the database, you want to design and create secondary indexes. Which concepts apply for secondary indexes?

 

A.

In general, you should use global secondary indexes rather than local secondary indexes.

 

In general, you should use global secondary indexes rather than local secondary indexes. The exception is when you need strong consistency in your query results, which a local secondary index can provide, but a global secondary index cannot (global secondary index queries only support eventual consistency). https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-general.html

 

B.

Create secondary indexes on attributes that are important but not queried often.

 

C.

Use a local secondary index when you need strong consistency in your query results.

 

In general, you should use global secondary indexes rather than local secondary indexes. The exception is when you need strong consistency in your query results, which a local secondary index can provide, but a global secondary index cannot (global secondary index queries only support eventual consistency). https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-general.html

 

D.

Don't create secondary indexes on attributes that you don't query often.

 

Keep the number of indexes to a minimum. Don't create secondary indexes on attributes that you don't query often. Indexes that are seldom used contribute to increased storage and I/O costs without improving application performance. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-general.html

 

E.

Use a local secondary index when you need eventual consistency in your query results.

 

Question 332

 

You have an application that is very read-heavy and have chosen a design that includes an RDS MySQL database and three RDS read replicas. How can you best distribute the load between the three read replicas?

 

A.

Configure an Application load balancer and add each read replica endpoint to the load balancer target group.

 

B.

No additional steps are necessary as the RDS Master will distribute requests evenly.

 

C.

Add each read replica endpoint to a Route 53 record set and configure weighted routing to distribute traffic to the read replicas.

 

You can use Amazon Route 53 weighted record sets to distribute requests across your read replicas. Within a Route 53 hosted zone, create individual record sets for each DNS endpoint associated with your read replicas and give them the same weight. Then, direct requests to the endpoint of the record set. https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
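A sketch of creating the weighted record sets with equal weights; the hosted zone ID, record name, and replica endpoints are placeholders:

```python
import boto3

route53 = boto3.client("route53")

replicas = [
    ("replica-1", "replica-1.abc123.us-east-1.rds.amazonaws.com"),
    ("replica-2", "replica-2.abc123.us-east-1.rds.amazonaws.com"),
    ("replica-3", "replica-3.abc123.us-east-1.rds.amazonaws.com"),
]

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "reader.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": set_id,
                    "Weight": 1,  # equal weights spread traffic evenly
                    "TTL": 60,
                    "ResourceRecords": [{"Value": endpoint}],
                },
            }
            for set_id, endpoint in replicas
        ]
    },
)
```

The application then connects to reader.example.com instead of any single replica endpoint.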

 

D.

Configure a Network load balancer and add each read replica endpoint to the load balancer target group.

 

Question 333

 

A global finance company is developing an application with very stringent requirements related to high availability. The Disaster Recovery plan has already been developed and calls for an aggressive RTO and RPO. For the data tier, there will need to be an active-active configuration and data synchronization in multiple regions. Which database can best meet these requirements?

 

A.

DynamoDB with Global tables

 

When you create a DynamoDB global table, it consists of multiple replica tables (one per Region) that DynamoDB treats as a single unit. Every replica has the same table name and the same primary key schema. When an application writes data to a replica table in one Region, DynamoDB propagates the write to the other replica tables in the other AWS Regions automatically. 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html

 

B.

RDS Multi-AZ

 

C.

RDS Multi-region

 

D.

Athena Global Database

 

Question 334

 

A company is developing an application for consumer surveys. Survey information from handheld devices will be uploaded into an RDS MySQL database. Depending on the turnout and the size of the cities where the surveys take place, there is concern about an unpredictable volume of data exceeding the allocated storage of the database. You have decided to enable storage autoscaling for an Amazon RDS DB instance. Which items are limitations of storage autoscaling?

 

A.

Autoscaling doesn't occur if the maximum storage threshold would be exceeded by the storage increment.

 

The following limitations apply to storage autoscaling:

Autoscaling doesn't occur if the maximum storage threshold would be exceeded by the storage increment.

Autoscaling can't completely prevent storage-full situations for large data loads, because further storage modifications can't be made until six hours after storage optimization has completed on the instance. If you perform a large data load, and autoscaling doesn't provide enough space, the database might remain in the storage-full state for several hours. This can harm the database.

If you start a storage scaling operation at the same time that Amazon RDS starts an autoscaling operation, your storage modification takes precedence. The autoscaling operation is canceled.

Autoscaling can't be used with magnetic storage.

Autoscaling can't be used with the following previous-generation instance classes that have less than 6 TiB of orderable storage: db.m3.large, db.m3.xlarge, and db.m3.2xlarge. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling

 

B.

You can’t use both EC2 auto scaling and RDS auto scaling in the same VPC.

 

C.

You can’t use auto scaling with RDS SQL Server.

 

D.

You can’t use auto scaling with RDS Oracle.

 

E.

Autoscaling can't be used with magnetic storage.

 

The following limitations apply to storage autoscaling:

Autoscaling doesn't occur if the maximum storage threshold would be exceeded by the storage increment.

Autoscaling can't completely prevent storage-full situations for large data loads, because further storage modifications can't be made until six hours after storage optimization has completed on the instance. If you perform a large data load, and autoscaling doesn't provide enough space, the database might remain in the storage-full state for several hours. This can harm the database.

If you start a storage scaling operation at the same time that Amazon RDS starts an autoscaling operation, your storage modification takes precedence. The autoscaling operation is canceled.

Autoscaling can't be used with magnetic storage.

Autoscaling can't be used with the following previous-generation instance classes that have less than 6 TiB of orderable storage: db.m3.large, db.m3.xlarge, and db.m3.2xlarge. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling

 

Question 335

 

After careful consideration, you have decided to use Elastic Beanstalk to deploy a new home loan application for a real estate company. You have also decided to use Amazon RDS for MySQL for the database. What is the best practice with regard to using RDS databases with Elastic Beanstalk?

 

A.

Add read replicas for high durability.

 

B.

Create the database outside of Elastic Beanstalk for loose coupling.

 

The database will remain intact even if the Elastic Beanstalk environment is deleted. https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/

 

C.

Create the database within Elastic Beanstalk for tight coupling.

 

D.

Set up Multi-AZ for high durability.

 

 

 

 

Question 336

 

A worldwide finance company has a DynamoDB database that contains mission-critical data. The nature of the database dictates that it be up and running and performing at a high level at all times. However, the company would like to recreate this database in another region and perform testing on it. How can they best migrate this data and test it in another region?

 

A.

Configure DynamoDB Streams and add the new region to the current database Global Table settings.

 

B.

Create a new database in the new region. Use Database Migration Service to migrate the data into the new database.

 

C.

Perform a point-in-time recovery of the database for the new database in the new region.

 

You can choose the restore point and recreate the database into the new region. Amazon DynamoDB point-in-time recovery (PITR) provides continuous backups of your DynamoDB table data. You can restore a table to a point in time using the DynamoDB console or the AWS Command Line Interface (AWS CLI). Reference: Restoring a DynamoDB Table to a Point in Time

 

D.

Use AWS Import/Export to migrate the data from the old to the new database.

 

Question 337

 

Your company has a security application that collects IoT data from many devices throughout large retail department stores. The main requirement for the data store is that this IoT data needs to be ingested fast and with low latency. Which database can best meet this requirement?

 

A.

DynamoDB

 

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

 

B.

Amazon RedShift

 

C.

Elasticache

 

D.

Amazon Aurora

 

Question 338

 

Your team is considering a new database on the AWS cloud. The database will store IoT data from exercisers worldwide. Requirements for the database include high availability, minimal development effort, and microsecond latency. The decision has been made to use DynamoDB with DAX. What type of operations can DAX handle?

 

A.

CreateTable

 

B.

UpdateTable

 

C.

Query

 

DAX can perform UpdateItem, Query, and Scan operations. 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.consistency.html

 

D.

UpdateItem

 

DAX can perform UpdateItem, Query, and Scan operations. 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.consistency.html

 

E.

Scan

DAX can perform UpdateItem, Query, and Scan operations. 

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.consistency.html

 

Question 339

 

A gaming application is using DynamoDB to store user-related information. The application is used globally, and the usage patterns are very unpredictable. There are intermittent, short-burst traffic spikes that have been causing throttling of requests on the DynamoDB tables. Which solution is the best when taking cost into consideration?

 

A.

Enable on-demand capacity mode on the DynamoDB table.

 

When you choose on-demand mode, DynamoDB instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak, DynamoDB adapts rapidly to accommodate the workload. Tables that use on-demand mode deliver the same single-digit millisecond latency, service-level agreement (SLA) commitment, and security that DynamoDB already offers. You can choose on-demand for both new and existing tables and you can continue using the existing DynamoDB APIs without changing code. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand
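Switching an existing table is a one-line sketch (the table name is a placeholder; note that a table's billing mode can be changed only once per 24 hours):

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="game-user-data",     # placeholder
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
)
```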

 

B.

Configure an Application Load Balancer in front of the application server.

 

C.

Configure DynamoDB auto scaling.

 

D.

Increase provisioned capacity on the DynamoDB table.

 

Question 340

 

You have been assigned to create a DynamoDB table for a new gamified application for an online education company. After an analysis of indexes needed, you have decided that you need a local secondary index. Before creating the local secondary index, you consider the size of the data that may be written to it. What is the maximum item collection size in DynamoDB?

 

A.

100 MB

 

B.

10 GB

 

The maximum size of any item collection is 10 GB. This limit does not apply to tables without local secondary indexes. Only tables that have one or more local secondary indexes are affected. If an item collection exceeds the 10 GB limit, DynamoDB returns an ItemCollectionSizeLimitExceededException, and you won't be able to add more items to the item collection or increase the sizes of items that are in the item collection. (Read and write operations that shrink the size of the item collection are still allowed.) You can still add items to other item collections. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html#LSI.ItemCollections.SizeLimit

 

C.

1 GB

 

D.

500 MB

 

Question 341

A company is developing an application for exit polling. Polling information from handheld devices will be uploaded into an RDS MySQL database. Depending on voter turnout and the size of the cities where the polling takes place, there is concern about an unpredictable volume of data exceeding the allocated storage of the database. What steps can be taken to solve this issue in a timely and cost-effective manner?

 

A.

Enable storage autoscaling for an Amazon RDS DB instance.

 

If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. For example, you might use this feature for a new mobile gaming application that users are adopting rapidly. In this case, a rapidly increasing workload might exceed the available database storage. To avoid having to manually scale up database storage, you can use Amazon RDS storage autoscaling. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
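A minimal sketch of enabling storage autoscaling, which is done by setting a maximum storage threshold above the current allocation (the identifier and ceiling are placeholders):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="exit-poll-db",  # placeholder
    MaxAllocatedStorage=1000,             # autoscaling ceiling in GiB
)
```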

 

B.

Replace the database server with an Auto Scaling group of servers.

 

C.

Use SQS to store backlogged data.

 

D.

Vertically scale the RDS DB instance.

 

Question 342

 

A gaming application uses DynamoDB to store user scores and give the user the ability to view the high scores listed in descending order. But the nature of the game is such that it is very rare for users to stay at the same level for more than 30 days, so a requirement is to have user game data older than 30 days purged. What is the most efficient way to meet this requirement?

 

A.

Enable TTL on the DynamoDB table. Set the expiration to 30 days and store the expiration timestamp in epoch format.

 

Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the specified timestamp's date and time, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost to reduce stored data volumes by retaining only the items that remain current for your workload’s needs. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
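A sketch of the two pieces involved: enabling TTL on the table and writing items with an epoch-format expiry 30 days out (the table and attribute names are placeholders):

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName="game-scores",  # placeholder
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each item stores its expiry as an epoch timestamp 30 days in the future.
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName="game-scores",
    Item={
        "user_id": {"S": "player-42"},
        "score": {"N": "9001"},
        "expires_at": {"N": str(expires_at)},
    },
)
```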

 

B.

Delete and recreate the DynamoDB table every 30 days.

 

C.

Configure DynamoDB Streams on the table. Create a Lambda function that is triggered when a stream is activated, and delete items older than 30 days.

 

D.

Create a cron job that will perform a query on the table every day and delete items older than 30 days.

 

Question 343

 

A national automobile parts chain has a parts application in their main AWS account backed by an RDS MySQL database. But a different division in the company has created a specialty parts database in a different AWS account. The parts application team would like to be able to access this database in the other AWS account. Which option can meet these requirements?

 

A.

PrivateLink

 

B.

Direct Connect

 

C.

Site-to-Site VPN

 

D.

VPC Peering

 

A VPC peering connection helps you to facilitate the transfer of data. For example, if you have more than one AWS account, you can peer the VPCs across those accounts to create a file sharing network. You can also use a VPC peering connection to allow other VPCs to access resources you have in one of your VPCs. https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
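A sketch of requesting the peering connection from the main account; the VPC IDs and peer account number are placeholders, and the other account must still accept the request, after which both sides need route table entries and security group rules:

```python
import boto3

ec2 = boto3.client("ec2")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111",        # requester VPC (main account), placeholder
    PeerVpcId="vpc-0bbb2222",    # accepter VPC (other account), placeholder
    PeerOwnerId="123456789012",  # other AWS account ID, placeholder
)
print(peering["VpcPeeringConnection"]["VpcPeeringConnectionId"])
```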

 

Question 344

 

Your company will eventually migrate an on-premises data warehouse to Amazon RedShift. But for now, you have been tasked with setting up a hybrid environment. Your main requirements are a cheap but reliable solution and one that can be set up in the shortest amount of time. Which option meets these requirements?

 

A.

VPC Peering

 

B.

Site-to-Site VPN

 

When cost is a determining factor, a Site-to-Site VPN is a better option than Direct Connect.

 

C.

PrivateLink Interface Endpoint

 

D.

Direct Connect

 

Question 345

 

Your company is developing an application which performs data analytics for sports teams by capturing IoT data during practice and live play. The raw data from the IoT devices is sent to a Data Lake in S3. The data is in S3 but they want to perform some SQL queries on this data and do it using serverless technologies. Which solution would most efficiently meet this requirement?

 

A.

Create a Lambda function which would stream the data to RDS MySQL tables and then perform queries from RDS MySQL.

 

B.

Use Amazon Athena to query the data directly from S3.

 

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. https://aws.amazon.com/athena/
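A minimal sketch of running such a query; the database, table, and output location are placeholders, and a Glue Data Catalog table is assumed to describe the raw files in S3:

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString=(
        "SELECT device_id, AVG(speed) AS avg_speed "
        "FROM practice_events GROUP BY device_id"
    ),
    QueryExecutionContext={"Database": "sports_data_lake"},  # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution for completion
```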

 

C.

Create a Lambda function which would stream the data to RedShift tables and then perform queries from RedShift.

 

D.

Create a Lambda function which would stream the data to DynamoDB tables and then perform queries from DynamoDB.

 

Question 346

 

A financial company has a near real-time stock dashboard which they want to move to AWS. The requirements include high availability, real-time processing, and low latency. Which in-memory data stores could meet these requirements? (Select TWO.)

 

A.

DynamoDB

 

B.

Aurora

 

C.

DynamoDB with DAX

 

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement, from milliseconds to microseconds, even at millions of requests per second. https://aws.amazon.com/dynamodb/dax/

 

D.

Elasticache Redis

 

Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. https://aws.amazon.com/elasticache/redis/

 

E.

RDS MySQL

 

Question 347

 

You have been assigned to oversee creation and deployment of an Amazon DocumentDB cluster. Before moving the cluster to prod, you would like to do some performance testing and tuning. What tool can you use to review the query execution plan and analyze query performance?

 

A.

Amazon Inspector

 

B.

DocumentDB explain()

 

The explain() method returns a document with the query plan and, optionally, the execution statistics. 

https://docs.aws.amazon.com/documentdb/latest/developerguide/developerguide.pdf
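As a sketch with pymongo (the connection string, database, and collection are placeholders), calling explain() on a cursor returns the plan document you can inspect before going to prod:

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@my-docdb-cluster:27017/"
    "?tls=true&retryWrites=false"  # placeholder DocumentDB connection string
)
collection = client["appdb"]["orders"]

# Return the query plan for this find() instead of its results.
plan = collection.find({"status": "OPEN"}).explain()
print(plan)
```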

 

C.

CloudWatch Logs

 

D.

CloudTrail Logs

 

Question 348

 

A company has migrated their on-premises 4 TB database to an RDS MySQL database. The database quickly begins developing performance issues as the popularity of their application increases. They have decided to migrate to Amazon Aurora to gain performance benefits. What is the most efficient solution to migrate to Aurora?

A.

Use Database Migration Service to migrate from RDS MySQL to Aurora.

 

B.

Create an Aurora read replica for the MySQL database. Once the synchronization is complete, promote the Aurora read replica to a standalone database.

 

After migration completes, you can promote the Aurora Read Replica to a stand-alone DB cluster and direct your client applications to the endpoint for the Aurora Read Replica. 

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replica.html
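A rough sketch of the replica-then-promote flow through the API; the identifiers, source ARN, and instance class are placeholders, and promotion should wait until replica lag reaches zero:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora MySQL cluster that replicates from the RDS MySQL source.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-migration-cluster",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:123456789012:db:source-mysql",
)

# Add an instance to the cluster so it can serve the replicated data.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-migration-instance",
    DBClusterIdentifier="aurora-migration-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
)

# Later, once the replica has caught up, promote it to a standalone cluster.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-migration-cluster")
```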

 

C.

Make a backup of their data on the MySQL server. Go back to the on-premises database and migrate straight to Amazon Aurora. Restore the backup to the new Aurora server.

 

D.

Run the mysqldump utility to copy the data from MySQL to Aurora.

 

Question 349

 

You have decided to use CloudFormation to deploy an Amazon RedShift cluster. For Disaster Recovery purposes, this CloudFormation template can then be stored in another Region and deployed there in a disaster situation. Within this template, you would like to be able to set the administrator password at runtime. Which section of the CloudFormation template can be used for this purpose?

 

A.

Mappings

 

B.

Parameters

 

Use the optional Parameters section to customize your templates. Parameters enable you to input custom values to your template each time you create or update a stack.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
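
 

For example, a template fragment along these lines (resource names and property values are illustrative) lets the administrator password be supplied at stack creation time, with NoEcho keeping it out of console output:

"Parameters": {
    "MasterUserPassword": {
        "Type": "String",
        "NoEcho": "true",
        "Description": "Redshift administrator password, entered at runtime"
    }
},
"Resources": {
    "RedshiftCluster": {
        "Type": "AWS::Redshift::Cluster",
        "Properties": {
            "ClusterType": "single-node",
            "NodeType": "dc2.large",
            "DBName": "analytics",
            "MasterUsername": "admin",
            "MasterUserPassword": { "Ref": "MasterUserPassword" }
        }
    }
}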

 

C.

Resources

 

D.

Metadata

 

Question 350

 

A large insurance company has been storing customer data in Amazon S3 buckets. They have decided that they want to query this data directly from the S3 buckets. They will use SQL queries to extract the information they need from the buckets. The results of these queries will be used to provide the base data for a data warehouse they will create and use to produce complex reports for their business team. Which AWS services can be used to perform these tasks and create the Data Warehouse and reports?

 

A.

AWS Glue and Amazon RedShift Spectrum

 

B.

Amazon Athena and RDS MySQL

 

C.

Amazon Inspector and Amazon RedShift Spectrum

 

D.

Amazon Athena and Amazon RedShift Spectrum

 

Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets. Much of the processing occurs in the Redshift Spectrum layer, and most of the data remains in Amazon S3. https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html

 

 

Question 351

 

Your company has been storing detailed, itemized manufacturing parts documents in Amazon S3 for several years. This has become a very large amount of data and they would like to create a Data Lake and begin writing SQL queries on this data. Which AWS service can meet this requirement?

 

A.

DynamoDB

 

B.

Amazon Aurora

 

C.

RedShift

 

D.

Amazon Athena

 

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. https://docs.aws.amazon.com/athena/latest/ug/what-is.html

 

Question 352

 

Your team has been performing Disaster Recovery drills on the company databases. One of the databases is an Aurora PostgreSQL database cluster. After the drills, it has been determined that better performance is needed, and specifically, minimal application downtime during a failover. What steps can meet this requirement?

 

A.

Set a high value for the database and application client TCP keepalive parameters. Enable Aurora DB cluster cache management.

 

B.

Enable database activity streams. Set a high value for the database and application client TCP keepalive parameters.

 

C.

Set a low value for the database and application client TCP keepalive parameters. Enable Aurora DB cluster cache management.

 

Fast failover with Amazon Aurora PostgreSQL: there are several things you can do to make a failover perform faster with Aurora PostgreSQL, including the following:

·         Aggressively set TCP keepalives to ensure that longer running queries that are waiting for a server response are killed before the read timeout expires in the event of a failure.

·         Set the Java DNS caching timeouts aggressively to ensure the Aurora read-only endpoint can properly cycle through read-only nodes on subsequent connection attempts.

·         Set the timeout variables used in the JDBC connection string as low as possible, and use separate connection objects for short and long running queries.

·         Use the provided read and write Aurora endpoints to establish a connection to the cluster.

·         Use RDS APIs to test application response to server-side failures, and use a packet dropping tool to test application response to client-side failures.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html
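
 

As a hedged boto3 sketch of the keepalive step above: this assumes the TCP keepalive settings live in the instance-level DB parameter group, and the group name and values are illustrative:

import boto3

rds = boto3.client("rds")

# Lower TCP keepalive parameters so dead connections are detected quickly on failover.
rds.modify_db_parameter_group(
    DBParameterGroupName="aurora-pg-fast-failover",
    Parameters=[
        {"ParameterName": "tcp_keepalives_idle", "ParameterValue": "30", "ApplyMethod": "immediate"},
        {"ParameterName": "tcp_keepalives_interval", "ParameterValue": "5", "ApplyMethod": "immediate"},
        {"ParameterName": "tcp_keepalives_count", "ParameterValue": "2", "ApplyMethod": "immediate"},
    ],
)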

 

D.

Enable database activity streams. Set a low value for the database and application client TCP keepalive parameters.

 

Question 353

 

Your company is using an Amazon DynamoDB database that contains highly sensitive information. You must achieve reliable connectivity to DynamoDB without requiring an internet gateway or a NAT device from your VPC resources, which are deployed in the same Region. Which solution best meets these requirements?

 

A.

Use a Gateway Endpoint

 

Gateway endpoints provide reliable connectivity to DynamoDB without requiring an internet gateway or a NAT device for your VPC. You can access Amazon DynamoDB from your VPC using gateway VPC endpoints. After you create the gateway endpoint, you can add it as a target in your route table for traffic destined from your VPC to DynamoDB. AWS Documentation: Gateway endpoints for Amazon DynamoDB.
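
 

A minimal boto3 sketch; the VPC, route table, and Region values are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create a gateway VPC endpoint for DynamoDB and attach it to the VPC's route table,
# so traffic to DynamoDB never traverses an internet gateway or NAT device.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0def5678"],
)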

 

B.

Use AWS VPC Peering with a PrivateLink Gateway Endpoint

 

C.

Use site-to-site VPN with a PrivateLink Gateway Endpoint

 

D.

Use AWS Direct Connect with AWS VPN CloudHub

 

Question 354

 

You are working with a large DynamoDB database. But some of the read operations are very large and you would like to reduce the return time on this data. Which operation can help you reduce the size of your data sets by returning only specified attributes?

A.

GetItem

 

B.

Query

 

C.

Scan

 

D.

Projection Expressions

 

To read data from a table, you use operations such as GetItem, Query, or Scan. Amazon DynamoDB returns all the item attributes by default. To get only some, rather than all of the attributes, use a projection expression.

A projection expression is a string that identifies the attributes that you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ProjectionExpressions.html
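
 

For instance, with boto3 (the table and attribute names are made up for the example):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PartsCatalog")

# Return only the named attributes instead of the full item.
response = table.get_item(
    Key={"PartId": "A-1001"},
    ProjectionExpression="PartId, PartName, UnitPrice",
)
print(response.get("Item"))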

 

Question 355

 

You have been assigned to migrate several databases to the AWS cloud. But to this point, the migrations have been homogeneous. Now the final database to migrate is a heterogeneous migration of an Oracle database to an RDS PostgreSQL database. How can you most efficiently perform this migration?

 

A.

Write a Lambda function to convert from the source schema to the target schema. Use AWS Database Migration Service to migrate the data from on-premises into the PostgreSQL database.

 

B.

Use the AWS Schema Conversion Tool (SCT) to convert the source schema to match the target schema. Use AWS Database Migration Service to migrate the data from on-premises into the PostgreSQL database.

 

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. https://aws.amazon.com/dms/

The AWS Schema Conversion Tool makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. https://aws.amazon.com/dms/schema-conversion-tool/

 

C.

Use AWS Database Migration Service to migrate the data from on-premises into the PostgreSQL database.

 

D.

Use the AWS Schema Conversion Tool (SCT) to convert the source schema to match the target schema. Use AWS Server Migration Service to migrate the database server from on-premises into the PostgreSQL database.

 

Question 356

 

A large insurance company is considering migrating from on-premises MySQL databases to Amazon Aurora. The Database Lead has created a checklist for the migration. One of the items in the checklist is MySQL compatibility. Which statement accurately describes Aurora’s compatibility with MySQL?

 

A.

The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.0 and up using the InnoDB storage engine.

 

 

B.

The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7 using the MyISAM storage engine.

 

C.

The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7 using the InnoDB storage engine.

The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7 using the InnoDB storage engine. Certain MySQL features like the MyISAM storage engine are not available with Amazon Aurora. https://aws.amazon.com/rds/aurora/faqs/

 

D.

The Amazon Aurora database engine is designed to be wire-compatible with MySQL 5.6 and 5.7 using the XtraDB storage engine.

 

Question 357

 

You are the database lead for your company and have deployed an RDS MySQL database. Your requirements dictated that RDS Multi-AZ be used. You have notified management and the application and database teams of an impending maintenance window for a database OS upgrade. Which item is true about the behavior of RDS Multi-AZ during database OS maintenance?

 

A.

The primary instance will be upgraded first, the standby instance will be promoted to primary during the upgrade. When the primary upgrade is completed, the standby instance will then be upgraded. There is no downtime.

 

B.

The primary and secondary instances are upgraded at the same time, but normal operations continue and there is no downtime.

 

C.

The standby instance will be upgraded first, then the primary instance will be upgraded, at which time the standby instance will be promoted to primary until the upgrade is complete. There is no downtime.

 

D.

The secondary instance is upgraded first, then the instance fails over, and then the primary instance is updated. The downtime is during failover.

 

For Multi-AZ deployments, OS maintenance is applied to the secondary instance first, then the instance fails over, and then the primary instance is updated. The downtime is during failover. https://aws.amazon.com/premiumsupport/knowledge-center/rds-required-maintenance/

 

Question 358

 

You have configured a DynamoDB table to store customer information for a pharmacy application. The data can be purged after 30 days and you have set up TTL to do so. But there is an additional requirement to further process these deleted records. Which solution can meet this additional requirement?

 

A.

Create a CloudWatch Event triggered by the TTL event to stream the deleted records to SQS for further processing.

 

B.

Create a Lambda function to process the deleted records as needed. Set the Lambda function to be triggered by the TTL expiry event.

 

C.

Enable continuous backups on the table with point in time recovery.

 

D.

Enable DynamoDB Streams on the table to trigger a lambda function and process the streams records of expired items.

 

You can back up, or otherwise process, items that are deleted by Time to Live (TTL) by enabling Amazon DynamoDB Streams on the table and processing the streams records of the expired items. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-streams.html
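
 

TTL deletions appear in the stream as REMOVE records attributed to the DynamoDB service principal, so a Lambda handler can filter on that. A minimal sketch; the processing step is a placeholder, and the stream must be configured to include old images:

def handler(event, context):
    for record in event.get("Records", []):
        identity = record.get("userIdentity", {})
        # Records deleted by TTL are REMOVE events issued by the DynamoDB service.
        if (
            record.get("eventName") == "REMOVE"
            and identity.get("type") == "Service"
            and identity.get("principalId") == "dynamodb.amazonaws.com"
        ):
            expired_item = record["dynamodb"].get("OldImage", {})
            print("Processing expired item:", expired_item)  # placeholder for real processing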

 

Question 359

 

You have been tasked with migrating an on-premises MySQL database to an AWS RDS database. Part of your task is to also come up with a plan for backing up the database once migrated to AWS. You need to understand the features of backups and snapshots in RDS. Which items are true regarding RDS backups? Choose 3 answers.

 

A.

Snapshots can be created with the AWS Management Console, CreateDBSnapshot API, or create-db-snapshot command.

 

You can create a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.

 https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html

 

B.

Snapshots must be created from the CLI.

 

C.

DB Snapshots are retained for 60 days.

 

D.

DB Snapshots are kept until you explicitly delete them.

 

Database snapshots are manual (user-initiated) backups of your complete DB instance that serve as full backups. They’re stored in Amazon S3, and are retained until you explicitly delete them. https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

 

E.

Disabling automatic backups for a DB instance deletes all existing automated backups for the instance.

 

Disabling automatic backups for a DB instance deletes all existing automated backups for the instance. If you disable and then re-enable automated backups, you are only able to restore starting from the time you re-enabled automated backups. 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

 

Question 360

 

You have been tasked with migrating an on-premises MySQL database to an AWS RDS database. Part of your task is to also come up with a plan for backing up the database once migrated to AWS. You need to understand the features of backups and snapshots in RDS. Which is true regarding the backup retention period?

 

A.

The default backup retention period is seven days if you create the DB instance using the console. The default backup retention period is seven days if you create the DB instance using the Amazon RDS API or the AWS CLI.

 

B.

The default backup retention period is seven days if you create the DB instance using the console. The default backup retention period is one day if you create the DB instance using the Amazon RDS API or the AWS CLI.

 

If you don't set the backup retention period, the default backup retention period is one day if you create the DB instance using the Amazon RDS API or the AWS CLI. The default backup retention period is seven days if you create the DB instance using the console.

Working with backups: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html
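
 

For illustration, a boto3 sketch that sets the retention explicitly when creating the instance through the API, since the API default is only one day; identifiers, sizing, and credentials are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="migrated-mysql",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="change-me-immediately",  # use Secrets Manager in practice
    AllocatedStorage=100,
    BackupRetentionPeriod=7,  # days; otherwise the API/CLI default of 1 applies
)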

 

C.

The default backup retention period is thirty days if you create the DB instance using the console. The default backup retention period is 7 days if you create the DB instance using the Amazon RDS API or the AWS CLI.

 

D.

The default backup retention period is 14 days if you create the DB instance using the console. The default backup retention period is two days if you create the DB instance using the Amazon RDS API or the AWS CLI.

 

 

 

 

 

Question 361

 

After configuring ElastiCache for Redis, your development team notifies you about some performance issues. You monitor ElastiCache for a while and determine that the automatic backups are one cause of the performance issues. Which steps can you take to improve the performance of the backups and also the performance in general?

 

A.

Use auto scaling to automatically adjust capacity to maintain steady, predictable performance.

 

Auto scaling is now a feature of Amazon ElastiCache for Redis. You can automatically have your cluster scale horizontally by adding or removing shards or replica nodes. AWS Application Auto Scaling is used to manage the scaling, and Amazon CloudWatch metrics are used to determine scale up or down actions.

 

B.

Run the backups against read replicas.

 

If you are running Redis in a node group with more than one node, you can take a backup from the primary node or one of the read replicas. Because of the system resources required during BGSAVE, we recommend that you create backups from one of the read replicas. Reference: Backup and Restore for ElastiCache for Redis
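
 

A boto3 sketch of taking the snapshot from a replica node rather than the primary; the node and snapshot names are assumptions:

import boto3

elasticache = boto3.client("elasticache")

# Snapshot a read replica node so the BGSAVE work does not land on the primary.
elasticache.create_snapshot(
    CacheClusterId="my-redis-002",  # a replica node's ID, not the primary's
    SnapshotName="redis-nightly-backup",
)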

 

C.

Use Memcached instead of Redis for improved performance.

 

D.

Run backups during off hours.

 

Question 362

 

You have developed an application which provides an online auction service. The application has been working well but with a minor glitch around several users submitting bids at the same time. It is crucial to record the last value entered by users and display it back to them during this rapid bidding process. The latest bid price must be accurate to enable users to attempt to outbid the latest price. What steps can you take in DynamoDB to ensure that the latest bid is displayed?

 

A.

When performing a GetItem operation, Use ConsistentRead = true.

 

When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful. DynamoDB uses eventually consistent reads, unless you specify otherwise. Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this parameter to true, DynamoDB uses strongly consistent reads during the operation.

 https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html
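
 

Illustrated with boto3; the table and key names are placeholders:

import boto3

dynamodb = boto3.resource("dynamodb")
bids = dynamodb.Table("AuctionBids")

# A strongly consistent read reflects every successful prior write,
# so the latest bid is returned rather than a possibly stale value.
response = bids.get_item(
    Key={"AuctionId": "auction-42"},
    ConsistentRead=True,
)
print(response.get("Item"))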

 

B.

Use ConsistentRead = true while doing PutItem operation for any item.

 

C.

When performing a GetItem operation, Use ConsistentRead = false.

 

D.

Use ConsistentRead = true while doing UpdateItem operation for any item.

 

Question 363

 

You are working with a DynamoDB table and want to provide a way for other team members to be able to return a set of attributes for an item with a given primary key. Which Amazon DynamoDB action would you use?

 

A.

Search

 

B.

GetItem

 

DynamoDB provides the GetItem action for retrieving an item by its primary key. GetItem is highly efficient because it provides direct access to the physical location of the item.

 

C.

Query

 

D.

Scan

 

Question 364

 

You are considering using ElastiCache but first need to understand in detail the process of backing up and restoring ElastiCache clusters. What are some key points regarding backup and restore of ElastiCache?

 

A.

During the backup process, you can't run any other API or CLI operations on the cluster.

 

Consider the following constraint when planning or making backups: During the backup process, you can't run any other API or CLI operations on the cluster.

Reference: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html#backups-performance

 

B.

Backup and restore are supported only for clusters running on Redis.

 

Consider the following constraint when planning or making backups: At this time, backup and restore are supported only for clusters running on Redis.

Reference: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html#backups-performance

 

C.

During any contiguous 24-hour period, you can create no more than 20 manual backups per node in the cluster.

 

Consider the following constraint when planning or making backups: During any contiguous 24-hour period, you can create no more than 20 manual backups per node in the cluster.

Reference: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html#backups-performance

 

D.

Redis (cluster mode enabled) supports taking backups at the shard level (for the API or CLI, the node group level).

 

E.

Backup and restore are supported only for clusters running on Memcached.

 

Question 365

 

You are configuring a new application which will use DynamoDB as a data store. One of the requirements is that the data be automatically purged after 50 days. What is the most efficient way to do this?

 

A.

Delete the table every 50 days and create a new table.

 

B.

Enable TTL in DynamoDB. Create a Lambda function which scans the table each day and deletes items whose TTL has expired.

 

C.

Add an age item to the table. Increment the age by 1 each day. Run a query that deletes items from the table that have an age of 50 days.

 

D.

Enable TTL in DynamoDB and store the expiration date in epoch format in the TTL attribute.

 

Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.

TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:

Remove user or sensor data after one year of inactivity in an application.

Archive expired items to an Amazon S3 data lake via DynamoDB Streams and AWS Lambda.

Retain sensitive data for a certain amount of time according to contractual or regulatory obligations. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
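
 

A short boto3 sketch of the TTL setup described above; the table, attribute, and key names are illustrative:

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName="AppRecords",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write each item with an epoch expiry 50 days out; DynamoDB purges it after that.
expires_at = int(time.time()) + 50 * 24 * 60 * 60
dynamodb.put_item(
    TableName="AppRecords",
    Item={"RecordId": {"S": "rec-123"}, "expires_at": {"N": str(expires_at)}},
)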

 

Question 366

 

You are using a version of Redis prior to 2.8.22. After configuring ElastiCache for Redis, your development team notifies you of some performance issues. You monitor ElastiCache for a while and determine that the automatic backups are causing performance problems. Which step can you take to improve performance?

 

A.

Set the reserved-memory-percent parameter

 

If you are running a version of Redis before 2.8.22, reserve more memory for backups and failovers than if you are running Redis 2.8.22 or later. https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/redis-memory-management.html

 

B.

Configure DAX for Redis

 

C.

Turn off the reserved-memory-percent parameter

 

D.

Use Memcached instead of Redis for improved performance.

 

Question 367

 

You have been hired by a large insurance company to administer their AWS RedShift data warehouse. You have been trying to get a handle on the user base for the data warehouse, such as who accesses the warehouse and what they do when they access it. You are trying to determine if audit logging is being used, and if so, where the logs are stored. Which options best describe audit logging in AWS RedShift?

 

A.

Audit logging is not turned on by default in Amazon Redshift. When you turn on logging on your cluster, Amazon Redshift can create and upload logs to Amazon S3.

 

Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift exports logs to Amazon CloudWatch, or creates and uploads logs to Amazon S3, which capture data from the time audit logging is enabled through to the present time. Each logging update is a continuation of the previous logs. AWS Documentation: Database audit logging > Enabling logging.

 

B.

Audit logging is not turned on by default in Amazon Redshift. When you turn on logging on your cluster, Amazon Redshift can export logs to Amazon CloudWatch.

 

Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift exports logs to Amazon CloudWatch, or creates and uploads logs to Amazon S3, which capture data from the time audit logging is enabled through to the present time. Each logging update is a continuation of the previous logs. AWS Documentation: Database audit logging > Enabling logging.
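
 

Turning logging on and pointing it at S3 can be sketched with boto3; the cluster and bucket names are placeholders, and the bucket policy must allow Redshift to write:

import boto3

redshift = boto3.client("redshift")

# Enable audit logging for the cluster, delivering log files to S3.
redshift.enable_logging(
    ClusterIdentifier="insurance-dwh",
    BucketName="insurance-dwh-audit-logs",
    S3KeyPrefix="redshift-audit/",
)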

 

C.

Audit logging is enabled by default in Amazon Redshift. The logs are stored in a log folder on the Amazon Redshift cluster.

 

D.

Audit logging is enabled by default in Amazon Redshift. The logs are stored in Amazon S3.

 

 

Question 368

 

Your team has created a new database on the AWS cloud. The database stores IoT data from exercisers worldwide. Requirements for the database include high availability, minimal development effort, and microsecond latency. You are using DynamoDB and DAX, but have begun to get error messages because the DAX node sizes are too small. What error message will you receive if the number of requests sent to DAX exceeds the capacity of a node?

 

A.

InternalFailure

 

B.

AccessDeniedException

 

C.

InvalidAction

 

D.

ThrottlingException

 

If the number of requests sent to DAX exceeds the capacity of a node, DAX limits the rate at which it accepts additional requests by returning a ThrottlingException. DAX continuously evaluates your CPU utilization to determine the volume of requests it can process while maintaining a healthy cluster state. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html

 

Question 369

 

You have an application for retail sales, and any downtime means money lost. The application is backed by an RDS MySQL database. During a meeting with management, an emphasis was placed on automatic backups for the database. A past incident where automatic backups were turned off has resonated with management, and it was stressed that this could not happen again. After double-checking that automatic backups are turned on, you would like to set up a notification if the automatic backups are ever turned off. What steps can you take?

 

A.

Set up a CloudWatch alarm and use the alarm to trigger an SNS notification.

 

B.

Set up an AWS Config rule for the database and use this rule to trigger an SNS notification.

 

C.

Subscribe to RDS Event Notification and be sure to include the event “Automatic backups for this DB instance have been disabled”.

 

This is a valid event to which you can subscribe in RDS. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html#USER_Events.ListSubscription
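
 

A boto3 sketch of such a subscription; the names and ARNs are illustrative:

import boto3

rds = boto3.client("rds")

# Subscribe an SNS topic to backup-category events for the instance; disabling
# automated backups then triggers a notification.
rds.create_event_subscription(
    SubscriptionName="retail-db-backup-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-backup-alerts",
    SourceType="db-instance",
    SourceIds=["retail-prod-db"],
    EventCategories=["backup"],
    Enabled=True,
)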

 

D.

Subscribe to RDS Event Notification and you will be notified of all events related to the RDS instance.

 

Question 370

 

You have been hired as a database specialist, and one of the databases under your supervision will be an Aurora MySQL cluster. This database gets heavy traffic at two different times of day, and you have seen the CPU utilization nearly maximized at these times of the day. You have decided to take steps to enable the cluster to efficiently handle these spikes. How can you meet this requirement in a cost-efficient manner?

 

A.

Define and apply a scaling policy to the Aurora DB cluster.

 

Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas provisioned for an Aurora DB cluster using single-master replication. Aurora Auto Scaling is available for both Aurora MySQL and Aurora PostgreSQL. Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html
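
 

A boto3 sketch of defining and applying such a policy through Application Auto Scaling; the cluster name, capacity bounds, and target value are assumptions:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the cluster's replica count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target-tracking policy: add or remove replicas to hold average reader CPU near 60%.
autoscaling.put_scaling_policy(
    PolicyName="aurora-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)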

 

B.

Add ElastiCache to handle frequently used queries.

 

C.

Upgrade the instance class of the Aurora Replica.

 

D.

Place the database instance in an auto scaling group.

 

This would be creating multiple databases and is not a viable solution.

 

Question 371

 

You are working on a DynamoDB database and are beginning to create secondary indexes on the tables. If you want to create more than one table with secondary indexes, you must do so sequentially. What error message will you receive if you try to concurrently create more than one table with a secondary index?

 

A.

InternalFailure

B.

InvalidAction

C.

AccessDeniedException

D.

LimitExceededException

 

If you want to create more than one table with secondary indexes, you must do so sequentially. For example, you would create the first table and wait for it to become ACTIVE, create the next table and wait for it to become ACTIVE, and so on. If you try to concurrently create more than one table with a secondary index, DynamoDB returns a LimitExceededException.

 https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html

 

Question 372

 

A news organization has transferred decades of news articles archived in microfiche to digital and plans to serve these articles online in AWS. The architecture will consist of a web server and an RDS MySQL backend. A key feature of the application will be a “solve the mystery” game where contestants will have to search many news articles to link together clues and solve a mystery. Interest in the application and the game has been robust, and traffic is expected to be heavy. A Solutions Architect is considering using ElastiCache to maintain good, consistent performance. Which types of use cases can benefit from the addition of ElastiCache?

 

A.

Your data stays relatively the same.

 

Consider caching your data if the following is true:

·         Your data is slow or expensive to get when compared to cache retrieval.

·         Users access your data often.

·         Your data stays relatively the same, or if it changes quickly, that staleness is not a large issue.

https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html

 

B.

Your data is slow or expensive to get when compared to cache retrieval.

 

Consider caching your data if the following is true:

·         Your data is slow or expensive to get when compared to cache retrieval.

·         Users access your data often.

·         Your data stays relatively the same, or if it changes quickly, that staleness is not a large issue. https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html

C.

Users access your data often.

 

Consider caching your data if the following is true:

·         Your data is slow or expensive to get when compared to cache retrieval.

·         Users access your data often.

·         Your data stays relatively the same, or if it changes quickly, that staleness is not a large issue. https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html

 

D.

You need to run complex join queries.

 

E.

Your data changes often.

 

Question 373

 

You have been hired as a database specialist and one of the databases under your supervision will be an Aurora MySQL cluster. This database gets heavy traffic at two different times of day, and you have seen the CPU utilization nearly maximized at these times of the day. You have decided to enable Aurora Auto Scaling to enable the cluster to efficiently handle these spikes. How does Aurora implement auto scaling?

 

A.

Aurora Auto Scaling uses a scaling policy to adjust the number of Aurora Replicas in an Aurora DB cluster.

 

Aurora Auto Scaling uses a scaling policy to adjust the number of Aurora Replicas in an Aurora DB cluster. Aurora Auto Scaling has the following components:

·         A service-linked role

·         A target metric

·         Minimum and maximum capacity

·         A cooldown period

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html#Aurora.Integrating.AutoScaling.Concepts

 

B.

Aurora Auto Scaling uses a scaling policy to adjust the provisioned IOPs in a cluster.

 

C.

Aurora Auto Scaling uses a scaling policy to adjust the size of the Aurora Replicas in an Aurora DB cluster.

 

D.

Aurora Auto Scaling uses a scaling policy to add or remove database instances.

 

Question 374

 

Your RDS MySQL database was having some memory issues due to a memory leak in an application. The application issue has been resolved, but you have decided to reboot the database server for a fresh start. Upon reboot, you receive the error “MySQL could not be started due to incompatible parameters”. What steps can you take to resolve this error?

 

A.

Stop the server and perform a full restore from backups.

 

B.

Reboot the instance again, and the error message will be cleared.

 

C.

Reset all the parameters in the parameter group to the default value.

 

To resolve this issue, change the value of each incompatible parameter to a compatible value using one of the following options:

·         Reset all the parameters in the parameter group to the default value.

·         Reset the values of the parameters that are incompatible. https://aws.amazon.com/premiumsupport/knowledge-center/rds-incompatible-parameters/

 

D.

Perform a point-in-time recovery to before the latest version of the application with the memory leak.

 

E.

Reset the values of the parameters that are incompatible.

 

To resolve this issue, change the value of each incompatible parameter to a compatible value using one of the following options:

·         Reset all the parameters in the parameter group to the default value.

·         Reset the values of the parameters that are incompatible. https://aws.amazon.com/premiumsupport/knowledge-center/rds-incompatible-parameters/
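
 

Both options can be sketched with boto3; the group and parameter names are illustrative:

import boto3

rds = boto3.client("rds")

# Option 1: reset every parameter in the group to its default.
rds.reset_db_parameter_group(
    DBParameterGroupName="mysql-custom-params",
    ResetAllParameters=True,
)

# Option 2: reset only the incompatible parameters.
rds.reset_db_parameter_group(
    DBParameterGroupName="mysql-custom-params",
    ResetAllParameters=False,
    Parameters=[
        {"ParameterName": "innodb_buffer_pool_size", "ApplyMethod": "pending-reboot"},
    ],
)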

 

Question 375

 

You are the database lead for a development team that is developing a tax application. The application will be returning queries on complex calculations, and you have decided to use Amazon Aurora for increased performance. From past experience, you know that the developers will make errors working with the production database. You want to configure the database in a way that it can quickly go back in time to before the error happened and return the database to a healthy, running state. You have decided to configure the Backtrack feature of Amazon Aurora. Which statements about the Backtrack feature are correct?

 

A.

Backtrack rewinds the database cluster without creating a new cluster.

 

To use the Backtrack feature, you must enable backtracking and specify a target backtrack window. Otherwise, backtracking is disabled.

For the target backtrack window, specify the amount of time that you want to be able to rewind your database using Backtrack. Aurora tries to retain enough change records to support that window of time. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html

 

B.

You must opt in to the Backtrack feature at cluster creation time.

 

You must opt-in when you create or restore a cluster; you cannot enable it for a running cluster. https://aws.amazon.com/blogs/aws/amazon-aurora-backtrack-turn-back-time/

 

C.

The target Backtrack window can go as far back as 24 hours.

 

D.

The target Backtrack window can go as far back as 72 hours.

 

The limit for a backtrack window is 72 hours. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html

Unlike a point-in-time restore, Backtrack rewinds the existing cluster in place up to the point in time you select; it does not create a new cluster.
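
 

A boto3 sketch of rewinding a backtrack-enabled cluster; the cluster name and rewind window are illustrative:

from datetime import datetime, timedelta
import boto3

rds = boto3.client("rds")

# Backtrack must have been enabled at cluster creation, e.g.
# rds.create_db_cluster(..., BacktrackWindow=86400)  # target window in seconds
rds.backtrack_db_cluster(
    DBClusterIdentifier="tax-app-cluster",
    BacktrackTo=datetime.utcnow() - timedelta(minutes=30),  # rewind 30 minutes, in place
)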

 

Question 376

 

A national fitness company has created a contest called the Denali challenge. The idea is to travel the height of Denali, which is roughly 20,000 feet, in a month by either hiking, walking, or bicycling. Contestants will be awarded medals and t-shirts based on how many “Denalis” they can do in a month. The company would like to add a leaderboard that lists, in real time, the total height ascended by each user, as well as a leaderboard of single-day height ascended. This data will be maintained in an RDS MySQL database and augmented with ElastiCache. The requirements include real-time processing, low latency, and high elasticity. What benefits can ElastiCache provide to this solution?

 

A.

ElastiCache can improve latency and throughput for write-heavy application workloads.

 

B.

Elasticache can benefit the performance of compute-intensive workloads as well as improve latency and throughput for read-heavy applications.

 

C.

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud.

 

You can also build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. https://aws.amazon.com/elasticache/

 

D.

ElastiCache can improve latency and throughput for both read and write-heavy application workloads.

 

E.

Both ElastiCache engines, Memcached and Redis, support advanced data structures.

 

Question 377

 

You have configured an application with several EC2 instances in an Auto Scaling Group fronted by an Elastic Load Balancer and backed by an AWS RDS database. The application is in a public subnet and the database is in a private subnet in the same VPC, but when you attempt to establish a connection to the database, the application times out. What could be causing this problem?

 

A.

The database credentials are incorrect.

 

B.

The RDS database's Security Group does not have ingress set up for the application instances.

 

C.

There is not a route in the route table to the database instance.

 

D.

The database instance does not have a public IP address.

 

Security groups control the access that traffic has in and out of a DB instance. Three types of security groups are used with Amazon RDS: VPC security groups, DB security groups, and EC2-Classic security groups.

 

Question 377

 

You are the database lead for a development team that is developing a geological application. The application will be returning queries on complex calculations, and you have decided to use Amazon Aurora for increased performance. From past experience, you know that the developers will make errors working with the production database. You want to configure the database in a way that it can quickly go back to before the error happened and return the database to a healthy, running state. What configuration option will you choose?

 

A.

Configure read replicas in order to promote the read replica if an error occurs.

 

B.

Configure RDS Multi-AZ and have a process to institute a manual failover if necessary.

 

C.

Enable the Backtrack feature of Aurora at launch time.

 

D.

Take frequent snapshots so that you can restore to a selected point in time.

 

Aurora uses a distributed, log-structured storage system. Each change to your database generates a new log record identified by a Log Sequence Number (LSN). Enabling the backtrack feature provisions a FIFO buffer in the cluster for storage of LSNs. This allows for quick access and recovery times measured in seconds.

 

Question 378

 

Your company maintains a Redshift cluster used to analyze customer insurance data. A recent audit has dictated that this data be encrypted using AWS KMS. Which items are true regarding encrypting RedShift clusters?

 

A.

You can modify an unencrypted cluster to use AWS Key Management Service (AWS KMS) encryption.

 

B.

You can modify an unencrypted cluster to use AWS Key Management Service (AWS KMS) encryption, using either an AWS-managed key or a customer managed key.

 

When you modify your cluster to enable AWS KMS encryption, Amazon Redshift automatically migrates your data to a new encrypted cluster. https://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html

Snapshots created from the encrypted cluster are also encrypted; they do not need to be manually encrypted. In Amazon Redshift, you can enable database encryption for your clusters to help protect data at rest. When you enable encryption for a cluster, the data blocks and system metadata are encrypted for the cluster and its snapshots.
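
 

Modifying an existing cluster to use KMS encryption can be sketched with boto3; the cluster identifier and key ARN are placeholders:

import boto3

redshift = boto3.client("redshift")

# Switch an unencrypted cluster to KMS encryption; Redshift migrates the data
# to a new encrypted cluster behind the scenes.
redshift.modify_cluster(
    ClusterIdentifier="insurance-analytics",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)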

 

C.

You can enable encryption when you launch your cluster.

 

You can enable encryption when you launch your cluster, or you can modify an unencrypted cluster to use AWS Key Management Service (AWS KMS) encryption, using either an AWS-managed key or a customer managed key. When you modify your cluster to enable AWS KMS encryption, Amazon Redshift automatically migrates your data to a new encrypted cluster. https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html

 

D.

RedShift clusters are encrypted by default.

 

Question 379

 

A small tech company continues to grow and add offices throughout North America. Each new office location has its own AWS account. As each location starts up, a base dataset needs to be added from corporate headquarters. The corporate data team needs to send an encrypted snapshot to the new locations. The snapshot is encrypted with KMS. What steps should be taken for the corporate data team to share the encrypted snapshot with a new office location?

 

A.

Add the target account to a default AWS KMS key.

Copy the snapshot using the customer managed key, and then share the snapshot with the target account.

Copy the shared DB snapshot from the target account.

 

B.

Add the target account to a customer (non-default) KMS key.

Copy the snapshot using the default AWS managed key, and then share the snapshot with the target account.

Copy the shared DB snapshot from the target account.

 

C.

FTP the snapshot to the new office location. Restore the snapshot to the new database.

 

D.

Add the target account to a customer (non-default) KMS key.

 

E.

Copy the snapshot using the customer managed key, and then share the snapshot with the target account.

 

F.

Copy the shared DB snapshot from the target account.

 

The key is to add the target account to a customer KMS key.

https://aws.amazon.com/premiumsupport/knowledge-center/share-encrypted-rds-snapshot-kms-key/
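
 

The copy-then-share sequence, sketched with boto3; the snapshot names, key ARN, and account IDs are placeholders:

import boto3

rds = boto3.client("rds")

# Re-encrypt the snapshot with a customer managed KMS key the target account can use.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="corp-base-dataset",
    TargetDBSnapshotIdentifier="corp-base-dataset-shared",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/11111111-2222-3333-4444-555555555555",
)

# Grant the new office's account permission to restore the snapshot.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="corp-base-dataset-shared",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],
)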

 

Question 380

 

You are creating a CloudFormation template to deploy resources in the company AWS account. You have decided to use Systems Manager Parameter Store, and specifically, a dynamic reference, to set the access control for an S3 bucket to a parameter value stored in Systems Manager Parameter Store. Which option would you use in CloudFormation to reference version 2 of a parameter named S3AccessControl that is stored in plain text?

 

A.

"AccessControl": "{{resolve:ssm:S3AccessControl:2}}"

 

B.

"AccessControl": "{{resolve:ssm-secure:S3AccessControl:2}}"

 

C.

"AccessControl": "{{resolve:param:S3AccessControl:2}}"

 

D.

"AccessControl": "{{resolve:ssm-plain:S3AccessControl:2}}"

 

CloudFormation currently supports the following dynamic reference patterns:

·         ssm, for plaintext values stored in AWS Systems Manager Parameter Store.

·         ssm-secure, for secure strings stored in AWS Systems Manager Parameter Store.

·         secretsmanager, for entire secrets or specific secret values that are stored in AWS Secrets Manager. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager

 

Question 381

 

Your company is migrating from on-premises to the AWS Cloud and will operate in a hybrid configuration until the migration is complete. One of the steps that needs to be completed is migrating their data warehouse to Amazon Redshift. This data is sensitive, and they want to establish a private connection from on-premises to Redshift. The main requirements are high-speed, reliable data migration, and these take precedence over cost and time to configure. Which option meets these requirements?

 

A.

PrivateLink Interface Endpoint

 

B.

VPC Peering

C.

Direct Connect

 

D.

Site to site VPN

 

Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. Direct Connect takes longer to set up than VPN, but the scenario states that time to set up is not a major concern.

 

Question 381

 

You have been tasked with configuring Direct Connect for a connection from the company's on-premises data center to the AWS Cloud. What features and benefits can Direct Connect provide in this configuration?

 

A.

Direct Connect can be set up in hours.

 

B.

Direct Connect utilizes the internet to transfer data.

 

C.

Direct Connect encrypts data in transit.

 

D.

Direct Connect transfers data over a secured, private, and dedicated connection.

 

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. https://aws.amazon.com/directconnect/

 

Direct Connect does not use the public internet. https://aws.amazon.com/directconnect/

 

Question 382

 

You have been hired by an online retail company as a Database Lead. One of your initial tasks is to securely store database credentials in the AWS account. The primary requirement is that the solution should provide automatic rotation of database credentials, and cost is not a factor. Which item can best meet this requirement?

 

A.

AWS Key Management Service

 

B.

Secrets Manager

 

C.

AWS Organizations

 

D.

Systems Manager Parameter Store

 

If you create your own customer master keys by using AWS KMS to encrypt your secrets, AWS charges you at the current AWS KMS rate. However, you can use the "default" key created by AWS Secrets Manager for your account for free. You can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise. https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
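
 

Enabling rotation on an existing secret can be sketched with boto3; the secret name and rotation Lambda ARN are placeholders:

import boto3

secrets = boto3.client("secretsmanager")

# Rotate the database credentials automatically every 30 days.
secrets.rotate_secret(
    SecretId="prod/retail/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 30},
)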

 

Question 383

 

You have configured a new RDS MySQL database for a small insurance company that is moving to the AWS Cloud. But your attempts to connect to the database are failing. What could be the most likely cause?

 

A.

The outbound rules on the instance Security Group are not configured properly.

 

B.

The database instance is in a private subnet, so there needs to be a NAT Gateway.

 

C.

You are not connecting from the root account.

 

D.

The inbound rules on the instance Security Group are not configured properly.

 

Security groups control the access that traffic has in and out of a DB instance. Three types of security groups are used with Amazon RDS: VPC security groups, DB security groups, and EC2-Classic security groups. In simple terms, these work as follows:

·         A VPC security group controls access to DB instances and EC2 instances inside a VPC.

·         A DB security group controls access to EC2-Classic DB instances that are not in a VPC.

·         An EC2-Classic security group controls access to an EC2 instance. For more information about EC2-Classic security groups, see EC2-Classic in the Amazon EC2 documentation.

By default, network access is disabled for a DB instance. You can specify rules in a security group that allow access from an IP address range, port, or security group. Once ingress rules are configured, the same rules apply to all DB instances that are associated with that security group. You can specify up to 20 rules in a security group. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
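
 

The missing ingress rule could be added along these lines with boto3; the security group IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic (port 3306) into the DB security group from the
# security group attached to the connecting clients.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1234567890abcd",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0app1234567890abc"}],
        }
    ],
)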

 

 

Question 384

 

You are reviewing a CloudFormation template to find a dynamic reference that points to a secret holding database parameters. There are several dynamic references in the template, but one clue you know of is that these database parameters are scheduled for automatic secret rotation every 60 days. Based on this knowledge, which type of dynamic reference should you look for in the CloudFormation template?

 

A.

ssm-secure

 

B.

secretsrotate

 

C.

secretsmanager

 

D.

ssm

 

The clue that these secrets are rotated automatically means that the secrets are stored in Secrets Manager. This is the proper dynamic reference for Secrets Manager.

CloudFormation currently supports the following dynamic reference patterns:

·         ssm, for plaintext values stored in AWS Systems Manager Parameter Store.

·         ssm-secure, for secure strings stored in AWS Systems Manager Parameter Store.

·         secretsmanager, for entire secrets or specific secret values that are stored in AWS Secrets Manager. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager

 

Question 385

 

A financial services company is migrating their applications to the AWS cloud. Compliance requirements will dictate their choice of an Amazon Relational Database Service (RDS) database. One of the main compliance requirements is that the database uses AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB cluster. Instead, you use an authentication token. Which RDS database can use IAM database authentication?

 

A.

Aurora MySQL

 

You can authenticate to your DB instance using IAM database authentication. IAM database authentication works with MariaDB, MySQL, and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

An authentication token is a unique string of characters that RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database because authentication is managed externally using IAM. You can also still use standard database authentication.

Reference: IAM Database Authentication for MariaDB, MySQL, and PostgreSQL

 

B.

Oracle

 

C.

MariaDB

 

You can authenticate to your DB instance using IAM database authentication. IAM database authentication works with MariaDB, MySQL, and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

An authentication token is a unique string of characters that RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database because authentication is managed externally using IAM. You can also still use standard database authentication.

Reference: IAM Database Authentication for MariaDB, MySQL, and PostgreSQL

 

D.

Aurora PostgreSQL

 

You can authenticate to your DB instance using IAM database authentication. IAM database authentication works with MariaDB, MySQL, and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

An authentication token is a unique string of characters that RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database because authentication is managed externally using IAM. You can also still use standard database authentication.

Reference: IAM Database Authentication for MariaDB, MySQL, and PostgreSQL

 

E.

SQL Server

 

Question 386

 

You have just configured a new RDS SQL Server database in a private subnet. The public facing WordPress instance has been configured with the database name, database username, database password, and database host information. The WordPress instance is having trouble communicating with the backend database. What are the most likely possible causes?

 

A.

You have made a mistake in the database configuration of the WordPress instance.

 

If the WordPress instance does not know the correct database name, database username, database password, and database host information, the WordPress instance won't be able to connect to the backend database even if the ingress security group rule for the database is configured to allow traffic from the WordPress instance.

 

B.

The Security Group associated with the RDS SQL Server database needs to allow ingress traffic from the WordPress instance.

 

Security Group rules can allow traffic. For custom security groups, there is a default allow outbound rule, but no inbound rules. You would want to restrict traffic to your database by allowing ingress connections from the WordPress instance (and for any database admins).

 

C.

The database instance state is not yet available.

 

For a newly created DB instance, the DB instance has a status of creating until the DB instance is ready to use. When the state changes to available, you can connect to the DB instance. Depending on the size of your DB instance, it can take up to 20 minutes before an instance is available. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Connecting

 

D.

The database should be located in the same subnet as the WordPress instance.

 

Question 387

 

A small tech company continues to grow and add offices throughout North America. Each new office location has its own AWS account. As each location starts up, a base dataset needs to be added from corporate headquarters. The corporate data team needs to send an encrypted snapshot to the new locations. What restrictions apply when sharing encrypted snapshots?

 

A.

You can’t share snapshots across AWS accounts.

 

B.

You can't share a snapshot that has been encrypted using the default AWS KMS encryption key of the AWS account that shared the snapshot.

 

You can't share a snapshot that has been encrypted using the default AWS KMS key. To share it, you would first need to copy the snapshot using a customer managed key. https://aws.amazon.com/premiumsupport/knowledge-center/share-encrypted-rds-snapshot-kms-key/

 

C.

You can’t share snapshots across AWS regions.

 

 

D.

You can't share Oracle or Microsoft SQL Server snapshots that are encrypted using Transparent Data Encryption (TDE).

 

You can't share a DB snapshot that uses the TDE option. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.AdvSecurity.html

 

E.

You can't share encrypted snapshots as public.

 

If you choose, you can make your unencrypted snapshots available publicly to all AWS users. You can't make your encrypted snapshots available publicly. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html

 

Question 388

 

After creating an RDS Aurora database, you have configured it to use IAM Database Authentication. To test the connection, you generate an authentication token. For what length of time is the token valid?

 

A.

1 hour

 

B.

1 day

 

C.

15 minutes

 

An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
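
 

Generating a token can be sketched with boto3; the endpoint and user name are placeholders:

import boto3

rds = boto3.client("rds")

# Generate a short-lived (15-minute) authentication token for an IAM database user.
token = rds.generate_db_auth_token(
    DBHostname="mydb.cluster-example.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="iam_db_user",
)
# Use the token as the password when opening the database connection.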

 

D.

2 weeks

 

 
