[June-2021] Braindump2go New DP-203 PDF and VCE Dumps Free Share (Q78-Q105)

QUESTION 78

You plan to implement an Azure Data Lake Gen2 storage account.

You need to ensure that the data lake will remain available if a data center fails in the primary Azure region.

The solution must minimize costs.

Which type of replication should you use for the storage account?


A.geo-redundant storage (GRS)

B.zone-redundant storage (ZRS)

C.locally-redundant storage (LRS)

D.geo-zone-redundant storage (GZRS)


Answer: B
Zone-redundant storage (ZRS) replicates data synchronously across three availability zones in the primary region, so it survives a data center failure at a lower cost than GRS or GZRS.


QUESTION 79

You plan to ingest streaming social media data by using Azure Stream Analytics.

The data will be stored in files in Azure Data Lake Storage, and then consumed by using Azure Databricks and PolyBase in Azure Synapse Analytics.

You need to recommend a Stream Analytics data output format to ensure that the queries from Databricks and PolyBase against the files encounter the fewest possible errors.

The solution must ensure that the files can be queried quickly and that the data type information is retained.

What should you recommend?


A.Parquet

B.Avro

C.CSV

D.JSON


Answer: A
Parquet is a columnar format that preserves data type information and is read efficiently by both Databricks and PolyBase.


QUESTION 80

You have an Azure Data Lake Storage Gen2 container that contains 100 TB of data.

You need to ensure that the data in the container is available for read workloads in a secondary region if an outage occurs in the primary region. The solution must minimize costs.

Which type of data redundancy should you use?


A.zone-redundant storage (ZRS)

B.read-access geo-redundant storage (RA-GRS)

C.locally-redundant storage (LRS)

D.geo-redundant storage (GRS)


Answer: B
Only read-access geo-redundant storage (RA-GRS) makes the data readable in the secondary region; GRS replicates the data there but does not allow reads until a failover occurs.


QUESTION 81

You have an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 contains a partitioned fact table named dbo.Sales and a staging table named stg.Sales that has the matching table and partition definitions.

You need to overwrite the content of the first partition in dbo.Sales with the content of the same partition in stg.Sales. The solution must minimize load times.

What should you do?


A.Switch the first partition from dbo.Sales to stg.Sales.

B.Switch the first partition from stg.Sales to dbo.Sales.

C.Update dbo.Sales from stg.Sales.

D.Insert the data from stg.Sales into dbo.Sales.


Answer: B
Switching the partition from stg.Sales into dbo.Sales is a metadata-only operation, so it overwrites the target partition with minimal load time; inserting the rows would copy all of the data.


QUESTION 82

You are designing a partition strategy for a fact table in an Azure Synapse Analytics dedicated SQL pool. The table has the following specifications:

- Contain sales data for 20,000 products.

- Use hash distribution on a column named ProductID.

- Contain 2.4 billion records for the years 2019 and 2020.

Which number of partition ranges provides optimal compression and performance of the clustered columnstore index?


A.40

B.240

C.400

D.2,400


Answer: A
Spreading 2.4 billion rows across the pool's 60 distributions and 40 partitions yields 1 million rows per partition per distribution, the optimal size for a clustered columnstore rowgroup.
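The sizing arithmetic can be checked with a few lines of Python. The sketch below assumes the fixed 60 distributions of a dedicated SQL pool and the roughly one-million-row optimum per columnstore rowgroup:

```python
# Rough sizing check for columnstore partitioning in a dedicated SQL pool.
# Assumptions: 60 distributions (fixed for dedicated SQL pools) and an
# optimal rowgroup size of about 1 million rows.
DISTRIBUTIONS = 60
OPTIMAL_ROWGROUP = 1_000_000
TOTAL_ROWS = 2_400_000_000

def rows_per_rowgroup(partitions: int) -> float:
    """Rows that land in each partition of each distribution."""
    return TOTAL_ROWS / (DISTRIBUTIONS * partitions)

for candidate in (40, 240, 400, 2_400):
    print(candidate, rows_per_rowgroup(candidate))
```

With 40 partitions each rowgroup holds the one-million-row optimum; with larger partition counts the rows per rowgroup fall well below it.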


QUESTION 83

You have an Azure Synapse Analytics serverless SQL pool named Pool1 and an Azure Data Lake Storage Gen2 account named storage1. The AllowedBlobPublicAccess property is disabled for storage1.

You need to create an external data source that can be used by Azure Active Directory (Azure AD) users to access storage1 from Pool1.

What should you create first?


A.an external resource pool

B.a remote service binding

C.database scoped credentials

D.an external library


Answer: C


QUESTION 84

You plan to implement an Azure Data Lake Storage Gen2 container that will contain CSV files.

The size of the files will vary based on the number of events that occur per hour.

File sizes range from 4 KB to 5 GB.

You need to ensure that the files stored in the container are optimized for batch processing.

What should you do?


A.Compress the files.

B.Merge the files.

C.Convert the files to JSON

D.Convert the files to Avro.


Answer: B
Batch processing performs best with a small number of large files; merging the many small files avoids the per-file overhead that dominates when files are only a few kilobytes.
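To illustrate the merge approach, here is a minimal Python sketch (the file contents are invented for the example) that folds many small CSV files sharing a header into one larger file better suited to batch readers:

```python
import csv
import io

def merge_csv(contents: list) -> str:
    """Concatenate CSV documents that share a header, keeping the header once."""
    out = io.StringIO()
    writer = None
    for doc in contents:
        reader = csv.reader(io.StringIO(doc))
        header = next(reader)           # every file carries the same header
        if writer is None:
            writer = csv.writer(out, lineterminator="\n")
            writer.writerow(header)     # write the header only once
        for row in reader:
            writer.writerow(row)
    return out.getvalue()

small_files = ["id,value\n1,a\n", "id,value\n2,b\n", "id,value\n3,c\n"]
print(merge_csv(small_files))
```

In practice the same logic would run in a scheduled job (for example a Databricks notebook) against the container, writing merged files of a few hundred megabytes each.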


QUESTION 85

You have an Azure Data Factory instance named DF1 that contains a pipeline named PL1. PL1 includes a tumbling window trigger.

You create five clones of PL1. You configure each clone pipeline to use a different data source. You need to ensure that the execution schedules of the clone pipeline match the execution schedule of PL1.

What should you do?


A.Add a new trigger to each cloned pipeline

B.Associate each cloned pipeline to an existing trigger.

C.Create a tumbling window trigger dependency for the trigger of PL1.

D.Modify the Concurrency setting of each pipeline.


Answer: A
A tumbling window trigger has a one-to-one relationship with a pipeline, so the clones cannot share the trigger of PL1; each cloned pipeline needs its own trigger configured with the same window settings.


QUESTION 86

You are planning a streaming data solution that will use Azure Databricks. The solution will stream sales transaction data from an online store. The solution has the following specifications:

- The output data will contain items purchased, quantity, line total sales amount, and line total tax amount.

- Line total sales amount and line total tax amount will be aggregated in Databricks.

- Sales transactions will never be updated. Instead, new rows will be added to adjust a sale.

You need to recommend an output mode for the dataset that will be processed by using Structured Streaming.

The solution must minimize duplicate data.

What should you recommend?


A.Append

B.Update

C.Complete


Answer: A
Rows are only ever added, never updated, so Append mode emits each row exactly once. Complete mode re-emits the entire result table on every trigger, producing duplicates downstream.
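The difference between the output modes can be sketched in plain Python (a simplified model of the semantics, not the actual Structured Streaming API):

```python
def append_mode(batches):
    """Append: each trigger emits only the rows that arrived in that batch."""
    emitted = []
    for batch in batches:
        emitted.extend(batch)   # each row leaves the sink exactly once
    return emitted

def complete_mode(batches):
    """Complete: each trigger re-emits the entire result table so far."""
    emitted, table = [], []
    for batch in batches:
        table.extend(batch)
        emitted.extend(table)   # earlier rows are written again every trigger
    return emitted

batches = [[("sale1", 10.0)], [("sale2", 5.0)]]
print(append_mode(batches))    # each sale appears once
print(complete_mode(batches))  # sale1 appears twice
```

Because adjustments arrive as new rows rather than updates, the append-only pattern fits this workload and keeps the sink free of re-emitted data.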


QUESTION 87

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1.

You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1.

You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1.

You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.

Solution: You use a dedicated SQL pool to create an external table that has an additional DateTime column.

Does this meet the goal?


A.Yes

B.No


Answer: B
An external table only exposes the columns that exist in the source files; it cannot add a DateTime column during the load.


QUESTION 88

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1.

You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1.

You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1.

You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.

Solution: In an Azure Synapse Analytics pipeline, you use a data flow that contains a Derived Column transformation.

Does this meet the goal?


A.Yes

B.No


Answer: A
A Derived Column transformation in a mapping data flow can add the DateTime as a new column while the rows are transformed and loaded into Table1.


QUESTION 89

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1.

You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1.

You plan to insert data from the files into Table1 and transform the data.

Each row of data in the files will produce one row in the serving layer of Table1.

You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.

Solution: In an Azure Synapse Analytics pipeline, you use a Get Metadata activity that retrieves the DateTime of the files.

Does this meet the goal?


A.Yes

B.No


Answer: B


QUESTION 90

You have a C# application that processes data from an Azure IoT hub and performs complex transformations.

You need to replace the application with a real-time solution. The solution must reuse as much code as possible from the existing application.

What should you recommend?


A.Azure Databricks

B.Azure Event Grid

C.Azure Stream Analytics

D.Azure Data Factory


Answer: C


QUESTION 91

You have several Azure Data Factory pipelines that contain a mix of the following types of activities:

* Wrangling data flow

* Notebook

* Copy

* jar

Which two Azure services should you use to debug the activities? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.


A.Azure HDInsight

B.Azure Databricks

C.Azure Machine Learning

D.Azure Data Factory

E.Azure Synapse Analytics


Answer: BD
Wrangling data flows and Copy activities are authored and debugged in Azure Data Factory, while Notebook and Jar activities run on, and are debugged in, Azure Databricks.


QUESTION 92

You use Azure Stream Analytics to receive Twitter data from Azure Event Hubs and to output the data to an Azure Blob storage account.

You need to output the count of tweets during the last five minutes every five minutes. Each tweet must only be counted once.

Which windowing function should you use?


A.a five-minute Session window

B.a five-minute Sliding window

C.a five-minute Tumbling window

D.a five-minute Hopping window that has a one-minute hop


Answer: C
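A tumbling window partitions time into fixed, non-overlapping buckets, so every event is counted exactly once. A minimal Python model of five-minute tumbling counts (timestamps in seconds for simplicity):

```python
from collections import Counter

def tumbling_counts(timestamps, window_seconds=300):
    """Assign each event to exactly one fixed, non-overlapping window
    and count the events per window start."""
    return Counter(ts - ts % window_seconds for ts in timestamps)

# Events at 0s, 10s, and 299s fall in window [0, 300);
# 300s starts the next window; 650s falls in [600, 900).
events = [0, 10, 299, 300, 650]
print(tumbling_counts(events))
```

A sliding or hopping window would assign some events to more than one window, violating the count-each-tweet-once requirement; a session window would not fire on a fixed five-minute schedule.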


QUESTION 93

You have an Azure Stream Analytics query.

The query returns a result set that contains 10,000 distinct values for a column named clusterID.

You monitor the Stream Analytics job and discover high latency.

You need to reduce the latency.

Which two actions should you perform? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.


A.Add a pass-through query.

B.Add a temporal analytic function.

C.Scale out the query by using PARTITION BY.

D.Convert the query to a reference query.

E.Increase the number of streaming units.


Answer: CE


QUESTION 94

You are designing a solution that will copy Parquet files stored in an Azure Blob storage account to an Azure Data Lake Storage Gen2 account.

The data will be loaded daily to the data lake and will use a folder structure of {Year}/{Month}/{Day}/.

You need to design a daily Azure Data Factory data load to minimize the data transfer between the two accounts.

Which two configurations should you include in the design? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.


A.Delete the files in the destination before loading new data.

B.Filter by the last modified date of the source files.

C.Delete the source files after they are copied.

D.Specify a file naming pattern for the destination.


Answer: BD
Filtering by the last modified date copies only the files added since the previous run, and a destination naming pattern writes them into the {Year}/{Month}/{Day}/ folder structure. Deleting source files does not reduce the data transferred.
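The last-modified filter is what makes the daily load incremental. A hypothetical Python sketch of the selection logic (the file names and dates are invented for the example):

```python
from datetime import datetime, timedelta

def files_to_copy(files, window_end, window_hours=24):
    """Keep only files modified during the last load window,
    so each daily run copies only newly arrived data."""
    window_start = window_end - timedelta(hours=window_hours)
    return [name for name, modified in files
            if window_start <= modified < window_end]

run = datetime(2021, 6, 2)
listing = [
    ("sales_0601.parquet", datetime(2021, 6, 1, 8)),   # new since last run
    ("sales_0530.parquet", datetime(2021, 5, 30, 8)),  # already copied earlier
]
print(files_to_copy(listing, run))
```

In Data Factory the same effect comes from the Copy activity's "Filter by last modified" source setting, with the trigger's window start and end supplying the boundaries.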


QUESTION 95

A company purchases IoT devices to monitor manufacturing machinery.

The company uses an IoT appliance to communicate with the IoT devices.

The company must be able to monitor the devices in real-time.

You need to design the solution.

What should you recommend?


A.Azure Stream Analytics cloud job using Azure PowerShell

B.Azure Analysis Services using Azure Portal

C.Azure Data Factory instance using Azure Portal

D.Azure Analysis Services using Azure PowerShell


Answer: A


QUESTION 96

You are designing a statistical analysis solution that will use custom proprietary Python functions on near real-time data from Azure Event Hubs.

You need to recommend which Azure service to use to perform the statistical analysis. The solution must minimize latency.

What should you recommend?


A.Azure Stream Analytics

B.Azure SQL Database

C.Azure Databricks

D.Azure Synapse Analytics


Answer: C
Azure Stream Analytics cannot run custom Python functions; Azure Databricks can execute the proprietary Python code against the Event Hubs stream with low latency.


QUESTION 97

You are designing an Azure Databricks interactive cluster. The cluster will be used infrequently and will be configured for auto-termination.

You need to ensure that the cluster configuration is retained indefinitely after the cluster is terminated. The solution must minimize costs.

What should you do?


A.Clone the cluster after it is terminated.

B.Terminate the cluster manually when processing completes.

C.Create an Azure runbook that starts the cluster every 90 days.

D.Pin the cluster.


Answer: D


QUESTION 98

You have an Azure Synapse Analytics job that uses Scala.

You need to view the status of the job.

What should you do?


A.From Azure Monitor, run a Kusto query against the AzureDiagnostics table.

B.From Azure Monitor, run a Kusto query against the SparkLoggingEvent_CL table.

C.From Synapse Studio, select the workspace. From Monitor, select Apache Spark applications.

D.From Synapse Studio, select the workspace. From Monitor, select SQL requests.


Answer: C


QUESTION 99

You configure monitoring for a Microsoft Azure SQL Data Warehouse implementation. The implementation uses PolyBase to load data from comma-separated value (CSV) files stored in Azure Data Lake Gen 2 using an external table.

Files with an invalid schema cause errors to occur.

You need to monitor for an invalid schema error.

For which error should you monitor?


A.EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [com.microsoft.polybase.client.KerberosSecureLogin] occurred while accessing external files.'

B.EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [No FileSystem for scheme: wasbs] occurred while accessing external file.'

C.Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "(null)". Query aborted - the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.

D.EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_Connect: Error [Unable to instantiate LoginClass] occurred while accessing external files.'


Answer: C


QUESTION 100

You use Azure Data Lake Storage Gen2.

You need to ensure that workloads can use filter predicates and column projections to filter data at the time the data is read from disk.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.


A.Reregister the Microsoft Data Lake Store resource provider.

B.Reregister the Azure Storage resource provider.

C.Create a storage policy that is scoped to a container.

D.Register the query acceleration feature.

E.Create a storage policy that is scoped to a container prefix filter.


Answer: BD
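Query acceleration lets the storage service apply the filter predicate and the column projection before any data leaves the account. A pure-Python model of the idea (not the actual SDK call, which submits a SQL-like expression to the Blob service):

```python
def accelerated_read(rows, columns, predicate):
    """Return only the requested columns of the rows matching the predicate,
    modelling the server-side filtering that query acceleration performs
    before data crosses the network."""
    return [{c: row[c] for c in columns} for row in rows if predicate(row)]

data = [
    {"id": 1, "region": "east", "amount": 50},
    {"id": 2, "region": "west", "amount": 75},
]
# Only matching rows, and only the projected column, are returned.
print(accelerated_read(data, ["id"], lambda r: r["region"] == "east"))
```

Because the filtering happens inside the storage service, the client downloads a fraction of the file instead of reading everything and discarding most of it locally.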


QUESTION 101

You have an enterprise data warehouse in Azure Synapse Analytics named DW1 on a server named Server1.

You need to verify whether the size of the transaction log file for each distribution of DW1 is smaller than 160 GB.

What should you do?


A.On the master database, execute a query against the sys.dm_pdw_nodes_os_performance_counters dynamic management view.

B.From Azure Monitor in the Azure portal, execute a query against the logs of DW1.

C.On DW1, execute a query against the sys.database_files dynamic management view.

D.Execute a query against the logs of DW1 by using the Get-AzOperationalInsightSearchResult PowerShell cmdlet.


Answer: A


QUESTION 102

You have a SQL pool in Azure Synapse.

A user reports that queries against the pool take longer than expected to complete.

You need to add monitoring to the underlying storage to help diagnose the issue.

Which two metrics should you monitor? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.


A.Cache used percentage

B.DWU Limit

C.Snapshot Storage Size

D.Active queries

E.Cache hit percentage


Answer: AE


QUESTION 103

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains 100 GB of files. The files contain text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.

You plan to copy the data from the storage account to an Azure SQL data warehouse.

You need to prepare the files to ensure that the data copies quickly.

Solution: You modify the files to ensure that each row is more than 1 MB.

Does this meet the goal?


A.Yes

B.No


Answer: B


QUESTION 104

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains 100 GB of files. The files contain text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.

You plan to copy the data from the storage account to an Azure SQL data warehouse.

You need to prepare the files to ensure that the data copies quickly.

Solution: You modify the files to ensure that each row is less than 1 MB.

Does this meet the goal?


A.Yes

B.No


Answer: A
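A simple pre-load check for the 1 MB row limit can be sketched in Python (the sample rows are invented; PolyBase rejects rows whose serialized size exceeds 1 MB):

```python
MAX_ROW_BYTES = 1_000_000  # PolyBase's per-row limit is 1 MB

def oversized_rows(lines):
    """Return the 1-based indexes of rows whose encoded size exceeds the limit."""
    return [i for i, line in enumerate(lines, start=1)
            if len(line.encode("utf-8")) > MAX_ROW_BYTES]

rows = ["id,description", "1,short text", "2," + "x" * 1_100_000]
print(oversized_rows(rows))
```

Running such a check before the copy identifies the description-heavy rows that would otherwise slow or fail the PolyBase load.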


QUESTION 105

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains 100 GB of files. The files contain text and numerical values. 75% of the rows contain description data that has an average length of 1.1 MB.

You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You convert the files to compressed delimited text files.

Does this meet the goal?


A.Yes

B.No


Answer: A
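The effect of compressing delimited text is easy to demonstrate with the standard gzip module (the sample row is invented, and real data compresses less than this repetitive example):

```python
import gzip

# Build a block of repetitive delimited text and compare sizes.
row = "1,example description\n"
raw = (row * 10_000).encode("utf-8")
packed = gzip.compress(raw)
ratio = len(packed) / len(raw)
print(f"{len(raw)} -> {len(packed)} bytes ({ratio:.1%})")
```

Fewer bytes cross the network during the copy, and PolyBase can load the compressed delimited files directly, so compression speeds up the transfer without an extra decompression step.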


