az-204 practice-001

https://esi.microsoft.com/getcertification

https://learn.microsoft.com/en-us/credentials/certifications/exams/az-204/practice/results?assessmentId=35&snapshotId=2aa50094-0358-4888-94e1-64ede4934b80

Microsoft Certified: Azure Developer Associate - AZ-204

Microsoft Azure Developers design, build, test, and maintain cloud solutions, such as applications and services, partnering with cloud solution architects, cloud DBAs, cloud administrators, and clients to implement these solutions.

Question 1 of 50

You plan to use Azure API Management for Hybrid and multicloud API management.

You need to create a self-hosted gateway for production.

Which container image tag should you use?

2.1.0

This item tests the candidate’s knowledge of self-hosted gateways in Azure API Management.

In production, the version must be pinned, which is only possible by using a tag that follows the {major}.{minor}.{patch} convention, such as 2.1.0. The v3 tag always runs the latest major version with every new feature and patch. The latest tag is intended for evaluating the self-hosted gateway. The v3-preview tag runs the latest preview container image.

Explore API Management - Training | Microsoft Learn

Self-hosted gateway overview | Microsoft Learn
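For reference, a minimal deployment sketch with the version pinned; the env.conf file (holding the gateway endpoint and auth token) and the container name are assumptions, not part of the question:

```shell
# Run the self-hosted gateway pinned to a specific {major}.{minor}.{patch} tag
docker run -d -p 8080:8080 -p 8081:8081 \
  --name apim-gateway \
  --env-file env.conf \
  mcr.microsoft.com/azure-api-management/gateway:2.1.0
```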

 

Question 2 of 50

You have an Azure event hub.

You need to add partitions to the event hub.

Which code segment should you use?

 `az eventhubs eventhub update --resource-group MyResourceGroupName --namespace-name MyNamespaceName --name MyEventHubName --partition-count 12`

The event hub already exists, so use the update command.

This item tests the candidate’s knowledge of developing event-based solutions.

The code segment that includes az eventhubs eventhub update adds partitions to an existing event hub. The code segment that includes az eventhubs eventhub consumer-group update updates an event hub consumer group. The code segment that includes az eventhubs eventhub consumer-group create creates an event hub consumer group. The code segment that includes az eventhubs eventhub create creates a new event hub with partitions; it does not change an existing one.

Control Azure services with the CLI - Training | Microsoft Learn

az eventhubs eventhub | Microsoft Learn

 

Question 3 of 50

You plan to implement event routing in your Azure subscription by using Azure Event Grid. An event is generated each time an Azure resource is deleted. A message corresponding to the event is automatically displayed in an Azure App Service web app you deployed into the same Azure subscription.

You create a custom topic.

You need to subscribe to the custom topic.

What should you do first?

Create a message endpoint

This item tests the candidate’s knowledge of setting up Azure Event Grid subscriptions, which is an integral part of implementing solutions that use Azure Event Grid.

Before subscribing to the custom topic, you need to create an endpoint for event messages. The Azure App Service web app acts as the event handler in this case, so this task is already completed. The Azure Event Grid resource provider is already enabled at this point because this is a prerequisite for creating a custom topic. Event filtering is part of configuring an event subscription, so it takes place either during or after provisioning of the subscription.

Exercise: Route custom events to web endpoint by using Azure CLI - Training | Microsoft Learn

Quickstart: Send custom events with Event Grid and Azure CLI - Azure Event Grid | Microsoft Learn
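As a sketch, once the endpoint exists, the subscription can reference the web app. Resource names and the endpoint URL below are placeholders, not values from the question:

```shell
# Look up the custom topic's resource ID
topicId=$(az eventgrid topic show --name mytopic \
  --resource-group myResourceGroup --query id --output tsv)

# Subscribe the web app's message endpoint to the custom topic
az eventgrid event-subscription create \
  --source-resource-id "$topicId" \
  --name mysubscription \
  --endpoint https://mywebapp.azurewebsites.net/api/updates
```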

Question 4 of 50

This item tests the candidate’s knowledge of reading events from Azure Event Hubs.

Inserting the code segment that includes startingPosition = EventPosition.Earliest at line 6 uses the earliest starting position, which is required to read all published events. Inserting the code segment that includes string partitionId = (await consumer.GetPartitionIdsAsync()).First(); at line 7 is required. The GetPartitionIdsAsync() method returns a string[]. The First() method will, therefore, return a string. The code segment at line 6 that uses startingPosition = EventPosition.Latest does not use the earliest starting position. The code segment at line 7 that includes int partitionId is incorrect because the GetPartitionIdsAsync() method returns a string[]. The First() method will, therefore, return a string, and not an int, as the return variable expects.

Perform common operations with the Event Hubs client library - Training | Microsoft Learn

EventHubProducerClient.GetPartitionIdsAsync(CancellationToken) Method (Azure.Messaging.EventHubs.Producer) - Azure for .NET Developers | Microsoft Learn

EventPosition.Earliest Property (Azure.Messaging.EventHubs.Consumer) - Azure for .NET Developers | Microsoft Learn

Question 5 of 50

You have an instance of Azure Event Grid.

You need to ensure an application can receive events filtered by values in the data field in the advanced filtering options.

Which filter should you use?

Advanced

This item tests the candidate’s knowledge of using event filters, which is part of developing event-based solutions.

An advanced filter is used to filter events by values in the data fields and specify the comparison operator. An event type filter is used to send only certain event types to the endpoint. A subject filter is used to specify a starting or ending value for the subject. Topics is not a type of filter; the event grid topic provides an endpoint where the source sends events.

Filter events - Training | Microsoft Learn

Azure Event Grid concepts - Azure Event Grid | Microsoft Learn
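A hedged CLI sketch of an advanced filter on a data field; the field name, values, and endpoint are illustrative placeholders:

```shell
# Deliver only events whose data.color field matches one of the listed values
az eventgrid event-subscription create \
  --source-resource-id "$topicId" \
  --name filteredsub \
  --endpoint https://mywebapp.azurewebsites.net/api/updates \
  --advanced-filter data.color StringIn blue red green
```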

 Question 6 of 50

Create a new topic with a default time to live of 15 minutes. Send the messages to this topic.

This question tests the candidate's knowledge of Azure Service Bus message expiration.

To avoid affecting existing applications, the time to live of the existing topic must not be changed; a new topic needs to be created. Changing the existing topic's default time to live would affect other applications, and a message-level time to live cannot be higher than the topic's time to live.

Exercise: Send and receive message from a Service Bus queue by using .NET. - Training | Microsoft Learn

ServiceBusMessage.TimeToLive Property (Azure.Messaging.ServiceBus) - Azure for .NET Developers | Microsoft Learn
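A minimal sketch of creating the new topic, assuming placeholder resource names; the TTL is an ISO 8601 duration:

```shell
# Create a separate topic with a default time to live of 15 minutes
az servicebus topic create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name shortlived-topic \
  --default-message-time-to-live PT15M
```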

 

Question 7 of 50

You need to write a filter condition for an Azure Service Bus topic.

Which three filters can you use? Each correct answer presents a complete solution.

SQL
Boolean
Correlation

This item tests the candidate’s knowledge of implementing solutions that use Azure Service Bus.

A SqlFilter holds a SQL-like conditional expression that is evaluated in the broker against the arriving message’s user-defined properties and system properties. The TrueFilter and FalseFilter either cause all arriving messages (true) or none of the arriving messages (false) to be selected for the subscription. A CorrelationFilter holds a set of conditions that are matched against one or more of an arriving message's user and system properties. Size and Content are not valid filter types for Service Bus topics.

Implement message-based communication workflows with Azure Service Bus - Training | Microsoft Learn

Azure Service Bus topic filters - Azure Service Bus | Microsoft Learn
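As an illustration, a SQL filter can be attached to a subscription as a rule; all names and the filter expression below are placeholders:

```shell
# Add a SQL filter rule evaluated against message properties
az servicebus topic subscription rule create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --topic-name mytopic \
  --subscription-name mysub \
  --name HighPriority \
  --filter-sql-expression "Priority = 'High'"
```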

 

Question 8 of 50

You have an application that requires message queuing.

You need to recommend a solution that meets the following requirements:

  • automatic duplicate message detection.
  • ability to send 2 MB messages.

Which message queuing solution should you recommend?

Azure Service Bus Premium tier

This item tests the candidate's knowledge of Azure Service Bus.

Service Bus detects duplicate messages, and the Premium tier is required to send messages larger than 256 KB. Although the Standard tier also detects duplicate messages, it only supports messages up to 256 KB in size. Azure Storage queues do not support duplicate message detection.

Explore Azure Service Bus - Training | Microsoft Learn

Compare Azure Storage queues and Service Bus queues - Azure Service Bus | Microsoft Learn
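A sketch of the recommended setup with placeholder names: a Premium namespace for large messages, plus duplicate detection enabled on the queue:

```shell
# Premium tier supports messages larger than 256 KB
az servicebus namespace create --resource-group myResourceGroup \
  --name myNamespace --sku Premium

# Enable duplicate message detection on the queue
az servicebus queue create --resource-group myResourceGroup \
  --namespace-name myNamespace --name myqueue \
  --enable-duplicate-detection true
```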

 Question 9 of 50

You plan to use a shared access signature to protect access to services within a general-purpose v2 storage account.

You need to identify the type of service that you can protect by using the user delegation shared access signature.

Which service should you identify?

 Blob

This item tests the candidate’s knowledge of identifying the supported authorization method, which is the first step of implementing it.

The blob service is the only one that supports user delegation shared access signatures. The file service supports account and service shared access signatures. The queue service supports account and service shared access signatures. The table service supports account and service shared access signatures.

Discover shared access signatures - Training | Microsoft Learn

Grant limited access to data with shared access signatures (SAS) - Azure Storage | Microsoft Learn
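For reference, a user delegation SAS is signed with Microsoft Entra credentials rather than the account key; account and container names below are placeholders:

```shell
# --auth-mode login with --as-user produces a user delegation SAS for the container
az storage container generate-sas \
  --account-name mystorageaccount \
  --name mycontainer \
  --permissions r \
  --expiry 2030-01-01 \
  --auth-mode login \
  --as-user
```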

 

Question 10 of 50

You plan to use Microsoft Graph to retrieve a list of users in a Microsoft Entra ID tenant.

You need to optimize query results.

Which two query options should you use? Each correct answer presents part of the solution.

$filter
$select

This item tests the candidate's knowledge of Microsoft Graph query options.

The $filter query option must be used to limit the results returned. The $select query option limits the attributes projected from the result set, making the query more efficient. The $count query option is meant to retrieve the total count of matching resources. The $expand query option is used to retrieve related resources.

Query Microsoft Graph by using REST - Training | Microsoft Learn

Paging Microsoft Graph data in your app - Microsoft Graph | Microsoft Learn
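A hedged sketch of combining both query options in a Graph REST call; $TOKEN stands for a previously acquired access token, and the filter values are illustrative:

```shell
# $filter limits which users are returned; $select limits the projected attributes
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/users?\$filter=startswith(displayName,'A')&\$select=id,displayName,mail"
```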

 

 Question 11 of 50

You have an Azure Storage account.

You need to provide external users the ability to create and update blobs.

Which enum value of BlobSasPermissions should you use?

 Write

This item tests the candidate’s knowledge of creating and implementing blobs.

The Write permission will allow users to create and update blobs. The Add permission is only applicable for append blobs. The Create permission only allows users to create blobs. It does not allow users to update blobs. The Read permission does not allow users to create and update blobs.

Control access to Azure Storage with shared access signatures - Training | Microsoft Learn

Create a service SAS for a container or blob - Azure Storage | Microsoft Learn

Question 12 of 50

You manage an Azure App Service web app named app1. App1 is registered as a multi-tenant application in a Microsoft Entra ID tenant named tenant1.

You need to grant app1 the permission to access the Microsoft Graph API in tenant1.

Which service principal should you use?

application

This item tests the candidate’s knowledge of accessing user data from Microsoft Graph, which is part of implementing user authentication and authorization.

A Microsoft Entra ID application is defined by its one and only application object, which resides in the Microsoft Entra ID tenant where the application was registered (known as the application's home tenant). The application service principal is used to configure permission for app1 in tenant1 to access the Microsoft Graph API.

A legacy service principal represents an app created before app registrations were introduced or through legacy experiences.

Managed identities eliminate the need to manage credentials in code. A system-assigned managed identity is restricted to one per resource and is tied to the lifecycle of that resource, while a user-assigned managed identity can be created and assigned to one or more instances of an Azure service.

The legacy service principal, system-assigned managed identity, and user-assigned managed identity cannot be used to grant app1 permission to access the Microsoft Graph API in tenant1.

Explore the Microsoft identity platform - Training | Microsoft Learn

Explore service principals - Training | Microsoft Learn

Apps & service principals in Azure AD - Microsoft Entra | Microsoft Learn

 

Question 13 of 50

You have blobs in an Azure storage account.

You need to implement a stored access policy that will apply to shared access signatures generated for the blobs.

To which type of storage resource should you associate the policy?

the container that is hosting blobs

This item tests the candidate’s knowledge of configuring stored access policy, which is part of implementing authorization.

The container that is hosting blobs is used for associating the corresponding stored access policies.

The storage account can be associated with shared access signatures keys but not stored access policies.

The blob service of the storage account can be associated with shared access signatures keys but not stored access policies.

Each individual blob can be associated with shared access signatures keys but not stored access policies.

Use stored access policies to delegate access to Azure Storage - Training | Microsoft Learn

Define a stored access policy - Azure Storage | Microsoft Learn
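As a sketch, the policy is created on the container and then referenced when generating the SAS; account, container, blob, and policy names are placeholders:

```shell
# Create a stored access policy on the container
az storage container policy create \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name read-policy \
  --permissions r \
  --expiry 2030-01-01

# Issue a SAS for a blob that inherits its terms from the policy
az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name myblob \
  --policy-name read-policy
```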

 

Question 14 of 50

You need to generate a shared access signature token that grants the Read permission to a blob container.

Which code segment should you use?

```
BlobSasBuilder sasBuilder = new BlobSasBuilder()
{
    BlobContainerName = containerClient.Name,
    Resource = "c"
};
sasBuilder.ExpiresOn = DateTimeOffset.UtcNow.AddHours(1);
sasBuilder.SetPermissions(BlobContainerSasPermissions.Read);
Uri sasUri = containerClient.GenerateSasUri(sasBuilder);
```

This item tests the candidate’s knowledge of creating and implementing shared access signatures.

The code segment that includes Resource = "c" and sasBuilder.SetPermissions(BlobContainerSasPermissions.Read); will generate a shared access signature token that grants the Read permission to a blob container. The code segment that includes Resource = "b" will generate a shared access signature token at the blob level. The code segments that include sasBuilder.SetPermissions(BlobContainerSasPermissions.Create); will generate a shared access signature token with the Create permission instead of Read.

Store data in Azure learning path - Training | Microsoft Learn

Create a service SAS for a container or blob - Azure Storage | Microsoft Learn

BlobSasBuilder.Resource Property

Specifies which resources are accessible via the shared access signature.

Specify "b" if the shared resource is a blob. This grants access to the content and metadata of the blob.

Specify "c" if the shared resource is a blob container. This grants access to the content and metadata of any blob in the container, and to the list of blobs in the container.

Beginning in version 2018-11-09, specify "bs" if the shared resource is a blob snapshot. This grants access to the content and metadata of the specific snapshot, but not the corresponding root blob.

Beginning in version 2019-12-12, specify "bv" if the shared resource is a blob version. This grants access to the content and metadata of the specific version, but not the corresponding root blob.

 

Question 15 of 50

You have 10 applications running in Azure App Service.

You need to ensure the applications have access to items stored in Azure App Configuration by using a common configuration. Passwords or keys must not be used.

Which solution should you use?

 user-assigned managed identity

This item tests the candidate's knowledge of managed identities.

A user-assigned managed identity can be shared across applications, so all 10 apps can be associated with the same identity and common configuration, with no keys or passwords.

System-assigned managed identities use a new identity for each application, which does not meet the common configuration requirement.

A service principal has keys that need to be rotated.

The developer does not run the application, so the developer’s identity cannot be assumed.

Implement Azure App Configuration - Training | Microsoft Learn

Managed identities - Azure App Service | Microsoft Learn
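A hedged sketch of sharing one identity across the apps; the subscription ID, resource group, and identity name are placeholders (the assign step would be repeated per app):

```shell
# Create one identity to be shared by all 10 apps
az identity create --resource-group myResourceGroup --name shared-identity

# Attach the shared identity to a web app
az webapp identity assign --resource-group myResourceGroup --name app1 \
  --identities /subscriptions/<sub-id>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/shared-identity
```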

 

Question 16 of 50

You need to group keys in Azure App Configuration.

What are two possible ways to achieve this goal? Each correct answer presents a complete solution.

Organize keys by using key prefixes.

Organize keys by using labels.

This item tests the candidate’s knowledge of best practices when working with keys in Azure App Configuration.

Key prefixes are the beginning parts of keys. A set of keys can be grouped by using the same prefix in names.

Labels are an attribute on keys. Labels are used to create variants of a key. For example, labels can be assigned to multiple versions of a key.

Authorizing role-based access control to read Azure App Configuration is not a valid way to group keys.

Authorizing a managed identity to read Azure App Configuration is not a valid way to group keys.

Implement Azure App Configuration - Training | Microsoft Learn

Azure App Configuration best practices | Microsoft Learn
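A sketch of both grouping techniques with placeholder store, key, and label names:

```shell
# Group by key prefix (App1:) and create per-environment variants with labels
az appconfig kv set --name myAppConfig \
  --key App1:Settings:BackgroundColor --value Green --label dev --yes
az appconfig kv set --name myAppConfig \
  --key App1:Settings:BackgroundColor --value Blue --label prod --yes

# List everything under the App1: prefix for the dev label
az appconfig kv list --name myAppConfig --key "App1:*" --label dev
```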

 

Question 17 of 50

You manage an Azure App Service web app named app1 and an Azure Key Vault named vault1.

You need to ensure app1 can authenticate and conduct operations with vault1 without managing the rotation of a secret.

Which authentication method should you use for app1?

 system-assigned managed identity

This item tests the candidate’s knowledge of implementing Azure Key Vault, which is part of implementing Secure Cloud solutions.

A system-assigned managed identity can be used to ensure app1 can authenticate and perform operations with vault1 without managing rotation of a secret.

A user-assigned managed identity can be used to ensure app1 can authenticate and perform operations with vault1, but the secret rotation needs to be managed.

A service principal and a secret can be used to authenticate to the key vault, but it is difficult to automatically rotate the secret that is used to authenticate to the key vault.

A service principal and an associated certificate with access to the key vault can be used for authentication but would require managing the rotation of a secret.

Implement Azure Key Vault - Training | Microsoft Learn

Azure Key Vault soft-delete | Microsoft Learn

Assign an Azure Key Vault access policy (CLI) | Microsoft Learn
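A minimal sketch assuming the vault uses the access policy permission model; resource names match the question, the resource group is a placeholder:

```shell
# Enable the system-assigned identity on app1 and capture its principal ID
principalId=$(az webapp identity assign \
  --resource-group myResourceGroup --name app1 \
  --query principalId --output tsv)

# Grant that identity secret permissions on vault1; no secret to rotate
az keyvault set-policy --name vault1 \
  --object-id "$principalId" \
  --secret-permissions get list
```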

 

Question 18 of 50

A company plans to use Azure App Configuration for feature flags in an application.

The company has the following encryption requirements:

  • customer-managed keys
  • hardware security module (HSM)-protected keys

You need to recommend service tiers.

Which two tiers should you recommend? Each correct answer presents part of the solution.

Azure App Configuration Standard tier

Azure Key Vault Premium tier

This item tests the candidate’s knowledge of the service tiers for Azure App Configuration and Azure Key Vault.

App Configuration Standard tier must be used for customer-managed keys to be used in App Configuration. Key Vault Premium tier is required to support HSM-protected keys. App Configuration Free tier does not allow the use of customer-managed keys. Key Vault Standard tier does not support HSM-protected keys.

Secure app configuration data - Training | Microsoft Learn

Azure Managed HSM Overview - Azure Managed HSM | Microsoft Learn

 

Question 19 of 50

You have an Azure App Configuration instance named AppConfig1 and an Azure key vault named KeyVault1.

You plan to encrypt data stored in AppConfig1 by using your own key stored in KeyVault1.

You need to grant permissions in KeyVault1 to the identity assigned to AppConfig1.

Which three key-specific permissions should you use? Each correct answer presents part of the solution.

get, wrap, unwrap

This item tests the candidate’s knowledge of implementing bring your own key encryption scenarios, which is an essential part of implementing secure cloud solutions.

To use the custom key stored in KeyVault1, the identity assigned to AppConfig1 needs to have GET, WRAP, and UNWRAP permissions to the custom key. The DECRYPT and ENCRYPT permissions are not required to use the custom key stored in KeyVault1 in this scenario.

Secure app configuration data - Training | Microsoft Learn

Use customer-managed keys to encrypt your configuration data | Microsoft Learn
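A hedged sketch of wiring this up; the resource group and key name are placeholders, and in the CLI the three permissions are spelled get, wrapKey, and unwrapKey:

```shell
# Assign an identity to AppConfig1 and capture its principal ID
identityId=$(az appconfig identity assign --name AppConfig1 \
  --resource-group myResourceGroup --query principalId --output tsv)

# Grant the identity the three required key permissions on KeyVault1
az keyvault set-policy --name KeyVault1 \
  --object-id "$identityId" \
  --key-permissions get wrapKey unwrapKey

# Point AppConfig1 at the customer-managed key
az appconfig update --name AppConfig1 --resource-group myResourceGroup \
  --encryption-key-name mykey \
  --encryption-key-vault https://keyvault1.vault.azure.net
```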

 

Question 20 of 50

You develop a web application hosted on the Web Apps feature of Microsoft Azure App Service.

You need to enable and configure Azure Web Service Local Cache with 1.5 GB.

Which two code segments should you use? Each correct answer presents part of the solution.

`"WEBSITE_LOCAL_CACHE_OPTION": "Always"`

`"WEBSITE_LOCAL_CACHE_SIZEINMB": "1500"`

This item tests the candidate’s knowledge of configuring the settings of the Web Apps feature of Azure App Service.

By using WEBSITE_LOCAL_CACHE_OPTION = Always, local cache is enabled. WEBSITE_LOCAL_CACHE_SIZEINMB = 1500 configures the local cache with 1.5 GB, since the setting is expressed in MB. Enable is not a valid value for WEBSITE_LOCAL_CACHE_OPTION, and a value of 1.5 would not configure 1.5 GB for the local cache.

Configure web app settings - Training | Microsoft Learn

Local cache - Azure App Service | Microsoft Learn
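For reference, both settings can be applied in one CLI call; the resource group and app name are placeholders:

```shell
# Enable local cache and size it at 1.5 GB (1500 MB)
az webapp config appsettings set \
  --resource-group myResourceGroup --name myWebApp \
  --settings WEBSITE_LOCAL_CACHE_OPTION=Always WEBSITE_LOCAL_CACHE_SIZEINMB=1500
```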

Question 21 of 50

You plan to develop an Azure App Service web app named app1 by using a Windows custom container.

You need to load a TLS/SSL certificate in application code.

Which app setting should you configure?

`WEBSITE_LOAD_CERTIFICATES`

This item tests the candidate’s knowledge of configuring app settings, which is part of creating Azure App Service Web Apps.

The WEBSITE_LOAD_CERTIFICATES app setting makes the specified certificates accessible to Windows or Linux custom containers as files.

The WEBSITE_ROOT_CERTS_PATH app setting is read-only and does not allow comma-separated thumbprint values to be mentioned to the certificates and then be loaded in the code.

The WEBSITE_AUTH_TOKEN_CONTAINER_SASURL app setting is used to instruct the auth module to store and load all encrypted tokens to the specified blob storage container. This setting is used for Azure Storage and cannot be used to load certificates inside a Windows custom container.

Configure web app settings - Training | Microsoft Learn

Environment variables and app settings reference - Azure App Service | Microsoft Learn

Use a TLS/SSL certificate in code - Azure App Service | Microsoft Learn

Question 22 of 50

You manage an Azure App Service web app named app1. App1 uses a service plan based on the Basic pricing tier.

You need to create a deployment slot for app1.

What should you do first?

Scale up app1

This item tests the candidate’s knowledge of creating deployment slots, which ties directly to the pricing tier used by Azure App Service web apps. This is configured as part of the Azure App Service web app creation.

Deployment slots require at least the Standard pricing tier, so app1 must first be scaled up to support them.

Scaling out app1 provisions more instances of app1, but it does not provide the ability to create its deployment slot.

Automated deployment of app1 with Azure DevOps or GitHub is not a prerequisite of support for deployment slots, but it commonly is the reason for implementing them.

Examine Azure App Service - Training | Microsoft Learn

Deployment best practices - Azure App Service | Microsoft Learn
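A sketch of the two steps with placeholder plan, group, and slot names:

```shell
# Scale the plan up from Basic to Standard
az appservice plan update --resource-group myResourceGroup \
  --name myPlan --sku S1

# Then the deployment slot can be created
az webapp deployment slot create --resource-group myResourceGroup \
  --name app1 --slot staging
```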

 

Question 23 of 50

You need to configure a web app to allow external requests from https://myapps.com.

Which Azure CLI command should you use?

 `az webapp cors add -g MyResourceGroup -n MyWebApp --allowed-origins https://myapps.com`

This item tests the candidate’s knowledge of configuring web app settings.

The code segment that includes cors add configures CORS to allow requests from https://myapps.com. The code segment that includes identity add adds a managed identity to a web app. The code segment that includes traffic-routing set configures traffic routing to a deployment slot. The code segment that includes access-restriction add adds an access restriction to a web app.

Control Azure services with the CLI - Training | Microsoft Learn

az webapp config access-restriction | Microsoft Learn

 

 Question 24 of 50

You plan to create an Azure function app named app1.

You need to ensure that app1 will satisfy the following requirements:

  • Supports automatic scaling.
  • Has event-based scaling behavior.
  • Provides a serverless pricing model.

Which hosting plan should you use?

 Consumption

This item tests the candidate’s knowledge of selecting the appropriate hosting plan, which is part of the implementation of Azure Functions.

The Consumption hosting plan satisfies all requirements. It supports autoscaling, has event-based scaling behavior, and provides a serverless pricing model.

The App Service, App Service Environment, and Functions Premium hosting plans support autoscaling but do not provide a serverless pricing model, and their scaling behavior is performance based rather than event based.

Compare Azure Functions hosting options - Training | Microsoft Learn

Azure Functions scale and hosting | Microsoft Learn
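A minimal creation sketch; all names, the region, and the runtime are placeholder assumptions:

```shell
# --consumption-plan-location selects the serverless Consumption plan
az functionapp create --resource-group myResourceGroup --name app1 \
  --consumption-plan-location eastus \
  --storage-account mystorageaccount \
  --runtime dotnet --functions-version 4
```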

 

Question 25 of 50

A company plans to implement a Microsoft Defender for Cloud solution.

The company has the following requirements:

  • Notifies when DNS domains are not deleted when a new function app is deleted.
  • Use native alerting.
  • Minimize costs.

You need to select a hosting plan.

Which hosting plan should you use?

Basic

This item tests the candidate's knowledge about securing Azure Functions.

The Basic plan supports both custom domains and Microsoft Defender for Cloud, which can automatically alert on dangling DNS domains.

The Consumption plan is incorrect because it does not support Microsoft Defender for Cloud, which is needed to automatically alert on dangling DNS domains.

The Premium plan supports custom domains and Microsoft Defender for Cloud, which can automatically alert on dangling DNS domains. This, however, is not the lowest cost option. The Free plan does not support custom domains, although it does support Microsoft Defender for Cloud, which can automatically alert on dangling DNS domains.

AZ-204: Implement Azure Functions - Training | Microsoft Learn

Microsoft Defender for App Service - the benefits and features | Microsoft Learn

Securing Azure Functions | Microsoft Learn

App Service Pricing | Microsoft Azure

Question 26 of 50

You have an Azure Key Vault named MyVault.

You need to use a key vault reference to access a secret named MyConnection from MyVault.

Which code segment should you use?

`@Microsoft.KeyVault(SecretName=MyConnection;VaultName=MyVault)`

This item tests the candidate’s knowledge of retrieving secrets from Key Vault in Azure Functions.

The code segment @Microsoft.KeyVault(SecretName=MyConnection;VaultName=MyVault) reads the secret from Key Vault. The code segment that includes Secret uses an invalid parameter. The code segment that includes Secret and Vault uses invalid parameters. The code segment that includes SecretName and Vault uses an invalid parameter.

Create serverless applications learning path - Training | Microsoft Learn

Use Key Vault references - Azure App Service | Microsoft Learn
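For reference, the Key Vault reference is set as an ordinary app setting and resolved from MyVault at runtime; the resource group and app name are placeholders:

```shell
az webapp config appsettings set \
  --resource-group myResourceGroup --name myWebApp \
  --settings "MyConnection=@Microsoft.KeyVault(SecretName=MyConnection;VaultName=MyVault)"
```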

 

Question 27 of 50

A company plans to create an Azure function app.

You need to recommend a solution that meets the following requirements:

  • Executes multiple functions concurrently.
  • Performs aggregation on the results from the functions.
  • Avoids cold starts.
  • Minimizes costs.

Which two components should you recommend? Each correct answer presents part of the solution.

The Premium plan

Fan-out/fan-in pattern

This item tests the candidate’s knowledge of Azure Durable Functions and hosting plans.

The Premium plan avoids cold starts and offers unlimited execution duration. The fan-out/fan-in pattern enables multiple functions to be executed in parallel, waiting for all functions to finish. Often, some aggregation work is done on the results that are returned from the functions. The Consumption plan avoids paying for idle time but might face cold starts. Furthermore, each function run is limited to 10 minutes. The function chaining pattern is a sequence of functions that execute in a specific order. In this pattern, the output of one function is applied to the input of another function.

AZ-204: Implement Azure Functions - Training | Microsoft Learn

Azure Functions Premium plan | Microsoft Learn

Fan-out/fan-in scenarios in Durable Functions - Azure | Microsoft Learn

Durable Functions Overview - Azure | Microsoft Learn

 Fan-out/fan-in refers to the pattern of executing multiple functions concurrently and then performing some aggregation on the results. 

 

Question 28 of 50

You need to deploy an Azure Files share along with a container group to Azure Container Instances (ACI).

Which deployment method should you use?

Azure Resource Manager template

This item tests the candidate’s knowledge of running containers by using Azure Container Instances (ACI). There are two common ways to deploy a multi-container group: use an Azure Resource Manager template or a YAML file. An Azure Resource Manager template is recommended when you need to deploy additional Azure service resources (for example, an Azure Files share) when you deploy the container instances. However, a YAML file does not support the deployment of additional Azure service resources along with container groups in ACI. Docker Compose and Azure CLI do not support the deployment of an Azure Files share along with a container group to ACI.

Explore Azure Container Instances

Tutorial: Deploy a multi-container group using a Resource Manager template

 

Question 29 of 50

A container group in Azure Container Instances has multiple containers.

The containers must restart when the process executed in the container group terminates due to an error.

You need to define the restart policy for the container group.

Which Azure CLI command should you use?

`az container create --resource-group myResourceGroup --name mycontainer --image mycontainerimage --restart-policy OnFailure`

This item tests the candidate’s knowledge of running containers by using Azure Container Instances (ACI). Configurable restart policies can be specified for a container group in ACI. A configurable restart policy allows you to specify that containers are stopped when their processes have completed. When you create a container group in ACI, you can specify one of three restart policy settings: Always, Never, and OnFailure.

If --restart-policy is set to OnFailure, the containers in the container group are restarted only when the process executed in the container fails (when it terminates with a nonzero exit code). If --restart-policy is set to Always, the containers in the container group are always restarted, regardless of whether the process succeeds or fails. If --restart-policy is set to Never, the containers in the container group run at most once.

The az container restart command is used to restart all the containers in a container group, not to define a restart policy for a container group.

Run containerized tasks with restart policies

Manage Azure Container Instances

 

Question 30 of 50

You are developing a cloud native containerized background task application.

You need to choose the appropriate container deployment option based on the following requirements:

  • Minimize cost
  • Support service discovery and traffic splitting
  • Enable event-driven application architecture
  • Do not require access to native Kubernetes API

What should you use?

Azure Container Apps

This item tests the candidate’s knowledge of creating solutions by using Azure Container Apps. Azure Container Apps enables you to build serverless microservices based on containers. It is optimized for running general purpose containers and provides many application-specific concepts on top of containers.

Azure Spring Apps is a fully managed service for Spring developers. It provides lifecycle management to run Spring Boot, Spring Cloud, or any other Spring applications on Azure.

Azure Container Instances does not support scaling, load balancing, revisions, or environments, and therefore does not meet the stated requirements.

Azure Functions is a serverless Function as a Service (FaaS) solution. It can run event-driven applications by using the functions programming model, but it is optimized for ephemeral functions rather than general-purpose containerized background tasks, and its container support is limited compared with Azure Container Apps.

Comparing Container Apps with other Azure container options

Azure Container Apps documentation

 

Question 31 of 50

You are developing an Azure Function app that will be deployed to a Consumption plan. The app consumes data from a database server that has limited throughput.

You need to use the functionAppScaleLimit property to control the number of instances of the app that will be created.

Which value should you use for the property setting?

10

This item tests the candidate’s knowledge of configuring an Azure Function app. Imposing a limit on the scale-out capacity of a function app helps when the app connects to components that have limited throughput. The functionAppScaleLimit property defines the maximum number of instances of the function app that can be created, so setting it to a low value, such as 10, is appropriate in this scenario. Function apps in the Consumption plan scale out to a default maximum of 200 instances. A value of 0 or null for functionAppScaleLimit means that an unrestricted number of instances of the function app can be created.
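As a sketch, the limit can be applied with the Azure CLI by updating the function app's config/web sub-resource (the resource group and app names below are placeholders):

```shell
# Cap the function app at 10 instances (names are hypothetical)
az resource update \
  --resource-type Microsoft.Web/sites \
  --resource-group MyResourceGroup \
  --name myfunctionapp/config/web \
  --set properties.functionAppScaleLimit=10
```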

Scale Azure Functions - Training | Microsoft Learn

Azure Functions scale and hosting | Microsoft Learn

 

Question 32 of 50

You are developing an Azure Function app that will be deployed to a Dedicated plan.

When there is a resource shortage in the app, it must send a “429 Too Busy” response.

You need to apply the appropriate configuration to all functions in a function app instance.

Which configuration should you set?

dynamicThrottlesEnabled in the host.json file

This item tests the candidate’s knowledge of controlling scaling of functions. Using the dynamicThrottlesEnabled property allows developers to let the system respond dynamically to an increased utilization, returning “429 Too Busy” errors. This property is defined in the host.json file. The bindings section, part of the function.json file, is used to define the bindings and triggers for a function.

The maxConcurrentRequests property determines the maximum number of HTTP functions that are executed in parallel. It is also defined in the host.json file, but it does not trigger “429 Too Busy” responses on its own.

The maxOutstandingRequests property, defined in the host.json file, defines the maximum number of requests, queued or in progress, held at any given time.
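As an illustrative sketch, a host.json fragment enabling dynamic throttling alongside the related http settings might look like this (the specific values are assumptions, not recommendations):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}
```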

Create triggers and bindings - Training | Microsoft Learn

Azure Functions HTTP triggers and bindings | Microsoft Learn

 

Question 33 of 50

A company plans to use Azure Cache for Redis. The company plans to use Redis modules.

You need to recommend an Azure Cache for Redis service tier.

Which service tier should you recommend?

Enterprise

This item tests the candidate's knowledge of Azure Cache for Redis service tiers.

Redis modules are only supported in the Enterprise service tier. The Basic, Standard, and Premium service tiers do not support Redis modules.

Develop for Azure Cache for Redis - Training | Microsoft Learn

Explore Azure Cache for Redis - Training | Microsoft Learn

What is Azure Cache for Redis? | Microsoft Learn

 

Question 34 of 50

You manage an Azure Cache for Redis instance.

You need to load data on demand into the cache from a large database.

Which application architecture pattern should you use?

data cache

This item tests the candidate’s knowledge of application architecture design patterns, which is part of implementing caching for solutions.

Databases are often too large to load directly into a cache, so it is common to use the data cache (cache-aside) pattern and load items on demand.

Session store is used to store user-session information instead of storing too much data in a cookie that can adversely affect performance.

Distributed transactions allow a series of commands to run on a back-end datastore as a single operation.

By using content cache, you can provide quicker access to static content compared to back-end datastores.

Session store, distributed transactions, and content cache cannot be used to load data on demand.
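The cache-aside (data cache) pattern can be sketched in a few lines of Python, with a dictionary standing in for Azure Cache for Redis and a function standing in for the database query (both are hypothetical stand-ins, not the Azure SDK):

```python
cache = {}  # stand-in for Azure Cache for Redis

def query_database(key):
    # Stand-in for an expensive lookup against the large database.
    return f"row-for-{key}"

def get_item(key):
    """Cache-aside: return from cache if present, otherwise load the
    item on demand from the database and populate the cache."""
    if key in cache:
        return cache[key]          # cache hit
    value = query_database(key)    # cache miss: read from the database
    cache[key] = value             # populate the cache for future reads
    return value

print(get_item("42"))  # miss: loads from the database
print(get_item("42"))  # hit: served from the cache
```

Only the items that are actually requested end up in the cache, which is why this pattern suits databases too large to preload.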

What is Azure Cache for Redis? - Training | Microsoft Learn

Cache-Aside pattern - Azure Architecture Center | Microsoft Learn

 

Question 35 of 50

You need to store an instance of the GameStat class in Azure Cache for Redis.

```csharp
public class GameStat
{
    public string Id { get; set; }
    public string Sport { get; set; }
    public DateTimeOffset DatePlayed { get; set; }
    public string Game { get; set; }
    public IReadOnlyList<string> Teams { get; set; }
    public IReadOnlyList<(string team, int score)> Results { get; set; }

    public GameStat(string sport, DateTimeOffset datePlayed, string game, string[] teams, IEnumerable<(string team, int score)> results)
    {
        Id = Guid.NewGuid().ToString();
        Sport = sport;
        DatePlayed = datePlayed;
        Game = game;
        Teams = teams.ToList();
        Results = results.ToList();
    }

    public override string ToString()
    {
        return $"{Sport} {Game} played on {DatePlayed.Date.ToShortDateString()} - " +
               $"{String.Join(',', Teams)}\r\n\t" +
               $"{String.Join('\t', Results.Select(r => $"{r.team} - {r.score}\r\n"))}";
    }
}
```
Which two code segments should you use? Each correct answer presents a complete solution.

Note that the class shown here overrides the ToString method.

```csharp
var stat = new GameStat("Soccer", new DateTime(2019, 7, 16), "Local Game",
    new[] { "Team 1", "Team 2" },
    new[] { ("Team 1", 2), ("Team 2", 1) });
string serializedValue = System.Text.Json.JsonSerializer.Serialize(stat);
bool added = db.StringSet("event:1950-world-cup", serializedValue);
```

```csharp
var stat = new GameStat("Soccer", new DateTime(2019, 7, 16), "Local Game",
    new[] { "Team 1", "Team 2" },
    new[] { ("Team 1", 2), ("Team 2", 1) });
bool added = db.StringSet("event:1950-world-cup", stat.ToString());
```

This item tests the candidate’s knowledge of how to implement caching.

The code segments that include the StringSet operation will properly serialize and store the content of the GameStat class into Azure Cache for Redis. The code segments that include the StringGet operation will not.

Interact with Azure Cache for Redis by using .NET - Training | Microsoft Learn

What is Azure Cache for Redis? | Microsoft Learn

 

Question 36 of 50

You have an Azure web application.

You need to configure an application performance management (APM) service to collect and monitor the application log data.

Which Azure service should you configure?
Application Insights

This item tests the candidate’s knowledge of configuring an app to use Application Insights, which is part of troubleshooting solutions by using metrics and log data.

Application Insights is a feature of Azure Monitor that provides extensible application performance management (APM) and monitoring for live web applications.

Azure Monitor helps you maximize the availability and performance of applications and services. Application Insights is part of Azure Monitor.

Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results.

Azure Advisor scans your Azure configuration and recommends changes to optimize deployments, increase security, and save money.

Monitor app performance - Training | Microsoft Learn

Azure Advisor documentation - Azure Advisor | Microsoft Learn

Log Analytics tutorial - Azure Monitor | Microsoft Learn

 

Question 37 of 50

You need to capture user actions by using the Azure Application Insights API.

Which API call should you use?

`TrackEvent`

This item tests the candidate's knowledge about Azure Application Insights API calls.

The TrackEvent API call tracks user actions and other events. It is used to track user behavior or to monitor performance.

The TrackMetric API call is used to track performance measurements such as queue length.

The TrackRequest API call is used to log the frequency and duration of server requests for performance analysis.

The TrackTrace API call is used to capture Resource Diagnostic log messages and can also be used to capture third-party logs.

Instrument an app for monitoring - Training | Microsoft Learn

Application Insights API for custom events and metrics - Azure Monitor | Microsoft Learn

 

Question 38 of 50

You plan to develop a web job that performs calculations on top of data that is collected from users.

You need to send pre-aggregated summary metrics to Azure Monitor.

Which Application Insights method should you use?

 `GetMetric`

This item tests the candidate’s knowledge of using metrics and log data.

The GetMetric method handles local pre-aggregation and then only submits an aggregated summary metric at a fixed interval of one minute.

TrackMetric sends raw, non-pre-aggregated telemetry. SetMetric and LogMetric are not valid methods for sending pre-aggregated summary metrics to Azure Monitor.
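The idea behind GetMetric's local pre-aggregation can be sketched in plain Python: raw values are folded into a count/sum/min/max accumulator locally, and only the aggregated summary is emitted per interval (this is a conceptual model, not the Application Insights SDK):

```python
class PreAggregatedMetric:
    """Conceptual model of local pre-aggregation: raw values are folded
    into one summary, so only the summary is sent per interval."""
    def __init__(self, name):
        self.name = name
        self.count = 0
        self.total = 0.0
        self.minimum = None
        self.maximum = None

    def track_value(self, value):
        # Fold the raw value into the local aggregate instead of sending it.
        self.count += 1
        self.total += value
        self.minimum = value if self.minimum is None else min(self.minimum, value)
        self.maximum = value if self.maximum is None else max(self.maximum, value)

    def flush(self):
        # Emit one summary document for the interval, then reset.
        summary = {"name": self.name, "count": self.count, "sum": self.total,
                   "min": self.minimum, "max": self.maximum}
        self.__init__(self.name)
        return summary

metric = PreAggregatedMetric("queueLength")
for v in [3, 7, 5]:
    metric.track_value(v)
print(metric.flush())  # one summary instead of three raw telemetry items
```

Sending one summary per interval instead of every raw value is what keeps ingestion volume low for high-frequency measurements.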

AZ-204: Instrument solutions to support monitoring and logging - Training | Microsoft Learn

Get-Metric in Azure Monitor Application Insights - Azure Monitor | Microsoft Learn

 

 Question 39 of 50

You have an Application Insights instance named insight1.

You need to configure a web app to send telemetry data to insight1.

Which Application Insights parameter should you use?

 instrumentation key

This item tests the candidate’s knowledge of configuring an app to use Application Insights, which is part of troubleshooting solutions by using metrics and log data.

To send telemetry data to an Application Insights resource from an app, you must configure the app with the instrumentation key of the Application Insights instance.

Alerts can ensure your team is aware of critical issues immediately, but they are configured inside insight1, not the web app. Likewise, the data shown with each application component, which helps diagnose performance bottlenecks and failure hotspots, is configured inside insight1, not the web app.

Usage analysis provides information about an app's users and is also configured in insight1, not the web app.

Enable Application Insights on an Azure web app - Training | Microsoft Learn

Application Insights overview - Azure Monitor | Microsoft Learn

Question 40 of 50

You plan to use Application Insights to monitor the performance of an on-premises web application.

You need to identify a configuration that satisfies the following requirements:

  • Minimize the volume of data ingested into Application Insights.
  • Maximize the accuracy of the collected metrics.

What should you do?

Use standard metrics

This item tests the candidate’s knowledge of configuring an app or service to use Application Insights.

Using standard metrics both minimizes the volume of data ingested into Application Insights and maximizes the accuracy of the collected metrics.

Applying sampling and filtering would negatively affect the accuracy of the collected metrics.

Using log-based metrics does not minimize the volume of data ingested into Application Insights.

Discover log-based metrics - Training | Microsoft Learn

Log-based and pre-aggregated metrics in Application Insights - Azure Monitor | Microsoft Learn

Question 41 of 50

You manage an Azure Cosmos DB container named container1.

You need to use the ReadItemAsync method to read an item from the Azure Cosmos service.

Which two parameters do you need to provide? Each correct answer presents part of the solution.

`partitionKey`

 `itemId`

This item tests the candidate’s knowledge of setting the partition key, which is part of developing Azure Cosmos DB solutions.

The ReadItemAsync method of the Container class in the .NET SDK for Azure Cosmos DB has two mandatory parameters: partitionKey and itemId. The consistencyLevel, eTag, and sessionToken parameters are all part of the optional requestOptions parameter of the ReadItemAsync method.

Explore Microsoft .NET SDK v3 for Azure Cosmos DB - Training | Microsoft Learn

Container.ReadItemAsync&lt;T&gt; Method (Microsoft.Azure.Cosmos) - Azure for .NET Developers | Microsoft Learn

Question 42 of 50

You plan to implement a storage mechanism for managing state across multiple change feed consumers.

You need to configure the change feed processor in the .NET SDK for Azure Cosmos DB for NoSQL API.

Which component should you use?

Lease container

This item tests the candidate’s knowledge of configuring change feed processor as part of developing solutions that use Azure Cosmos DB.

The lease container component serves as a storage mechanism to manage state across multiple change feed consumers. The delegate component is the code within the client application that implements business logic for each batch of changes. The host component is a client application instance that listens for changes from the change feed. The monitored container component is monitored for any insert or update operations. It does not serve as a storage mechanism to manage state across multiple change feed consumers.

Understand change feed features in the SDK - Training | Microsoft Learn

How to use Azure Cosmos DB change feed with Azure Functions | Microsoft Learn

Question 43 of 50

You have blobs in Azure Blob storage. The blobs store pictures.

You need to record the location and weather condition information from when the pictures were taken. You must ensure you can use up to 2,000 characters when recording the information.

What should you do?

 Use metadata headers defined with a PUT request.

This item tests the candidate's knowledge about structuring data for blob storage.

Metadata is the proper way to record this kind of data: it can be set and retrieved independently of the blob content and supports up to 8 KB in total size. The HTTP verb used to define metadata is PUT. Encoding the information in the blob name is not viable because blob names are limited to 1,024 characters, and it is not an optimal approach since metadata can be modified while keeping the same blob name. Using a POST request is incorrect because metadata is defined with PUT, not POST. Encoding the information in container names is also not viable: the combination of locations and weather conditions is potentially unlimited, and container names are limited to 63 characters.
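As a sketch, the metadata travels as x-ms-meta-* request headers on a PUT against the blob's ?comp=metadata endpoint; the account, container, and metadata names below are hypothetical examples:

```python
def build_metadata_headers(metadata):
    """Build x-ms-meta-* headers for a 'Set Blob Metadata' PUT request
    and enforce the 8 KB total-size limit on metadata."""
    headers = {f"x-ms-meta-{name}": value for name, value in metadata.items()}
    total_size = sum(len(k) + len(v) for k, v in headers.items())
    if total_size > 8 * 1024:
        raise ValueError("metadata exceeds the 8 KB limit")
    return headers

headers = build_metadata_headers({
    "location": "47.6062N,122.3321W",   # hypothetical picture location
    "weather": "overcast, light rain",  # hypothetical weather condition
})
# These headers would accompany a request such as:
# PUT https://myaccount.blob.core.windows.net/pictures/photo1.jpg?comp=metadata
```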

AZ-204: Develop solutions that use Blob storage - Training | Microsoft Learn

Setting and retrieving properties and metadata for Blob service resources (REST API) - Azure Storage | Microsoft Learn

Naming and Referencing Containers, Blobs, and Metadata - Azure Storage | Microsoft Learn

Question 44 of 50

You need to download blob content to a byte array after a transient fault happens.

Which code statement should you use?

```csharp
byte[] data;
BlobClientOptions options = new BlobClientOptions();
options.Retry.MaxRetries = 10;
options.Retry.Delay = TimeSpan.FromSeconds(20);
BlobClient client = new BlobClient(
    new Uri("https://mystorageaccount.blob.core.windows.net/containers/blob.txt"),
    options);
Response<BlobDownloadResult> response = client.DownloadContent();
data = response.Value.Content.ToArray();
```

This item tests the candidate’s knowledge of implementing storage policies.

The code segment that includes options.Retry.MaxRetries = 10; and options.Retry.Delay = TimeSpan.FromSeconds(20); defines the retry strategy and downloads the content to the variable data. The code segments that do not include these parameters do not define the retry strategy.

Azure Fundamentals: Describe Azure architecture and services - Training | Microsoft Learn

BlobBaseClient | Microsoft Learn

Question 45 of 50

You have an Azure storage lifecycle policy for block blobs.

You need to create a prefixMatch filter rule that will contain an array of strings for prefixes to be matched.

What should be the first element of the prefix string?

a container name

This item tests the candidate’s knowledge of configuring prefixMatch filter, which is an essential part of setting up storage policy and is part of solution development for blob storage.

When creating a prefixMatch filter rule for an Azure Storage lifecycle policy for block blobs, the first element of the prefix string must be a container name, not a block blob index tag, a blob name, or a storage account name.
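An illustrative lifecycle policy rule showing the container-name-first prefix (the rule, container, and prefix names are hypothetical):

```json
{
  "rules": [
    {
      "name": "moveOldLogsToCool",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "container1/logs" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
```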

Discover Blob storage lifecycle policies - Training | Microsoft Learn

Optimize costs by automatically managing the data lifecycle - Azure Storage | Microsoft Learn

Question 46 of 50

A company plans to host a static website that uses a custom domain and Azure Storage in multiple regions.

You need to serve website content and minimize latency.

What are two possible ways to achieve this goal? Each correct answer presents part of the solution.

Upload static content to a storage container named `$web`.

Use Azure Content Delivery Network for regional caching.

This item tests the candidate’s knowledge of developing solutions that use blob storage.

Static content needs to be uploaded to a storage container named $web. Using Azure Content Delivery Network is required for multiregional website hosting.

Azure Traffic Manager is not recommended when using a custom domain because of how Azure Storage verifies custom domain names. The storage container needs to be named $web.

Create a Content Delivery Network for your Website with Azure CDN and Blob Services - Training | Microsoft Learn

Static website hosting in Azure Storage | Microsoft Learn

 Question 47 of 50

A company implements a multi-region Azure Cosmos DB account.

You need to configure the default consistency level for the account. The consistency level must ensure that update operations made as a batch within a transaction are always visible together.

Which consistency level should you use?

 Consistent Prefix

This item tests the candidate’s knowledge of selecting the appropriate consistency level for operations in Azure Cosmos DB. The Consistent Prefix consistency level ensures that updates made as a batch within a transaction are returned consistently with the transaction in which they were committed. Write operations within a transaction of multiple documents are always visible together.

The Bounded Staleness consistency level is used to manage the lag of data between any two regions based on an updated version of an item or the time intervals between read and write.

The Session consistency level is used to ensure that within a single client session, reads are guaranteed to honor the read-your-writes and write-follows-reads guarantees.

The Eventual consistency level is used when no ordering guarantee is required.

Explore consistency levels

Consistency levels in Azure Cosmos DB

Question 48 of 50 

You manage the deployment of an Azure Cosmos DB account.

You must define custom logic by using the .NET SDK change feed processor to process changes that the change feed reads.

You need to select the appropriate change feed processor component.

Which component should you use?

delegate

This item tests the candidate’s knowledge of implementing change feed notifications in Azure Cosmos DB. The change feed processor in Azure Cosmos DB simplifies the process of reading the change feed and can be used to distribute the event processing across multiple consumers effectively. There are four main components in the change feed processor: the monitored container, the lease container, the compute instance, and the delegate.

The monitored container has the data from which the change feed is generated.

The delegate component can be used to define custom logic to process the changes that the change feed reads.

The compute instance hosts the change feed processor to listen for changes. It can be represented by a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine.

The lease container acts as a state storage and coordinates the processing of the change feed across multiple workers.
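The four components can be modelled in a few lines of Python — a monitored list of changes, a lease dictionary holding the continuation position, and a delegate callback invoked by the compute instance's polling loop (purely a conceptual stand-in for the .NET SDK, not its API):

```python
monitored_container = []           # source of changes (monitored container)
lease_store = {"continuation": 0}  # state storage (lease container)
processed = []

def delegate(changes):
    # Custom business logic for each batch of changes (the delegate).
    processed.extend(c.upper() for c in changes)

def poll_once():
    """One iteration of the compute instance hosting the processor."""
    start = lease_store["continuation"]
    batch = monitored_container[start:]
    if batch:
        delegate(batch)
        lease_store["continuation"] = len(monitored_container)  # checkpoint

monitored_container.extend(["insert:a", "update:b"])
poll_once()
print(processed)  # → ['INSERT:A', 'UPDATE:B']
```

Because the checkpoint lives in the lease store rather than in the worker, any number of workers could share the same position, which is how the real lease container coordinates multiple consumers.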

Understand change feed features in the SDK

Change feed processor in Azure Cosmos DB

Question 49 of 50

A company implements an Azure Cosmos DB account named Account1 to store product details.

You need to write a parameterized SQL query to get items from the Products container based on category and price as parameters.

Which SQL query should you write?

SELECT * FROM Products p WHERE p.category = @Category AND p.price = @Price

This item tests the candidate’s knowledge of performing operations on containers and items by using the SDK. Azure Cosmos DB supports SQL queries with parameters expressed by the @ notation. When writing such queries, the FROM clause references the name of the container in the Azure Cosmos DB account; we do not use [accountname].[containername] or just [accountname] in the SQL query. Enclosing the parameter name in single quotes is not the correct format.
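As a sketch, the Azure Cosmos DB SDKs pass parameters as name/value pairs alongside the query text; the values below are hypothetical, and the helper that checks every @-placeholder has a supplied value is purely illustrative:

```python
import re

query = "SELECT * FROM Products p WHERE p.category = @Category AND p.price = @Price"
# Parameter shape used by the Azure Cosmos DB SDKs: name/value pairs.
parameters = [
    {"name": "@Category", "value": "bicycles"},  # hypothetical value
    {"name": "@Price", "value": 100},            # hypothetical value
]

def missing_parameters(query, parameters):
    """Return the @-placeholders in the query that have no supplied value."""
    supplied = {p["name"] for p in parameters}
    referenced = set(re.findall(r"@\w+", query))
    return referenced - supplied

print(missing_parameters(query, parameters))  # → set()
```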

Explore Microsoft .NET SDK v3 for Azure Cosmos DB

Parameterized queries in Azure Cosmos DB

 Question 50 of 50

You are developing an application.

You need to set the standard HTTP properties of containers in Azure Blob Storage.

Which two HTTP properties can you set? Each correct answer presents part of the solution.

 ETag

Last-Modified

This item tests the candidate’s knowledge of setting and retrieving properties and metadata. Metadata in Azure Storage objects is defined through headers starting with x-ms-meta-. Some standard HTTP properties are also available for both objects and containers. The only two HTTP properties that are available for containers are ETag and Last-Modified.

Cache-Control, Origin, and Range are properties only available for blobs.

Set and retrieve properties and metadata for blob resources by using REST

 

 

posted @ 2023-11-14 14:46 ChuckLu