AZ-204T00
Developing Solutions for
Microsoft Azure
Disclaimer
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations or warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is
not responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
© 2019 Microsoft Corporation. All rights reserved.
Microsoft and the trademarks listed at http://www.microsoft.com/trademarks1 are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.
1 http://www.microsoft.com/trademarks
Courseware. These classes are not advertised or promoted to the general public and class attendance
is restricted to individuals employed by or contracted by the corporate customer.
14. “Trainer” means (i) an academically accredited educator engaged by a Microsoft IT Academy
Program Member to teach an Authorized Training Session, and/or (ii) an MCT.
15. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and
additional supplemental content designated solely for Trainers’ use to teach a training session
using the Microsoft Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint
presentations, trainer preparation guide, train the trainer materials, Microsoft OneNote packs,
classroom setup guide and Pre-release course feedback form. To clarify, Trainer Content does not
include any software, virtual hard disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one
copy per user basis, such that you must acquire a license for each individual that accesses or uses the
Licensed Content.
●● 2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
1. If you are a Microsoft IT Academy Program Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led
Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User who is enrolled in the Authorized Training Session, and only immediately
prior to the commencement of the Authorized Training Session that is the subject matter
of the Microsoft Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they
can access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content, provided you comply with the following:
3. you will only provide access to the Licensed Content to those individuals who have acquired
a valid license to the Licensed Content,
4. you will ensure each End User attending an Authorized Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized
Training Session,
5. you will ensure that each End User provided with the hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to
the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement
in a manner that is enforceable under local law prior to their accessing the Microsoft
Instructor-Led Courseware,
6. you will ensure that each Trainer teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
7. you will only use qualified Trainers who have in-depth knowledge of and experience with
the Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware
being taught for all your Authorized Training Sessions,
8. you will only deliver a maximum of 15 hours of training per week for each Authorized
Training Session that uses a MOC title, and
9. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer
resources for the Microsoft Instructor-Led Courseware.
2. If you are a Microsoft Learning Competency Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led
Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or MCT, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Authorized Training Session and only immediately prior to
the commencement of the Authorized Training Session that is the subject matter of the
Microsoft Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) MCT with the unique redemption code and instructions on how
they can access one (1) Trainer Content, provided you comply with the following:
3. you will only provide access to the Licensed Content to those individuals who have acquired
a valid license to the Licensed Content,
4. you will ensure that each End User attending a Private Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Private
Training Session,
5. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to
the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement
in a manner that is enforceable under local law prior to their accessing the Microsoft
Instructor-Led Courseware,
6. you will ensure that each MCT teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
7. you will only use qualified MCTs who also hold the applicable Microsoft Certification
credential that is the subject of the MOC title being taught for all your Authorized Training
Sessions using MOC,
8. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
9. you will only provide access to the Trainer Content to MCTs.
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Private Training Session, and only immediately prior to the
commencement of the Private Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the
unique redemption code and instructions on how they can access one (1) Trainer
Content, provided you comply with the following:
3. you will only provide access to the Licensed Content to those individuals who have acquired
a valid license to the Licensed Content,
4. you will ensure that each End User attending a Private Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Private
Training Session,
5. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to
the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement
in a manner that is enforceable under local law prior to their accessing the Microsoft
Instructor-Led Courseware,
6. you will ensure that each Trainer teaching a Private Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Private Training Session,
7. you will only use qualified Trainers who hold the applicable Microsoft Certification credential
that is the subject of the Microsoft Instructor-Led Courseware being taught for all your
Private Training Sessions,
8. you will only use qualified MCTs who hold the applicable Microsoft Certification credential
that is the subject of the MOC title being taught for all your Private Training Sessions using
MOC,
9. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
10. you will only provide access to the Trainer Content to Trainers.
4. If you are an End User:
For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for
your personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you
may access the Microsoft Instructor-Led Courseware online using the unique redemption code
provided to you by the training provider and install and use one (1) copy of the Microsoft
Instructor-Led Courseware on up to three (3) Personal Devices. You may also print one (1) copy
of the Microsoft Instructor-Led Courseware. You may not install the Microsoft Instructor-Led
Courseware on a device you do not own or control.
5. If you are a Trainer:
1. For each license you acquire, you may install and use one (1) copy of the Trainer Content in
the form provided to you on one (1) Personal Device solely to prepare and deliver an
Authorized Training Session or Private Training Session, and install one (1) additional copy
on another Personal Device as a backup copy, which may be used only to reinstall the
Trainer Content. You may not install or use a copy of the Trainer Content on a device you do
not own or control. You may also print one (1) copy of the Trainer Content solely to prepare
for and deliver an Authorized Training Session or Private Training Session.
2. You may customize the written portions of the Trainer Content that are logically associated
with instruction of a training session in accordance with the most recent version of the MCT
agreement. If you elect to exercise the foregoing rights, you agree to comply with the
following: (i) customizations may only be used for teaching Authorized Training Sessions
and Private Training Sessions, and (ii) all customizations will comply with this agreement. For
clarity, any use of “customize” refers only to changing the order of slides and content, and/
or not using all the slides or content, it does not mean changing or modifying any slide or
content.
●● 2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may
not separate its components and install them on different devices.
●● 2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above,
you may not distribute any Licensed Content or any portion thereof (including any permitted
modifications) to any third parties without the express written permission of Microsoft.
●● 2.4 Third Party Notices. The Licensed Content may include third party code that Microsoft, not
the third party, licenses to you under this agreement. Notices, if any, for the third party code are
included for your information only.
●● 2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and
licenses also apply to your use of that respective component and supplement the terms described
in this agreement.
3. LICENSED CONTENT BASED ON PRE-RELEASE TECHNOLOGY. If the Licensed Content’s subject
matter is based on a pre-release version of Microsoft technology ("Pre-release"), then in addition to
the other provisions in this agreement, these terms also apply:
1. Pre-Release Licensed Content. This Licensed Content subject matter is on the Pre-release version
of the Microsoft technology. The technology may not work the way a final version of the
technology will and we may change the technology for the final version. We also may not release a final
version. Licensed Content based on the final version of the technology may not contain the same
information as the Licensed Content based on the Pre-release version. Microsoft is under no
obligation to provide you with any further content, including any Licensed Content based on the
final version of the technology.
2. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly
or through its third party designee, you give to Microsoft without charge, the right to use, share
and commercialize your feedback in any way and for any purpose. You also give to third parties,
without charge, any patent rights needed for their products, technologies and services to use or
interface with any specific parts of a Microsoft technology, Microsoft product, or service that
includes the feedback. You will not give feedback that is subject to a license that requires Microsoft
to license its technology, technologies, or products to third parties because we include your
feedback in them. These rights survive this agreement.
3. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed
Content on the Pre-release technology upon (i) the date which Microsoft informs you is the end
date for using the Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the
commercial release of the technology that is the subject of the Licensed Content, whichever is
earliest ("Pre-release term"). Upon expiration or termination of the Pre-release term, you will
irretrievably delete and destroy all copies of the Licensed Content in your possession or under
your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you
more rights despite this limitation, you may use the Licensed Content only as expressly permitted in
this agreement. In doing so, you must comply with any technical limitations in the Licensed Content
that only allow you to use it in certain ways. Except as expressly permitted in this agreement, you
may not:
●● access or allow any individual to access the Licensed Content if they have not acquired a valid
license for the Licensed Content,
●● alter, remove or obscure any copyright or other protective notices (including watermarks),
branding or identifications contained in the Licensed Content,
●● modify or create a derivative work of any Licensed Content,
●● publicly display, or make the Licensed Content available for others to access or use,
●● copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
●● work around any technical limitations in the Licensed Content, or
●● reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property
laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property
rights in the Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and
regulations. You must comply with all domestic and international export laws and regulations that apply to
the Licensed Content. These laws include restrictions on destinations, end users and end use. For
additional information, see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for
it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you
fail to comply with the terms and conditions of this agreement. Upon termination of this agreement
for any reason, you will immediately stop all use of and delete and destroy all copies of the Licensed
Content in your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible
for the contents of any third party sites, any links contained in third party sites, or any changes or
updates to third party sites. Microsoft is not responsible for webcasting or any other form of
transmission received from any third party sites. Microsoft is providing these links to third party sites to you
only as a convenience, and the inclusion of any link does not imply an endorsement by Microsoft of
the third party site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
1. United States. If you acquired the Licensed Content in the United States, Washington state law
governs the interpretation of this agreement and applies to claims for breach of it, regardless of
conflict of laws principles. The laws of the state where you live govern all other claims, including
claims under state consumer protection laws, unfair competition laws, and in tort.
2. Outside the United States. If you acquired the Licensed Content in any other country, the laws of
that country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the
laws of your country. You may also have rights with respect to the party from whom you acquired the
Licensed Content. This agreement does not change your rights under the laws of your country if the
laws of your country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS AVAILABLE."
YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE AFFILIATES GIVE NO
EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY HAVE ADDITIONAL CONSUMER
RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO THE EXTENT
PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND ITS RESPECTIVE AFFILIATES EXCLUDE
ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO
US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST
PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
This limitation applies to
●● anything related to the Licensed Content, services, content (including code) on third party Internet
sites or third-party programs; and
●● claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion
or limitation of incidental, consequential or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seul risque et péril. Microsoft n'accorde aucune autre
garantie expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection
des consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les
Audience
This course is for Azure Developers. They design and build cloud solutions such as applications and
services. They participate in all phases of development, from solution design, to development and
deployment, to testing and maintenance. They partner with cloud solution architects, cloud DBAs, cloud
administrators, and clients to implement the solution.
Prerequisites
This course assumes you have already acquired the following skills and experience:
●● At least one year of experience developing scalable solutions through all phases of software
development.
●● Be skilled in at least one cloud-supported programming language. Much of the course focuses on C#,
.NET Framework, HTML, and using REST in applications.
●● Have a base understanding of Azure and cloud concepts, services, and the Azure Portal. If you need to
ramp up, you can start with the Azure Fundamentals1 course, which is freely available.
●● Are familiar with PowerShell and/or Azure CLI.
✔️ Note: This course presents more Azure CLI examples overall than PowerShell.
1 https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/
Course syllabus
The course content includes a mix of content, demonstrations, hands-on labs, and reference links.
Module Name
0 Welcome to the course
1 Create Azure App Service Web Apps
2 Implement Azure functions
3 Develop solutions that use blob storage
4 Develop solutions that use Cosmos DB storage
5 Implement IaaS solutions
6 Implement secure cloud solutions
7 Implement user authentication and authorization
8 Implement API Management
9 Develop App Service Logic Apps
Module Name
10 Develop event-based solutions
11 Develop message-based solutions
12 Instrument solutions to support monitoring and logging
13 Integrate caching and content delivery within solutions
Course resources
There are a lot of resources to help you learn about Azure. We recommend you bookmark these pages.
●● Microsoft Learning Community Blog:5 Get the latest information about the certification tests and
exam study groups.
●● Microsoft Learn:6 Free role-based learning paths and hands-on experiences for practice
●● Azure Fridays:7 Join Scott Hanselman as he engages one-on-one with the engineers who build the
services that power Microsoft Azure, as they demo capabilities, answer Scott's questions, and share
their insights.
●● Microsoft Azure Blog:8 Keep current on what's happening in Azure, including what's now in preview,
generally available, news & updates, and more.
●● Azure Documentation:9 Stay informed on the latest products, tools, and features. Get information on
pricing, partners, support, and solutions.
5 https://www.microsoft.com/en-us/learning/community-blog.aspx
6 https://docs.microsoft.com/en-us/learn/
7 https://channel9.msdn.com/Shows/Azure-Friday
8 https://azure.microsoft.com/en-us/blog/
9 https://docs.microsoft.com/en-us/azure/
Module 1 Creating Azure App Service Web Apps
Deployment slots
Using the Azure portal, you can easily add deployment slots to an App Service web app. For instance, you
can create a staging deployment slot where you can push your code to test on Azure. Once you are
happy with your code, you can easily swap the staging deployment slot with the production slot. You do
all this with a few simple mouse clicks in the Azure portal.
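If you prefer to script the same workflow, it maps to two Azure CLI commands. The sketch below uses
placeholder app and resource group names:
az webapp deployment slot create --name <app_name> --resource-group <group_name> --slot staging
az webapp deployment slot swap --name <app_name> --resource-group <group_name> \
    --slot staging --target-slot production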
●● Number of VM instances
●● Size of VM instances (Small, Medium, Large)
●● Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, Isolated, Consumption)
The pricing tier of an App Service plan determines what App Service features you get and how much you
pay for the plan. There are a few categories of pricing tiers:
●● Shared compute: Free and Shared, the two base tiers, run an app on the same Azure VM as other
App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that
runs on the shared resources, and the resources cannot scale out.
●● Dedicated compute: The Basic, Standard, Premium, and PremiumV2 tiers run apps on dedicated
Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher
the tier, the more VM instances are available to you for scale-out.
●● Isolated: This tier runs dedicated Azure VMs on dedicated Azure Virtual Networks, which provides
network isolation on top of compute isolation to your apps. It provides the maximum scale-out
capabilities.
●● Consumption: This tier is only available to function apps. It scales the functions dynamically
depending on workload.
✔️ Note: App Service Free and Shared (preview) hosting plans are base tiers that run on the same Azure
VM as other App Service apps. Some apps may belong to other customers. These tiers are intended to be
used only for development and testing purposes.
Each tier also provides a specific subset of App Service features. These features include custom domains
and SSL certificates, autoscaling, deployment slots, backups, Traffic Manager integration, and more. The
higher the tier, the more features are available. To find out which features are supported in each pricing
tier, see App Service plan details1.
1 https://azure.microsoft.com/pricing/details/app-service/plans/
If your app is in the same App Service plan with other apps, you may want to improve the app's
performance by isolating the compute resources. You can do it by moving the app into a separate App Service
plan.
Since you pay for the computing resources your App Service plan allocates, you can potentially save
money by putting multiple apps into one App Service plan. However, keep in mind that apps in the same
App Service plan all share the same compute resources. To determine whether the new app has the
necessary resources, you need to understand the capacity of the existing App Service plan, and the
expected load for the new app.
Isolate your app into a new App Service plan when:
●● The app is resource-intensive.
●● You want to scale the app independently from the other apps in the existing plan.
●● The app needs resources in a different geographical region.
This way you can allocate a new set of resources for your app and gain greater control of your apps.
Manual deployment
There are a few options that you can use to manually push your code to Azure:
●● Git: App Service web apps feature a Git URL that you can add as a remote repository. Pushing to the
remote repository will deploy your app.
●● CLI: webapp up is a feature of the az command-line interface that packages your app and deploys it.
Unlike other deployment methods, az webapp up can create a new App Service web app for you if
you haven't already created one.
●● Zipdeploy: Use curl or a similar HTTP utility to send a ZIP of your application files to App Service.
●● Visual Studio: Visual Studio features an App Service deployment wizard that can walk you through
the deployment process.
●● FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting environments, including
App Service.
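As examples of the CLI and Zipdeploy options, the minimal sketches below use placeholder names; az
webapp up is run from the directory that contains your app code, and app.zip stands in for your
packaged application files:
az webapp up --name <app_name> --resource-group <group_name> --location <location>
curl -X POST -u <deployment_user> --data-binary @"app.zip" \
    https://<app_name>.scm.azurewebsites.net/api/zipdeploy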
How it works
The authentication and authorization module runs in the same sandbox as your application code. When
it's enabled, every incoming HTTP request passes through it before being handled by your application
code. This module handles several things for your app:
●● Authenticates users with the specified provider
●● Validates, stores, and refreshes tokens
●● Manages the authenticated session
●● Injects identity information into request headers
The module runs separately from your application code and is configured using app settings. No SDKs,
specific languages, or changes to your application code are required.
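For instance, turning the module on is a configuration change rather than a code change. A minimal
Azure CLI sketch, with placeholder names, that enables authentication and redirects unauthenticated
requests to Azure Active Directory:
az webapp auth update --resource-group <group_name> --name <app_name> \
    --enabled true --action LoginWithAzureActiveDirectory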
User claims
For all language frameworks, App Service makes the user's claims available to your code by injecting
them into the request headers. For ASP.NET 4.6 apps, App Service populates ClaimsPrincipal.Current
with the authenticated user's claims, so you can follow the standard .NET code pattern, including
the [Authorize] attribute. Similarly, for PHP apps, App Service populates the _SERVER['REMOTE_USER']
variable.
For Azure Functions, ClaimsPrincipal.Current is not hydrated for .NET code, but you can still find
the user claims in the request headers.
Token store
App Service provides a built-in token store, which is a repository of tokens that are associated with the
users of your web apps, APIs, or native mobile apps. When you enable authentication with any provider,
this token store is immediately available to your app. If your application code needs to access data from
these providers on the user's behalf, such as:
●● post to the authenticated user's Facebook timeline
●● read the user's corporate data from the Azure Active Directory Graph API or even the Microsoft Graph
You typically must write code to collect, store, and refresh these tokens in your application. With the
token store, you just retrieve the tokens when you need them and tell App Service to refresh them when
they become invalid.
The ID tokens, access tokens, and refresh tokens are cached for the authenticated session, and they're
accessible only by the associated user.
Identity providers
App Service uses federated identity, in which a third-party identity provider manages the user identities
and authentication flow for you. Five identity providers are available by default:
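Azure Active Directory, Microsoft Account, Facebook, Google, and Twitter. Each provider exposes a
sign-in endpoint of the form /.auth/login/<provider>.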
When you enable authentication and authorization with one of these providers, its sign-in endpoint is
available for user authentication and for validation of authentication tokens from the provider. You can
provide your users with any number of these sign-in options with ease. You can also integrate another
identity provider or your own custom identity solution.
Authentication flow
The authentication flow is the same for all providers, but differs depending on whether you want to sign
in with the provider's SDK:
●● Without provider SDK: The application delegates federated sign-in to App Service. This is typically
the case with browser apps, which can present the provider's login page to the user. The server code
manages the sign-in process, so it is also called server-directed flow or server flow. This case applies to
web apps. It also applies to native apps that sign users in using the Mobile Apps client SDK because
the SDK opens a web view to sign users in with App Service authentication.
●● With provider SDK: The application signs users in to the provider manually and then submits the
authentication token to App Service for validation. This is typically the case with browser-less apps,
which can't present the provider's sign-in page to the user. The application code manages the sign-in
process, so it is also called client-directed flow or client flow. This case applies to REST APIs, Azure
Functions, and JavaScript browser clients, as well as web apps that need more flexibility in the sign-in
process. It also applies to native mobile apps that sign users in using the provider's SDK.
✔️ Note: Calls from a trusted browser app in App Service to another REST API in App Service or Azure
Functions can be authenticated using the server-directed flow. For more information, see Customize
authentication and authorization in App Service2.
The table below shows the steps of the authentication flow.
Authorization behavior
In the Azure portal, you can configure App Service authorization with a number of behaviors:
1. Allow Anonymous requests (no action): This option defers authorization of unauthenticated traffic
to your application code. For authenticated requests, App Service also passes along authentication
information in the HTTP headers. This option provides more flexibility in handling anonymous
requests. It lets you present multiple sign-in providers to your users.
2. Allow only authenticated requests: The option is Log in with <provider>. App Service redirects all
anonymous requests to /.auth/login/<provider> for the provider you choose. If the anonymous
request comes from a native mobile app, the returned response is an HTTP 401 Unauthorized.
With this option, you don't need to write any authentication code in your app.
Caution: Restricting access in this way applies to all calls to your app, which may not be desirable for
apps wanting a publicly available home page, as in many single-page applications.
2 https://docs.microsoft.com/en-us/azure/app-service/app-service-authentication-how-to
Deprecated versions
When an older version is deprecated, the removal date is announced so that you can plan your runtime
version upgrade accordingly.
To find all possible outbound IP addresses for your app, regardless of pricing tiers, run the following
command in the Cloud Shell.
az webapp show \
--resource-group <group_name> \
--name <app_name> \
--query possibleOutboundIpAddresses \
--output tsv
Routing methods
Azure Traffic Manager uses four different routing methods. These methods are described in the following
list as they pertain to Azure App Service.
●● Priority: use a primary app for all traffic, and provide backups in case the primary or the backup apps
are unavailable.
●● Weighted: distribute traffic across a set of apps, either evenly or according to weights, which you
define.
●● Performance: when you have apps in different geographic locations, use the “closest” app in terms of
the lowest network latency.
●● Geographic: direct users to specific apps based on which geographic location their DNS query
originates from.
For more information, see Traffic Manager routing methods5.
5 https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods
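For reference, a Traffic Manager profile with one of these routing methods can also be created from the
command line. A minimal Azure CLI sketch with placeholder names:
az network traffic-manager profile create --name <profile_name> --resource-group <group_name> \
    --routing-method Performance --unique-dns-name <unique_dns_name>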
Languages
App Service on Linux supports a number of built-in images in order to increase developer productivity. If
the runtime your application requires is not supported in the built-in images, you can build your own
Docker image to deploy to Web App for Containers. Creating Docker images is covered later in the course.
●● Deployments: FTP, Local Git, GitHub, Bitbucket
●● DevOps: Staging environments, Azure Container Registry and DockerHub CI/CD
●● Console, Publishing, and Debugging: Environments, Deployments, Basic console, SSH
●● Scaling: Customers can scale web apps up and down by changing the tier of their App Service plan
●● Locations
Limitations
App Service on Linux is only supported with the Free, Basic, Standard, and Premium App Service plans
and does not have a Shared tier. You cannot create a Linux Web App in an App Service plan already
hosting non-Linux Web Apps.
Based on a current limitation, you cannot mix Windows and Linux apps in the same resource group in
the same region.
Troubleshooting
When your application fails to start or you want to check the logging from your app, check the Docker
logs in the LogFiles directory. You can access this directory either through your SCM site or via FTP. To log
the stdout and stderr from your container, you need to enable Docker Container logging under
App Service Logs. The setting takes effect immediately. App Service detects the change and restarts the
container automatically.
You can access the SCM site from Advanced Tools in the Development Tools menu.
7. Once the app is ready you can select the Go to resource button and the portal will display the web
app overview page. To preview your new web app's default content, select its URL at the top right. The
placeholder page that loads indicates that your web app is up and running and ready to receive
deployment of your app's code.
Clean up resources
1. In the Azure Portal select Resource groups.
2. Right-click on the resource group you created above and select Delete resource group. You will be
prompted to enter the resource group name to verify you want to delete it. Enter the name of the
resource group and select Delete.
Prerequisites
This demo is performed in the Cloud Shell using the Bash environment.
Login to Azure
1. Log in to the Azure portal7 and open the Cloud Shell.
2. Be sure to select the Bash environment.
1. Create a directory for the demo files and then navigate to it.
mkdir $HOME/demoHTML
cd $HOME/demoHTML
2. Run the following command to clone the sample app repository to your demoHTML directory.
git clone https://github.com/Azure-Samples/html-docs-hello-world.git
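3. Change to the directory that contains the sample code and create the web app. A minimal sketch: the
app name placeholder is a unique name you choose, and the westeurope location matches the sample
output that follows.
cd html-docs-hello-world
az webapp up --location westeurope --name <app_name> --html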
This command may take a few minutes to run. While running, it displays information similar to the
example below. Make a note of the resourceGroup value. You need it for the clean up resources
section.
{
"app_url": "https://<app_name>.azurewebsites.net",
"location": "westeurope",
"name": "<app_name>",
"os": "Windows",
"resourcegroup": "appsvc_rg_Windows_westeurope",
"serverfarm": "appsvc_asp_Windows_westeurope",
"sku": "FREE",
"src_path": "/home/<username>/demoHTML/html-docs-hello-world ",
< JSON data removed for brevity. >
}
7 https://portal.azure.com
4. Once the deployment completes, switch back to the browser from step 2 in the “Create the web app”
section above and refresh the page.
Clean up resources
1. After completing the demo you can delete the resources you created using the resource group name
you noted in step 1 of the “Create the web app” section above.
az group delete --name <resource_group> --no-wait
Later in the demo you'll be entering more commands in the Git Bash window, so be sure to leave it
open.
2. Launch the Azure Cloud Shell and be sure to select the Bash environment.
●● You can either launch the Cloud Shell through the portal (https://portal.azure.com),
or by launching the shell directly (https://shell.azure.com).
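1. Create a deployment user with the following command. The user name and password values are
placeholders you choose:
az webapp deployment user set --user-name <username> --password <password>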
●● The username must be unique within Azure, and for local Git pushes, must not contain the ‘@’
symbol.
●● The password must be at least eight characters long, with two of the following three elements:
letters, numbers, and symbols.
●● The JSON output shows the password as null. If you get a 'Conflict'. Details: 409 error,
change the username. If you get a 'Bad Request'. Details: 400 error, use a stronger
password.
Record your username and password to use to deploy your web apps.
2. Get the web app deployment URL. The deployment URL is used in the Git Bash window to connect
your local Git repository to the web app:
az webapp deployment source config-local-git --name <MyUniqueApp> --resource-group <MyResourceGroup>
The command will return JSON similar to the example below; you'll use the URL in the Git Bash
window in the next step.
{
"url": "https://<deployment-user>@<MyUniqueApp>.scm.azurewebsites.net/<MyUniqueApp>.git"
}
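3. In the Git Bash window, connect your local repository to the web app and push your code. A minimal
sketch, assuming the URL returned by the previous command:
git remote add azure <url>
git push azure master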
Verify results
In the Azure Portal navigate to the web app you created above:
1. In the Overview section select the URL to verify the app was deployed successfully.
2. Select Deployment Center to view deployment information.
From here you can make changes to the code in the local repository and push the changes to the web app.
Clean up resources
In the Cloud Shell use the following command to delete the resource group and the resources it contains.
The --no-wait portion of the command will return you to the Bash prompt quickly without showing
you the results of the command. You can confirm the resource group was deleted in the Azure Portal.
az group delete --name <MyResourceGroup> --no-wait
Editing in bulk
To add or edit app settings in bulk, click the Advanced edit button. When finished, click Update. App
settings have the following JSON formatting:
[
{
"name": "<key-1>",
"value": "<value-1>",
"slotSetting": false
},
{
"name": "<key-2>",
"value": "<value-2>",
"slotSetting": false
},
...
]
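You can apply settings from the command line as well. A minimal Azure CLI sketch, using hypothetical
key names:
az webapp config appsettings set --name <app_name> --resource-group <group_name> \
    --settings key1=value1 key2=value2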
●● Script processor: The absolute path of the script processor. Requests to files that match the
file extension are processed by the script processor. Use the path D:\home\site\wwwroot to refer
to your app's root directory.
●● Arguments: Optional command-line arguments for the script processor.
Each app has the default root path (/) mapped to D:\home\site\wwwroot, where your code is
deployed by default. If your app root is in a different folder, or if your repository has more than one
application, you can edit or add virtual applications and directories here.
To configure virtual applications and directories, specify each virtual directory and its corresponding
physical path relative to the website root (D:\home). Optionally, you can select the Application
checkbox to mark a virtual directory as an application.
Containerized apps
You can add custom storage for your containerized app. Containerized apps include all Linux apps and
also the Windows and Linux custom containers running on App Service. Click New Azure Storage
Mount and configure your custom storage as follows:
●● Name: The display name.
●● Configuration options: Basic or Advanced.
●● Storage accounts: The storage account with the container you want.
●● Storage type: Azure Blobs or Azure Files. Windows container apps only support Azure Files.
●● Storage container: For basic configuration, the container you want.
●● Share name: For advanced configuration, the file share name.
●● Access key: For advanced configuration, the access key.
●● Mount path: The absolute path in your container to mount the custom storage.
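The same mount can be scripted. A minimal Azure CLI sketch, using placeholder names for the storage
account, share, access key, and mount path:
az webapp config storage-account add --resource-group <group_name> --name <app_name> \
    --custom-id <mount_name> \
    --storage-type AzureFiles \
    --account-name <storage_account> \
    --share-name <share_name> \
    --access-key <access_key> \
    --mount-path /path/to/mount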
Stream logs
Before you stream logs in real time, enable the log type that you want. Any information written to files
ending in .txt, .log, or .htm that are stored in the /LogFiles directory (d:/home/logfiles) is streamed
by App Service.
Note: Some types of logging buffer write to the log file, which can result in out of order events in the
stream. For example, an application log entry that occurs when a user visits a page may be displayed in
the stream before the corresponding HTTP log entry for the page request.
●● Azure Portal - To stream logs in the Azure portal, navigate to your app and select Log stream.
●● Azure CLI - To stream logs live in Cloud Shell, use the following command:
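az webapp log tail --name <app_name> --resource-group <resource_group_name>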
●● Local console - To stream logs in the local console, install Azure CLI and sign in to your account. Once
signed in, follow the instructions for Azure CLI above.
3. Choose your tier, and then select Apply. Select the different categories (for example, Production) and
also See additional options to show more tiers.
When the operation is complete, you see a notification pop-up with a green success check mark.
Manual scale
Using the Manual scale option works best when your app is under fairly consistent loads over time.
Setting the instance count higher than your app needs means you're paying for capacity you aren't using.
And, setting the instance count lower means your app may experience periods where it isn't responsive.
Changing the instance count is straightforward: simply adjust the instance count by either dragging the
Instance count slider or entering the number manually, and then selecting Save.
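Manual scaling can also be scripted against the App Service plan. A minimal Azure CLI sketch with
placeholder names:
az appservice plan update --name <plan_name> --resource-group <group_name> --number-of-workers 3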
Custom autoscale
To get started scaling your app based on custom metrics and/or dates, select the Custom autoscale
option on the page. When you select that option a Default autoscale condition is created for you, and
that condition is executed when none of the other scale conditions match. You are required to have at
least one condition in place.
Important: Always use a scale-out and scale-in rule combination that performs an increase and decrease.
If you use only one part of the combination, autoscale will only take action in a single direction (scale out,
or in) until it reaches the maximum, or minimum, instance count defined in the profile. This is not
optimal; ideally you want your resource to scale up at times of high usage to ensure availability, and at
times of low usage you want your resource to scale down, so you can realize cost savings.
●● Profile 1
  ●● Autoscale rule 1
  ●● Autoscale rule 2
●● Profile 2
  ●● Autoscale rule 1
Below is an example of an autoscale setting that scales out on Friday and Saturday and scales in for the
rest of the week, so it's not using any rules containing metrics to trigger scaling events. The example has
been truncated for readability.
Note the example below displays a single profile:
1. Scale-out Weekends
* capacity is set to 2 instances for the minimum, maximum, and default
* recurrence is set to Week and days is set to “Friday”, "Saturday"
{
"location": "West US",
"tags": {},
"properties": {
"name": "az204-scale-appsvcpln-Autoscale-136",
"enabled": true,
"targetResourceUri": ".../az204-scale-appsvcpln",
"profiles": [
{
"name": "Scale-out Weekends",
"capacity": {
"minimum": "2",
"maximum": "2",
"default": "2"
},
"rules": [],
"recurrence": {
"frequency": "Week",
"schedule": {
"timeZone": "Pacific Standard Time",
"days": [
"Friday",
"Saturday"
],
"hours": [
6
],
"minutes": [
0
]
}
}
}
],
"notifications": [],
"targetResourceLocation": "West US"
},
"id": "...",
"name": "az204-scale-appsvcpln-Autoscale-136",
"type": "Microsoft.Insights/autoscaleSettings"
}
Autoscale profiles
There are three types of Autoscale profiles:
●● Regular profile: The most common profile. If you don’t need to scale your resource based on the day
of the week, or on a particular day, you can use a regular profile. This profile can then be configured
with metric rules that dictate when to scale out and when to scale in. You should only have one
regular profile defined.
●● Fixed date profile: This profile is for special cases. For example, let’s say you have an important event
coming up on December 26, 2017 (PST). You want the minimum and maximum capacities of your
resource to be different on that day, but still scale on the same metrics. In this case, you should add a
fixed date profile to your setting’s list of profiles. The profile is configured to run only on the event’s
day. For any other day, Autoscale uses the regular profile.
●● Recurrence profile: This type of profile enables you to ensure that this profile is always used on a
particular day of the week. Recurrence profiles only have a start time. They run until the next
recurrence profile or fixed date profile is set to start. An Autoscale setting with only one recurrence profile
runs that profile, even if there is a regular profile defined in the same setting.
Autoscale evaluation
Given that Autoscale settings can have multiple profiles, and each profile can have multiple metric rules,
it is important to understand how an Autoscale setting is evaluated. Each time the Autoscale job runs, it
begins by choosing the profile that is applicable. Then Autoscale evaluates the minimum and maximum
values, and any metric rules in the profile, and decides if a scale action is necessary.
autoscale action. You can also configure email or webhook notifications to get notified for successful
scale actions via the notifications tab on the autoscale setting.
5. To avoid this situation (termed “flapping”), autoscale does not scale down at all. Instead, it skips and
reevaluates the condition again the next time the service's job executes. This can confuse many
people because autoscale wouldn't appear to work when the average thread count was 575.
Estimation during a scale-in is intended to avoid “flapping” situations, where scale-in and scale-out
actions continually go back and forth. Keep this behavior in mind when you choose the same thresholds
for scale-out and in.
We recommend choosing an adequate margin between the scale-out and in thresholds. As an example,
consider the following better rule combination.
●● Increase instances by 1 count when CPU% >= 80
●● Decrease instances by 1 count when CPU% <= 60
In this case
1. Assume there are 2 instances to start with.
2. If the average CPU% across instances goes to 80, autoscale scales out adding a third instance.
3. Now assume that over time the CPU% falls to 60.
4. Autoscale's scale-in rule estimates the final state if it were to scale-in. For example, 60 x 3 (current
instance count) = 180; 180 / 2 (final number of instances when scaled down) = 90. So autoscale does not
scale-in because it would have to scale-out again immediately. Instead, it skips scaling down.
5. The next time autoscale checks, the CPU continues to fall to 50. It estimates again: 50 x 3 instances =
150; 150 / 2 instances = 75, which is below the scale-out threshold of 80, so it scales in successfully to 2
instances.
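The rule combination above can also be created from the command line. A minimal Azure CLI sketch
against an App Service plan, assuming the plan's CpuPercentage metric and placeholder names:
az monitor autoscale create --resource-group <group_name> \
    --resource <plan_name> --resource-type Microsoft.Web/serverfarms \
    --name <autoscale_setting_name> --min-count 2 --max-count 5 --count 2
az monitor autoscale rule create --resource-group <group_name> \
    --autoscale-name <autoscale_setting_name> \
    --condition "CpuPercentage > 80 avg 10m" --scale out 1
az monitor autoscale rule create --resource-group <group_name> \
    --autoscale-name <autoscale_setting_name> \
    --condition "CpuPercentage < 60 avg 10m" --scale in 1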
5. Select the new deployment slot to open that slot's resource page. The staging slot has a management
page just like any other App Service app. You can change the slot's configuration. The name of the
slot is shown at the top of the page to remind you that you're viewing the deployment slot.
6. Select the app URL on the slot's resource page. The deployment slot has its own host name and is
also a live app.
The new deployment slot has no content, even if you clone the settings from a different slot. For
example, you can publish to this slot with Git. You can deploy to the slot from a different repository branch or
a different repository.
The dialog box shows you how the configuration in the source slot changes in phase 1, and how the
source and target slot change in phase 2.
2. When you're ready to start the swap, select Start Swap.
When phase 1 finishes, you're notified in the dialog box. Preview the swap in the source slot by going
to https://<app_name>-<source-slot-name>.azurewebsites.net.
3. When you're ready to complete the pending swap, select Complete Swap in Swap action and select
Complete Swap.
To cancel a pending swap, select Cancel Swap instead.
4. When you're finished, close the dialog box by selecting Close.
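The same swap-with-preview workflow is scriptable. A minimal Azure CLI sketch with placeholder names;
use --action reset instead to cancel a pending swap:
az webapp deployment slot swap --resource-group <group_name> --name <app_name> \
    --slot staging --action preview
az webapp deployment slot swap --resource-group <group_name> --name <app_name> \
    --slot staging --action swap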
Monitor a swap
If the swap operation takes a long time to complete, you can get information on the swap operation in
the activity log.
On your app's resource page in the portal, in the left pane, select Activity log.
A swap operation appears in the log query as Swap Web App Slots. You can expand it and select one
of the suboperations or errors to see the details.
</system.webServer>
For more information on customizing the applicationInitialization element, see Most common
deployment slot swap failures and how to fix them9.
You can also customize the warm-up behavior with one or both of the following app settings:
●● WEBSITE_SWAP_WARMUP_PING_PATH: The path to ping to warm up your site. Add this app setting
by specifying a custom path that begins with a slash as the value. An example is /statuscheck. The
default value is /.
●● WEBSITE_SWAP_WARMUP_PING_STATUSES: Valid HTTP response codes for the warm-up operation.
Add this app setting with a comma-separated list of HTTP codes. An example is 200,202. If the
returned status code isn't in the list, the warmup and swap operations are stopped. By default, all
response codes are valid.
Note: <applicationInitialization> is part of each app start-up, whereas these two app settings
apply only to slot swaps.
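Both settings can be applied to a slot from the command line. A minimal Azure CLI sketch, using the
example values above and placeholder names:
az webapp config appsettings set --resource-group <group_name> --name <app_name> \
    --slot staging \
    --settings WEBSITE_SWAP_WARMUP_PING_PATH=/statuscheck WEBSITE_SWAP_WARMUP_PING_STATUSES=200,202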
Routing traffic
By default, all client requests to the app's production URL (http://<app_name>.azurewebsites.
net) are routed to the production slot. You can route a portion of the traffic to another slot. This feature
is useful if you need user feedback for a new update, but you're not ready to release it to production.
9 https://ruslany.net/2017/11/most-common-deployment-slot-swap-failures-and-how-to-fix-them/
The string x-ms-routing-name=self specifies the production slot. After the client browser accesses
the link, it's redirected to the production slot. Every subsequent request has the
x-ms-routing-name=self cookie that pins the session to the production slot.
To let users opt in to your beta app, set the same query parameter to the name of the non-production
slot. Here's an example:
<webappname>.azurewebsites.net/?x-ms-routing-name=staging
By default, new slots are given a routing rule of 0%, displayed in grey. When you explicitly set this value to 0%, it is displayed in black. Your users can still access the staging slot manually by using the x-ms-routing-name query parameter, but they won't be routed to the slot automatically because the routing percentage is set to 0. This is an advanced scenario where you can “hide” your staging slot from the public while allowing internal teams to test changes on the slot.
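If you instead want a share of traffic routed automatically, a hedged Azure CLI sketch (resource group and app names are placeholders) for sending 10% of production traffic to a staging slot is:
az webapp traffic-routing set --resource-group myResourceGroup --name <app_name> --distribution staging=10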
Lab scenario
You're the owner of a startup organization and have been building an image gallery application for
people to share great images of food. To get your product to market as quickly as possible, you decided
to use Microsoft Azure App Service to host your web apps and APIs.
Objectives
After you complete this lab, you will be able to:
●● Create various apps by using App Service.
●● Configure application settings for an app.
●● Deploy apps by using Kudu, the Azure Command-Line Interface (CLI), and zip file deployment.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
Which of the following App Service plans is available only to function apps?
Shared compute
Dedicated compute
Isolated
Consumption
None of the above
Review Question 3
You can use Azure Traffic Manager to control how requests from web clients are distributed to apps in Azure App Service.
Which of the options listed below are valid routing methods for Azure Traffic Manager?
Select all that apply.
Priority
Weighted
Performance
Scale
Geographic
Review Question 4
Which of the following App Service tiers is not currently available to App Service on Linux?
Free
Basic
Shared
Standard
Premium
Review Question 5
Which of the following settings are not swapped when you swap an app?
Select all that apply.
Handler mappings
Publishing endpoints
General settings, such as framework version, 32/64-bit, web sockets
Always On
Custom domain names
Answers
Review Question 1
You have multiple apps running in a single App Service plan. True or False: Each app in the service plan
can have different scaling rules.
True
■■ False
Explanation
The App Service plan is the scale unit of the App Service apps. If the plan is configured to run five VM
instances, then all apps in the plan run on all five instances. If the plan is configured for autoscaling, then all
apps in the plan are scaled out together based on the autoscale settings.
Review Question 2
Which of the following App Service plans is available only to function apps?
Shared compute
Dedicated compute
Isolated
■■ Consumption
None of the above
Explanation
The consumption tier is only available to function apps. It scales the functions dynamically depending on
workload.
Review Question 3
You can use Azure Traffic Manager to control how requests from web clients are distributed to apps in Azure App Service. Which of the options listed below are valid routing methods for Azure Traffic Manager?
Select all that apply.
■■ Priority
■■ Weighted
■■ Performance
Scale
■■ Geographic
Explanation
Of the options listed, Priority, Weighted, Performance, and Geographic are valid Azure Traffic Manager routing methods.
Review Question 4
Which of the following App Service tiers is not currently available to App Service on Linux?
Free
Basic
■■ Shared
Standard
Premium
Explanation
App Service on Linux is only supported with Free, Basic, Standard, and Premium app service plans and does not have a Shared tier. You cannot create a Linux Web App in an App Service plan already hosting non-Linux Web Apps.
Review Question 5
Which of the following settings are not swapped when you swap an app?
Select all that apply.
Handler mappings
■■ Publishing endpoints
General settings, such as framework version, 32/64-bit, web sockets
■■ Always On
■■ Custom domain names
Explanation
Some configuration elements follow the content across a swap (not slot specific), whereas other configuration elements stay in the same slot after a swap (slot specific). The following are slot specific: publishing endpoints, custom domain names, non-public certificates and TLS/SSL settings, scale settings, WebJobs schedulers, IP restrictions, Always On, and diagnostic log settings.
Module 2 Implement Azure functions
Integrations
Azure Functions integrates with various Azure and third-party services. These services can trigger your function and start execution, or they can serve as input and output for your code. The following service integrations are supported by Azure Functions:
●● Azure Cosmos DB
●● Azure Event Hubs
●● Azure Event Grid
●● Azure Notification Hubs
●● Azure Service Bus (queues and topics)
●● Azure Storage (blob, queues, and tables)
●● On-premises (using Service Bus)
Consumption plan
When you're using the Consumption plan, instances of the Azure Functions host are dynamically added
and removed based on the number of incoming events. This serverless plan scales automatically, and
you're charged for compute resources only when your functions are running. On a Consumption plan, a
function execution times out after a configurable period of time.
Billing is based on number of executions, execution time, and memory used. Billing is aggregated across
all functions within a function app.
The Consumption plan is the default hosting plan and offers the following benefits:
●● Pay only when your functions are running
●● Scale out automatically, even during periods of high load
Function apps in the same region can be assigned to the same Consumption plan. There's no downside to doing so: running multiple apps in the same Consumption plan has no impact on the resilience, scalability, or reliability of each app.
Premium plan
When you're using the Premium plan, instances of the Azure Functions host are added and removed
based on the number of incoming events just like the Consumption plan. Premium plan supports the
following features:
●● Perpetually warm instances to avoid any cold start
●● VNet connectivity
●● Unlimited execution duration
●● Premium instance sizes (one core, two core, and four core instances)
Always On
If you run on an App Service plan, you should enable the Always on setting so that your function app
runs correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity,
so only HTTP triggers will “wake up” your functions. Always on is available only on an App Service plan.
On a Consumption plan, the platform activates function apps automatically.
Azure Functions relies on Azure Storage for operations such as managing triggers and logging function executions, but some storage accounts do not support queues and tables. These accounts, which include blob-only storage accounts (including premium storage) and general-purpose storage accounts with zone-redundant storage replication, are filtered out from your existing Storage Account selections when you create a function app.
The same storage account used by your function app can also be used by your triggers and bindings to
store your application data. However, for storage-intensive operations, you should use a separate storage
account.
Runtime scaling
Azure Functions uses a component called the scale controller to monitor the rate of events and deter-
mine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For
example, when you're using an Azure Queue storage trigger, it scales based on the queue length and the
age of the oldest queue message.
The unit of scale for Azure Functions is the function app. When the function app is scaled out, additional
resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute
demand is reduced, the scale controller removes function host instances. The number of instances is
eventually scaled down to zero when no functions are running within a function app.
Overview
Triggers are what cause a function to run. A trigger defines how a function is invoked and a function must
have exactly one trigger. Triggers have associated data, which is often provided as the payload of the
function.
Binding to a function is a way of declaratively connecting another resource to the function; bindings may
be connected as input bindings, output bindings, or both. Data from bindings is provided to the function
as parameters.
You can mix and match different bindings to suit your needs. Bindings are optional and a function might
have one or multiple input and/or output bindings.
Triggers and bindings let you avoid hardcoding access to other services. Your function receives data (for
example, the content of a queue message) in function parameters. You send data (for example, to create
a queue message) by using the return value of the function.
Binding direction
All triggers and bindings have a direction property in the function.json file:
●● For triggers, the direction is always in
●● Input and output bindings use in and out
●● Some bindings support a special direction inout. If you use inout, only the Advanced editor is
available via the Integrate tab in the portal.
When you use attributes in a class library to configure triggers and bindings, the direction is provided in
an attribute constructor or inferred from the parameter type.
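As an illustrative C# class-library sketch (the function name, queue name, and POCO types here are hypothetical, chosen to mirror the function.json example that follows), the same trigger and output binding can be declared with attributes; note that no direction property appears anywhere, because the runtime infers it:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Order
{
    public string Name { get; set; }
    public string MobileNumber { get; set; }
}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
    public string MobileNumber { get; set; }
}

public static class ProcessOrder
{
    // The QueueTrigger attribute implies direction "in" for the trigger;
    // the Table attribute on the return value implies direction "out".
    [FunctionName("ProcessOrder")]
    [return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]
    public static Person Run(
        [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")] Order order,
        ILogger log)
    {
        log.LogInformation($"Processing order for {order.Name}");
        return new Person
        {
            PartitionKey = "Orders",
            RowKey = Guid.NewGuid().ToString(),
            Name = order.Name,
            MobileNumber = order.MobileNumber
        };
    }
}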
"connection": "MY_STORAGE_ACCT_APP_SETTING"
},
{
"type": "table",
"direction": "out",
"name": "$return",
"tableName": "outTable",
"connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
}
]
}
The first element in the bindings array is the Queue storage trigger. The type and direction
properties identify the trigger. The name property identifies the function parameter that receives the
queue message content. The name of the queue to monitor is in queueName, and the connection string
is in the app setting identified by connection.
The second element in the bindings array is the Azure Table Storage output binding. The type and
direction properties identify the binding. The name property specifies how the function provides the
new table row, in this case by using the function return value. The name of the table is in tableName,
and the connection string is in the app setting identified by connection.
To view and edit the contents of function.json in the Azure portal, click the Advanced editor option on
the Integrate tab of your function.
C# script example
Here's C# script code that works with this trigger and binding. Notice that the name of the parameter
that provides the queue message content is order; this name is required because the name property
value in function.json is order.
#r "Newtonsoft.Json"
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;
// From an incoming queue message that is a JSON object, add fields and write to Table storage
// The method return value creates a new row in Table Storage
public static Person Run(JObject order, ILogger log)
{
return new Person() {
PartitionKey = "Orders",
RowKey = Guid.NewGuid().ToString(),
Name = order["Name"].ToString(),
MobileNumber = order["MobileNumber"].ToString() };
}
JavaScript example
The same function.json file can be used with a JavaScript function:
// From an incoming queue message that is a JSON object, add fields and write to Table Storage
// The second parameter to context.done is used as the value for the new row
module.exports = function (context, order) {
order.PartitionKey = "Orders";
order.RowKey = generateRandomId();
context.done(null, order);
};
function generateRandomId() {
return Math.random().toString(36).substring(2, 15) +
Math.random().toString(36).substring(2, 15);
}
Further reading
For more detailed examples of triggers and bindings please visit:
●● Azure Blob storage bindings for Azure Functions
●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob
●● Azure Cosmos DB bindings for Azure Functions 2.x
●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2
●● Timer trigger for Azure Functions
●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer
●● Azure Functions HTTP triggers and bindings
●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook
Function code
A function contains two important pieces - your code, which can be written in a variety of languages, and
some config, the function.json file. For compiled languages, this config file is generated automatically
from annotations in your code. For scripting languages, you must provide the config file yourself.
The function.json file defines the function's trigger, bindings, and other configuration settings. Every func-
tion has one and only one trigger. The runtime uses this config file to determine the events to monitor
and how to pass data into and return data from a function execution. The following is an example
function.json file.
{
"disabled":false,
"bindings":[
// ... bindings here
{
"type": "bindingType",
"direction": "in",
"name": "myParamName",
// ... more depending on binding
}
]
}
The bindings property is where you configure both triggers and bindings. Each binding shares a few common settings plus some settings that are specific to a particular type of binding. Every binding requires the following settings: type (names the binding type, for example queueTrigger), direction (indicates whether the binding receives data into the function or sends data from the function), and name (the name used for the bound data in the function; in C# this is an argument name, and in JavaScript it's the key in a key/value list).
Function app
A function app provides an execution context in Azure in which your functions run. As such, it is the unit
of deployment and management for your functions. A function app is comprised of one or more individ-
ual functions that are managed, deployed, and scaled together. All of the functions in a function app
share the same pricing plan, deployment method, and runtime version. Think of a function app as a way
to organize and collectively manage your functions.
Folder structure
The code for all the functions in a specific function app is located in a root project folder that contains a host configuration file and one or more subfolders. Each subfolder contains the code for a separate function.
The host.json file contains runtime-specific configurations and is in the root folder of the function app. A
bin folder contains packages and other library files that the function app requires.
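A sketch of a typical layout (folder and file names are illustrative) looks like this:
FunctionsProject
 | - host.json
 | - bin
 | - MyFirstFunction
 | | - function.json
 | | - index.js
 | - MySecondFunction
 | | - function.json
 | | - index.js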
The Functions editor built into the Azure portal lets you update your code and your function.json file
directly inline. This is recommended only for small changes or proofs of concept - best practice is to use a
local development tool like VS Code.
6. Select Create to provision and deploy the function app. When the deployment is complete select Go
to resource to view your new function app.
Next, you'll create a function in the new function app.
2. In the dialog box that appears select default (Function key), and then click Copy.
3. Paste the function URL into your browser's address bar. Add the query string value &name=<your-
name> to the end of this URL and press the Enter key on your keyboard to execute the request. You
should see the response returned by the function displayed in the browser.
4. When your function runs, trace information is written to the logs. To see the trace output from the
previous execution, return to your function in the portal and click the arrow at the bottom of the
screen to expand the Logs.
Clean up resources
You can clean up the resources created in this demo simply by deleting the resource group that was
created early in the demo.
Supported languages
Durable Functions currently supports the following languages:
●● C#: both precompiled class libraries and C# script.
●● JavaScript: supported only for version 2.x of the Azure Functions runtime. Requires version 1.7.0 of
the Durable Functions extension, or a later version.
●● F#: precompiled class libraries and F# script. F# script is only supported for version 1.x of the Azure
Functions runtime.
Application patterns
The primary use case for Durable Functions is simplifying complex, stateful coordination requirements in
serverless applications. The following sections describe typical application patterns that can benefit from
Durable Functions:
●● Function chaining
●● Fan-out/fan-in
●● Async HTTP APIs
Function chaining
In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the
output of one function is applied to the input of another function.
In the example below, the values F1, F2, F3, and F4 are the names of other functions in the function app.
You can implement control flow by using normal imperative coding constructs. Code executes from the
top down. The code can involve existing language control flow semantics, like conditionals and loops.
You can include error handling logic in try/catch/finally blocks.
// Functions 2.0 only
const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
    const x = yield context.df.callActivity("F1");
    const y = yield context.df.callActivity("F2", x);
    const z = yield context.df.callActivity("F3", y);
    return yield context.df.callActivity("F4", z);
});
Fan out/fan in
In the fan out/fan in pattern, you execute multiple functions in parallel and then wait for all functions to
finish. Often, some aggregation work is done on the results that are returned from the functions.
With normal functions, you can fan out by having the function send multiple messages to a queue. To fan
in you write code to track when the queue-triggered functions end, and then store function outputs.
In the example below, the fan-out work is distributed to multiple instances of the F2 function. The work is
tracked by using a dynamic list of tasks. The .NET Task.WhenAll API or JavaScript context.df.Task.
all API is called, to wait for all the called functions to finish. Then, the F2 function outputs are aggregat-
ed from the dynamic task list and passed to the F3 function.
const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
    const parallelTasks = [];
    // Get a list of work items to process in parallel.
    const workBatch = yield context.df.callActivity("F1");
    for (let i = 0; i < workBatch.length; i++) {
        parallelTasks.push(context.df.callActivity("F2", workBatch[i]));
    }
    yield context.df.Task.all(parallelTasks);
    // Aggregate the outputs and send the result to F3.
    const sum = parallelTasks.reduce((prev, curr) => prev + curr.result, 0);
    yield context.df.callActivity("F3", sum);
});
The automatic checkpointing that happens at the await or yield call on Task.WhenAll or context.
df.Task.all ensures that a potential midway crash or reboot doesn't require restarting an already
completed task.
Async HTTP APIs
Durable Functions provides built-in support for this pattern, simplifying or even removing the code you
need to write to interact with long-running function executions. After an instance starts, the extension
exposes webhook HTTP APIs that query the orchestrator function status.
The following example shows REST commands that start an orchestrator and query its status. For clarity,
some protocol details are omitted from the example.
> curl -X POST https://myfunc.azurewebsites.net/orchestrators/DoWork -H "Content-Length: 0" -i
HTTP/1.1 202 Accepted
Content-Type: application/json
Location: https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec

{"id":"b79baf67f717453ca9e86c5da21e03ec", ...}

> curl https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec -i
HTTP/1.1 202 Accepted
Content-Type: application/json

{"runtimeStatus":"Running","lastUpdatedTime":"2019-03-16T21:20:47Z", ...}

> curl https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec -i
HTTP/1.1 200 OK
Content-Type: application/json

{"runtimeStatus":"Completed","lastUpdatedTime":"2019-03-16T21:20:57Z", ...}
The Durable Functions extension exposes built-in HTTP APIs that manage long-running orchestrations.
You can alternatively implement this pattern yourself by using your own function triggers (such as HTTP,
a queue, or Azure Event Hubs) and the orchestration client binding.
Additional resources
●● To learn about the differences between Durable Functions 1.x and 2.x visit:
●● https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-ver-
sions
Durable Orchestrations
This lesson gives you an overview of orchestrator functions and how they can help you solve various app
development challenges.
Orchestration identity
Each instance of an orchestration has an instance identifier (also known as an instance ID). By default,
each instance ID is an autogenerated GUID. However, instance IDs can also be any user-generated string
value. Each orchestration instance ID must be unique within a task hub.
✔️ Note: It is generally recommended to use autogenerated instance IDs whenever possible. User-gener-
ated instance IDs are intended for scenarios where there is a one-to-one mapping between an orchestra-
tion instance and some external application-specific entity, like a purchase order or a document.
An orchestration's instance ID is a required parameter for most instance management operations. They
are also important for diagnostics, such as searching through orchestration tracking data in Application
Insights for troubleshooting or analytics purposes. For this reason, it is recommended to save generated
instance IDs to some external location (for example, a database or in application logs) where they can be
easily referenced later.
Reliability
Orchestrator functions reliably maintain their execution state by using the event sourcing design pattern.
Instead of directly storing the current state of an orchestration, the Durable Task Framework uses an
append-only store to record the full series of actions the function orchestration takes.
Durable Functions uses event sourcing transparently. Behind the scenes, the await (C#) or yield
(JavaScript) operator in an orchestrator function yields control of the orchestrator thread back to the
Durable Task Framework dispatcher. The dispatcher then commits any new actions that the orchestrator
function scheduled (such as calling one or more child functions or scheduling a durable timer) to storage.
The transparent commit action appends to the execution history of the orchestration instance. The
history is stored in a storage table. The commit action then adds messages to a queue to schedule the
actual work. At this point, the orchestrator function can be unloaded from memory.
When an orchestration function is given more work to do, the orchestrator wakes up and re-executes the
entire function from the start to rebuild the local state. During the replay, if the code tries to call a
function (or do any other async work), the Durable Task Framework consults the execution history of the
current orchestration. If it finds that the activity function has already executed and yielded a result, it
replays that function's result and the orchestrator code continues to run. Replay continues until the
function code is finished or until it has scheduled new async work.
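To make the replay behavior concrete, here is a minimal C# sketch, assuming Durable Functions 2.x (the SayHello activity is hypothetical). Each await is a checkpoint; on replay, results for completed activities come from the execution history rather than from re-running the activities:
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class HelloSequence
{
    [FunctionName("HelloSequence")]
    public static async Task<List<string>> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var outputs = new List<string>();
        // Checkpoint 1: the orchestrator is unloaded while SayHello runs,
        // then replayed from the start when the result arrives.
        outputs.Add(await context.CallActivityAsync<string>("SayHello", "Tokyo"));
        // Checkpoint 2: on this replay, the "Tokyo" result is read from history.
        outputs.Add(await context.CallActivityAsync<string>("SayHello", "Seattle"));
        return outputs;
    }
}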
Orchestrator functions support the following patterns and features:
●● Sub-orchestrations: Orchestrator functions can call activity functions, but also other orchestrator functions. For example, you can build a larger orchestration out of a library of orchestrator functions. Or, you can run multiple instances of an orchestrator function in parallel.
●● Durable timers: Orchestrations can schedule durable timers to implement delays or to set up timeout handling on async actions. Use durable timers in orchestrator functions instead of Thread.Sleep and Task.Delay (C#) or setTimeout() and setInterval() (JavaScript).
●● External events: Orchestrator functions can wait for external events to update an orchestration instance. This Durable Functions feature often is useful for handling a human interaction or other external callbacks.
●● Error handling: Orchestrator functions can use the error-handling features of the programming language. Existing patterns like try/catch are supported in orchestration code.
●● Critical sections: Orchestration instances are single-threaded, so it isn't necessary to worry about race conditions within an orchestration. However, race conditions are possible when orchestrations interact with external systems. To mitigate race conditions when interacting with external systems, orchestrator functions can define critical sections using a LockAsync method in .NET.
●● Calling HTTP endpoints: Orchestrator functions aren't permitted to do I/O. The typical workaround for this limitation is to wrap any code that needs to do I/O in an activity function. Orchestrations that interact with external systems frequently use activity functions to make HTTP calls and return the result to the orchestration.
●● Passing multiple parameters: It isn't possible to pass multiple parameters to an activity function directly. The recommendation is to pass in an array of objects or to use ValueTuples objects in .NET.
Timer limitations
When you create a timer that expires at 4:30 pm, the underlying Durable Task Framework enqueues a
message that becomes visible only at 4:30 pm. When running in the Azure Functions Consumption plan,
the newly visible timer message will ensure that the function app gets activated on an appropriate VM.
✔️ Note: Durable timers are currently limited to 7 days. If longer delays are needed, they can be simulat-
ed using the timer APIs in a while loop.
The following example uses a durable timer in a loop to issue a billing event once a day for ten days (the durable-functions and moment npm packages are assumed to be installed):
const df = require("durable-functions");
const moment = require("moment");
module.exports = df.orchestrator(function*(context) {
    for (let i = 0; i < 10; i++) {
        const deadline = moment.utc(context.df.currentUtcDateTime).add(1, 'd');
        yield context.df.createTimer(deadline.toDate());
        yield context.df.callActivity("SendBillingEvent");
    }
});
Durable timers can also implement timeouts on async actions. The next example, completed here as a sketch (the GetQuote activity name is illustrative), succeeds only if an activity finishes before a 30-second deadline:
const df = require("durable-functions");
const moment = require("moment");
module.exports = df.orchestrator(function*(context) {
    const deadline = moment.utc(context.df.currentUtcDateTime).add(30, "s");
    const activityTask = context.df.callActivity("GetQuote");
    const timeoutTask = context.df.createTimer(deadline.toDate());
    const winner = yield context.df.Task.any([activityTask, timeoutTask]);
    if (winner === activityTask) {
        timeoutTask.cancel();
        return true;  // the activity finished before the timer
    }
    return false;     // the timer fired first
});
Wait for external events
Orchestrator functions can also pause and wait for an external event by name, using waitForExternalEvent (JavaScript) or WaitForExternalEvent (.NET). The following example waits for an event named "Approval":
const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
    const approved = yield context.df.waitForExternalEvent("Approval");
    if (approved) {
        // approval granted - do the approved action
    } else {
        // approval denied - send a notification
    }
});
Send events
The RaiseEventAsync (.NET) or raiseEvent (JavaScript) method of the orchestration client binding
sends the events that WaitForExternalEvent (.NET) or waitForExternalEvent (JavaScript) waits
for. The RaiseEventAsync method takes eventName and eventData as parameters. The event data
must be JSON-serializable.
Below is an example queue-triggered function that sends an “Approval” event to an orchestrator function
instance. The orchestration instance ID comes from the body of the queue message.
const df = require("durable-functions");
module.exports = async function (context, instanceId) {
    const client = df.getClient(context);
    await client.raiseEvent(instanceId, "Approval", true);
};
Internally, RaiseEventAsync (.NET) or raiseEvent (JavaScript) enqueues a message that gets picked
up by the waiting orchestrator function. If the instance is not waiting on the specified event name, the
event message is added to an in-memory queue. If the orchestration instance later begins listening for
that event name, it will check the queue for event messages.
Lab scenario
Your company has built a desktop software tool that parses a local JavaScript Object Notation (JSON) file
for its configuration settings. During its latest meeting, your team decided to reduce the number of files
that are distributed with your application by serving your default configuration settings from a URL
instead of from a local file. As the new developer on the team, you've been tasked with evaluating
Microsoft Azure Functions as a solution to this problem.
Objectives
After you complete this lab, you will be able to:
●● Create a Functions app.
●● Create various functions by using built-in triggers.
●● Configure function app triggers and input integrations.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
Is the following statement True or False?
Only Functions running in the consumption plan require a general Azure Storage account.
True
False
Review Question 3
You created a Function in the Azure Portal. Which of the following statements regarding the direction property of the triggers, or bindings, are valid?
(Select all that apply.)
For triggers, the direction is always "in"
For triggers, the direction can be "in" or "out"
Input and output bindings can use "in" and "out"
Some bindings can use "inout"
Review Question 4
Azure Functions uses a component called the scale controller to monitor the rate of events and determine
whether to scale out or scale in.
Is the following statement True or False?
When you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the
newest queue message.
True
False
Review Question 5
Which of the below are valid application patterns that can benefit from durable functions?
(Select all that apply.)
Function chaining
Fan-out/fan-in
Chained lightning
Async HTTP APIs
Answers
Review Question 1
Which of the following plans for Functions automatically add compute power, if needed, when your code is running?
(Select all that apply.)
■■ Consumption Plan
App Service Plan
■■ Premium Plan
Virtual Plan
Explanation
Both Consumption and Premium plans automatically add compute power when your code is running. Your
app is scaled out when needed to handle load, and scaled down when code stops running.
Review Question 2
Is the following statement True or False?
Only Functions running in the consumption plan require a general Azure Storage account.
True
■■ False
Explanation
On any plan, a function app requires a general Azure Storage account, which supports Azure Blob, Queue,
Files, and Table storage.
Review Question 3
You created a Function in the Azure Portal. Which of the following statements regarding the direction property of the triggers, or bindings, are valid?
(Select all that apply.)
■■ For triggers, the direction is always "in"
For triggers, the direction can be "in" or "out"
■■ Input and output bindings can use "in" and "out"
■■ Some bindings can use "inout"
Explanation
All triggers and bindings have a direction property in the function.json file: for triggers, the direction is always in; input and output bindings use in and out; and some bindings support the special direction inout.
Review Question 4
Azure Functions uses a component called the scale controller to monitor the rate of events and deter-
mine whether to scale out or scale in.
Is the following statement True or False?
When you're using an Azure Queue storage trigger, it scales based on the queue length and the age of
the newest queue message.
True
■■ False
Explanation
When using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest
queue message.
Review Question 5
Which of the below are valid application patterns that can benefit from durable functions?
(Select all that apply.)
■■ Function chaining
■■ Fan-out/fan-in
Chained lightning
■■ Async HTTP APIs
Explanation
The following are application patterns that can benefit from Durable Functions:
Module 3 Develop solutions that use blob storage
4 Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS) (preview) are available only for standard general-purpose v2 storage accounts. For more information about ZRS, see Zone-redundant storage (ZRS): Highly available Azure Storage applications.
5 Premium performance for general-purpose v2 and general-purpose v1 accounts is available for disk and page blob only.
General-purpose v2 accounts
General-purpose v2 storage accounts support the latest Azure Storage features and deliver the lowest
per-gigabyte capacity prices for Azure Storage. General-purpose v2 storage accounts offer multiple
access tiers for storing data based on your usage patterns.
Note: Microsoft recommends using a general-purpose v2 storage account for most scenarios. You can
easily upgrade a general-purpose v1 or Blob storage account to a general-purpose v2 account with no
downtime and without the need to copy data.
General-purpose v1 accounts
While general-purpose v2 accounts are recommended in most cases, general-purpose v1 accounts are
best suited to these scenarios:
●● Your applications require the Azure classic deployment model.
●● Your applications are transaction-intensive or use significant geo-replication bandwidth, but do not
require large capacity.
●● You use a version of the Storage Services REST API that is earlier than 2014-02-14 or a client library
with a version lower than 4.x.
Storage accounts
A storage account provides a unique namespace in Azure for your data. Every object that you store in
Azure Storage has an address that includes your unique account name. The combination of the account
name and the Azure Storage blob endpoint forms the base address for the objects in your storage
account.
For example, if your storage account is named mystorageaccount, then the default endpoint for Blob
storage is:
http://mystorageaccount.blob.core.windows.net
Containers
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include
an unlimited number of containers, and a container can store an unlimited number of blobs. The contain-
er name must be lowercase.
Blobs
Azure Storage supports three types of blobs:
●● Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data
that can be managed individually.
●● Append blobs are made up of blocks like block blobs, but are optimized for append operations.
Append blobs are ideal for scenarios such as logging data from virtual machines.
●● Page blobs store random access files up to 8 TB in size. Page blobs store virtual hard drive (VHD) files
and serve as disks for Azure virtual machines. For more information about page blobs, see Overview
of Azure page blobs
●● You can assign RBAC roles scoped to the storage account to security principals and use Azure AD
to authorize resource management operations such as key management.
●● Azure AD integration is supported for blob and queue data operations. You can assign RBAC roles
scoped to a subscription, resource group, storage account, or an individual container or queue to
a security principal or a managed identity for Azure resources.
●● Data can be secured in transit between an application and Azure by using Client-Side Encryption,
HTTPS, or SMB 3.0.
●● OS and data disks used by Azure virtual machines can be encrypted using Azure Disk Encryption.
●● Delegated access to the data objects in Azure Storage can be granted using a shared access signature.
Set the following field values:
●● Performance: Select Premium.
●● Account kind: Select BlockBlobStorage.
●● Replication: Leave the default setting of Locally-redundant storage (LRS).
8. Select Review + create to review the storage account settings.
9. Select Create.
4. Create the block blob storage account. See Step 5 in the Create account in the Azure portal instruc-
tions above for the storage account name requirements. Replace the values in quotations, and run the
following commands:
$storageaccount = "new_storage_account_name"
1 https://portal.azure.com
2 https://shell.azure.com
Rules
Each rule definition includes a filter set and an action set. The filter set limits rule actions to a certain set of objects within a container, or to object names. The action set applies the tier or delete actions to the filtered set of objects.
Sample rule
The following sample rule filters the account to run the actions on objects that exist inside container1
and start with foo.
●● Tier blob to cool tier 30 days after last modification
●● Tier blob to archive tier 90 days after last modification
●● Delete blob 2,555 days (seven years) after last modification
●● Delete blob snapshots 90 days after snapshot creation
{
"rules": [
{
"name": "ruleFoo",
"enabled": true,
"type": "Lifecycle",
"definition": {
"filters": {
"blobTypes": [ "blockBlob" ],
"prefixMatch": [ "container1/foo" ]
},
"actions": {
"baseBlob": {
"tierToCool": { "daysAfterModificationGreaterThan": 30 },
"tierToArchive": { "daysAfterModificationGreaterThan": 90 },
"delete": { "daysAfterModificationGreaterThan": 2555 }
},
"snapshot": {
"delete": { "daysAfterCreationGreaterThan": 90 }
}
}
}
}
]
}
Rule filters
Filters limit rule actions to a subset of blobs within the storage account. If more than one filter is defined, a logical AND runs on all filters.
Filters include blobTypes (required), an array of predefined enum values such as blockBlob, and prefixMatch (optional), an array of strings for prefixes to be matched; each prefix string must start with a container name.
Rule actions
Actions are applied to the filtered blobs when the run condition is met.
Lifecycle management supports tiering and deletion of blobs and deletion of blob snapshots. Define at least one action for each rule on blobs or blob snapshots: tierToCool, tierToArchive, or delete for base blobs, and delete for snapshots.
Azure portal
There are two ways to add a policy through the Azure portal: Azure portal List view, and Azure portal
Code view.
3 https://portal.azure.com
3. Select Save.
PowerShell
The following PowerShell script can be used to add a policy to your storage account. The $rgname
variable must be initialized with your resource group name. The $accountName variable must be
initialized with your storage account name.
#Install the latest module if you are running this in a local PowerShell instance
Install-Module -Name Az -Repository PSGallery
#Initialize the following with your resource group and storage account names
$rgname = ""
$accountName = ""
#A sketch of the remaining steps; rule values are illustrative and the cmdlets
#below come from the Az.Storage module
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool -daysAfterModificationGreaterThan 30
$filter = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob
$rule = New-AzStorageAccountManagementPolicyRule -Name sample-rule -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName $rgname -StorageAccountName $accountName -Rule $rule
REST APIs
You can also create and manage lifecycle policies by using REST APIs. For information on the operations
see the Management Policies4 reference page.
4 https://docs.microsoft.com/en-us/rest/api/storagerp/managementpolicies
●● High priority (preview): The rehydration request will be prioritized over Standard requests and may
finish in under 1 hour. High priority may take longer than 1 hour, depending on blob size and current
demand. High priority requests are guaranteed to be prioritized over Standard priority requests.
Standard priority is the default rehydration option for archive. High priority is a faster option that will cost
more than Standard priority rehydration and is usually reserved for use in emergency data restoration
situations.
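As a hedged sketch using the Microsoft.Azure.Storage.Blob client library that appears later in this module (the container reference and blob name are illustrative), rehydration can be triggered from code by setting an archived blob's tier back to Hot:
// Rehydrate an archived blob by moving it back to the Hot tier. The blob
// stays in a rehydrate-pending state until the operation completes.
CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference("archived-data.txt");
await blob.SetStandardBlobTierAsync(StandardBlobTier.Hot);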
Additional resources
For more details on REST API operations covered here, see:
●● Set Blob Tier: https://docs.microsoft.com/en-us/rest/api/storageservices/set-blob-tier
●● Copy Blob: https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob
Setting up
Perform the following actions to prepare Azure, and your local environment, for the rest of the demo.
Write-Host "`nNote the following resource group, and storage account names, you will use them in
the code examples below.
Resource group: $myResourceGroup
Storage account: $myStorageAcct"
2. Use the following commands to switch to the newly created az204-blobdemo folder and build the app
to verify that all is well.
cd az204-blobdemo
dotnet build
3. While still in the application directory, install the Azure Blob Storage client library for .NET package by
using the dotnet add package command.
dotnet add package Microsoft.Azure.Storage.Blob
Note: Leave the console window open so you can use it to build and run the app later in the demo.
5 https://portal.azure.com/
4. Open the Program.cs file in your editor, and replace the contents with the following code. The code
uses the TryParse method to verify if the connection string can be parsed to create a CloudStor-
ageAccount object.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;
namespace az204_blobdemo
{
    class Program
    {
        public static void Main()
        {
            Console.WriteLine("Azure Blob Storage Demo\n");
            // Run the examples asynchronously, wait for the results before proceeding
            ProcessAsync().GetAwaiter().GetResult();
        }

        private static async Task ProcessAsync()
        {
            // Copy the connection string from the portal into the variable below
            string storageConnectionString = "PLACE CONNECTION STRING HERE";

            // Verify the connection string can be parsed
            if (CloudStorageAccount.TryParse(storageConnectionString, out CloudStorageAccount storageAccount))
            {
                // EXAMPLE CODE FOR THE REST OF THE DEMO GOES HERE
                await Task.CompletedTask;
            }
            else
            {
                Console.WriteLine("A valid connection string has not been " +
                    "defined in the storageConnectionString variable.");
            }
        }
    }
}
❗️ Important: The ProcessAsync method above contains directions on where to place the
example code for the demo.
5. Set the storageConnectionString variable to the value you copied from the portal.
6. Build and run the application to verify your connection string is valid by using the dotnet build
and dotnet run commands in your console window.
Create a container
To create the container, first create an instance of the CloudBlobClient object, which points to Blob
storage in your storage account. Next, create an instance of the CloudBlobContainer object, then
create the container. The code below calls the CreateAsync method to create the container. A GUID
value is appended to the container name to ensure that it is unique. In a production environment, it's
often preferable to use the CreateIfNotExistsAsync method to create a container only if it does not
already exist.
// Create a container called 'demoblobs' and
// append a GUID value to it to make the name unique.
CloudBlobContainer cloudBlobContainer =
cloudBlobClient.GetContainerReference("demoblobs" +
Guid.NewGuid().ToString());
await cloudBlobContainer.CreateAsync();
// Get a reference to the blob address, then upload the file to the blob.
// Use the value of localFileName for the blob name.
CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(localFileName);
await cloudBlockBlob.UploadFromFileAsync(sourceFile);
Download blobs
Download the blob created previously to your local file system by using the DownloadToFileAsync method. The example code adds a suffix of “_DOWNLOADED” to the blob name so that you can see both files in your local file system.
// Download the blob to a local file, using the reference created earlier.
// Append the string "_DOWNLOADED" before the .txt extension so that you
// can see both files in MyDocuments.
string destinationFile = sourceFile.Replace(".txt", "_DOWNLOADED.txt");
Console.WriteLine("Downloading blob to {0}", destinationFile);
await cloudBlockBlob.DownloadToFileAsync(destinationFile, FileMode.Create);
Delete a container
The following code cleans up the resources the app created by deleting the entire container using CloudBlobContainer.DeleteIfExistsAsync. You can also delete the local files if you like.
// Clean up the resources created by the app
Console.WriteLine("Press the 'Enter' key to delete the example files " +
"and example container.");
Console.ReadLine();
// Clean up resources. This includes the container and the two temp files.
Console.WriteLine("Deleting the container");
if (cloudBlobContainer != null)
{
await cloudBlobContainer.DeleteIfExistsAsync();
}
Console.WriteLine("Deleting the source, and downloaded files\r\n");
File.Delete(sourceFile);
File.Delete(destinationFile);
There are many prompts in the app to allow you to take the time to see what's happening in the portal
after each step.
Beginning with version 2009-09-19, metadata names must adhere to the naming rules for C# identifiers. Metadata names preserve the case with which they were created, but are case-insensitive when set or read. If two or more metadata headers with the same name are submitted for a resource, the Blob service returns status code 400 (Bad Request).
The metadata consists of name/value pairs. The total size of all metadata pairs can be up to 8 KB. Metadata name/value pairs are valid HTTP headers, and so they adhere to all restrictions governing HTTP headers.
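For example, when set through the REST interface, each metadata pair travels as a request header with the x-ms-meta- prefix; the docType value used later in this lesson would appear as the following header (illustrative):
x-ms-meta-docType: textDocuments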
Operations on Metadata
Metadata on a blob or container resource can be retrieved or set directly, without returning or altering
the content of the resource.
Note that metadata values can only be read or written in full; partial updates are not supported. Setting
metadata on a resource overwrites any existing metadata values for that resource.
To reference a specific blob container, you can use the GetContainerReference method of the
CloudBlobClient class:
CloudBlobContainer container = client.GetContainerReference("images");
After you have a reference to the container, you can ensure that the container exists. This will create the
container if it does not already exist in the Azure storage account:
container.CreateIfNotExists();
Retrieving properties
With a hydrated reference, you can perform actions such as fetching the properties and metadata of the container by using the FetchAttributesAsync method of the CloudBlobContainer class:
await container.FetchAttributesAsync();
After the method is invoked, the local variable is hydrated with values for various container metadata.
This metadata can be accessed by using the Properties property of the CloudBlobContainer class,
which is of type BlobContainerProperties:
container.Properties
This class has properties that describe the container, including (but not limited to) the following:
●● ETag: This is a standard HTTP header that gives a value that is unchanged unless a property of the container is changed. This value can be used to implement optimistic concurrency with the blob containers.
●● LastModified: This property indicates when the container was last modified.
●● PublicAccess: This property indicates the level of public access that is allowed on the container. Valid values include Blob, Container, Off, and Unknown.
●● HasImmutabilityPolicy: This property indicates whether the container has an immutability policy. An immutability policy will help ensure that blobs are stored for a minimum amount of retention time.
●● HasLegalHold: This property indicates whether the container has an active legal hold. A legal hold will help ensure that blobs remain unchanged until the hold is removed.
Setting properties
Using the existing CloudBlobContainer variable (named container), you can set and retrieve custom
metadata for the container instance. This metadata is hydrated when you call the FetchAttributes or
FetchAttributesAsync method on your blob or container to populate the Metadata collection.
The following code example sets metadata on a container. In this example, we use the collection's Add
method to set a metadata value:
container.Metadata.Add("docType", "textDocuments");
In the next example, we set the metadata value by using implicit key/value syntax:
container.Metadata["category"] = "guidance";
To persist the newly set metadata, you must call the SetMetadataAsync method of the CloudBlobCon-
tainer class:
await container.SetMetadataAsync();
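A minimal sketch of reading the values back, assuming the same container variable: call FetchAttributesAsync again and enumerate the Metadata collection like any dictionary:
// Re-fetch attributes, then list every metadata pair on the container
await container.FetchAttributesAsync();
foreach (var metadataItem in container.Metadata)
{
    Console.WriteLine($"Key: {metadataItem.Key}, Value: {metadataItem.Value}");
}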
Lab scenario
You're preparing to host a web application in Microsoft Azure that uses a combination of raster and
vector graphics. As a development group, your team has decided to store any multimedia content in
Azure Storage and manage it in an automated fashion by using C# code in Microsoft .NET. Before you
begin this significant milestone, you have decided to take some time to learn the newest version of the .NET SDK that's used to access Storage by creating a simple application to manage and enumerate blobs and containers.
Objectives
After you complete this lab, you will be able to:
●● Create containers and upload blobs by using the Azure portal.
●● Enumerate blobs and containers by using the Microsoft Azure Storage SDK for .NET.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
There are three access tiers for block blob data: hot, cool, and archive. New storage accounts are created in which tier by default?
Hot
Cool
Archive
Review Question 3
Which of the following classes provides a point of access to the blob service in your code?
CloudBlockBlob
CloudStorageAccount
CloudBlobClient
CloudBlobContainer
Review Question 4
True or false, all data written to Azure Storage is automatically encrypted using SSL.
True
False
Review Question 5
Which of the following redundancy options will protect your data in the event of a region-wide outage?
Locally redundant storage (LRS)
Read-access geo-redundant storage (RA-GRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS)
Answers
Review Question 1
Which of the following types of blobs are designed to store text and binary data?
Page blob
■■ Block blob
Append blob
Data blob
Explanation
Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data that
can be managed individually
Review Question 2
There are three access tiers for block blob data: hot, cool, and archive. New storage accounts are created in which tier by default?
■■ Hot
Cool
Archive
Explanation
The Hot access tier, which is optimized for frequent access of objects in the storage account. New storage
accounts are created in the hot tier by default
Review Question 3
Which of the following classes provides a point of access to the blob service in your code?
CloudBlockBlob
CloudStorageAccount
■■ CloudBlobClient
CloudBlobContainer
Explanation
The CloudBlobClient class provides a point of access to the blob service in your code.
Review Question 4
True or false, all data written to Azure Storage is automatically encrypted using SSL.
True
■■ False
Explanation
All data (including metadata) written to Azure Storage is automatically encrypted using Storage Service
Encryption (SSE).
Review Question 5
Which of the following redundancy options will protect your data in the event of a region-wide outage?
Locally redundant storage (LRS)
■■ Read-access geo-redundant storage (RA-GRS)
Zone-redundant storage (ZRS)
■■ Geo-redundant storage (GRS)
Explanation
GRS and RA-GRS will protect your data in case of a region-wide outage. LRS provides protection at the node
level within a data center. ZRS provides protection at the data center level (zonal or non-zonal).
Module 4 Develop solutions that use Cosmos DB storage
With Azure Cosmos DB, developers can choose from five well-defined consistency models on the consistency spectrum. From strongest to most relaxed, the models include strong, bounded staleness, session, consistent prefix, and eventual consistency. The models are well-defined and intuitive and can be used for specific real-world scenarios. Each model provides availability and performance tradeoffs and is backed by SLAs. The consistency levels form a spectrum, from strong at one end to eventual at the other.
The consistency levels are region-agnostic and are guaranteed for all operations regardless of the region
from which the reads and writes are served, the number of regions associated with your Azure Cosmos
account, or whether your account is configured with a single or multiple write regions.
Read consistency applies to a single read operation scoped within a partition-key range or a logical
partition. The read operation can be issued by a remote client or a stored procedure.
1 https://github.com/Azure/azure-cosmos-tla
●● Consistent prefix: Updates that are returned contain some prefix of all the updates, with no gaps.
Consistent prefix consistency level guarantees that reads never see out-of-order writes.
●● Eventual: There's no ordering guarantee for reads. In the absence of any further writes, the replicas
eventually converge.
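As a brief illustration, assuming the .NET Microsoft.Azure.Cosmos v3 SDK (the endpoint and key values are placeholders), a client can request a weaker consistency level than the account default, trading ordering guarantees for latency:
using Microsoft.Azure.Cosmos;

// Relax this client's consistency to Eventual; other clients still get the
// account default. A client can weaken, but never strengthen, the account's
// configured consistency level.
CosmosClient client = new CosmosClient("<endpoint>", "<key>",
    new CosmosClientOptions { ConsistencyLevel = ConsistencyLevel.Eventual });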
2 https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-across-apis#cassandra-mapping
3 https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-across-apis#mongo-mapping
Azure Cosmos DB automatically indexes data without requiring you to deal with schema and index management. It is multi-model and supports document, key-value, graph, and columnar data models. By default, you can interact with Cosmos DB using the SQL API. Additionally, the Cosmos DB service implements wire protocols for common NoSQL APIs including Cassandra, MongoDB, Gremlin, and Azure Table Storage. This allows you to use your familiar NoSQL client drivers and tools to interact with your Cosmos database.
MongoDB API
By default, new accounts created using Azure Cosmos DB's API for MongoDB are compatible with version 3.6 of the MongoDB wire protocol. Any MongoDB client driver that understands this protocol version should be able to natively connect to Cosmos DB.
Table API
The Table API in Azure Cosmos DB is a key-value database service built to provide premium capabilities
(for example, automatic indexing, guaranteed low latency, and global distribution) to existing Azure Table
storage applications without making any app changes.
Gremlin API
The Azure Cosmos DB Gremlin API is used to store and operate with graph data on a fully managed database service designed for any scale. Some of the features offered by the Gremlin API are: elastically scalable throughput and storage, multi-region replication, fast queries and traversals, fully managed graph database, automatic indexing, compatibility with Apache TinkerPop, and tunable consistency levels.
SQL API
The SQL API in Azure Cosmos DB is a JavaScript and JavaScript Object Notation (JSON) native API based
on the Azure Cosmos DB database engine. The SQL API also provides query capabilities rooted in the
familiar SQL query language. Using SQL, you can query for documents based on their identifiers or make
deeper queries based on properties of the document, complex objects, or even the existence of specific
properties. The SQL API supports the execution of JavaScript logic within the database in the form of
stored procedures, triggers, and user-defined functions.
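For instance, a simple query in the familiar SQL syntax (the property name is illustrative) might look like this:
SELECT * FROM c WHERE c.City = "London"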
You provision the number of RUs for your application on a per-second basis in increments of 100 RUs per
second. To scale the provisioned throughput for your application, you can increase or decrease the
number of RUs at any time.
You can provision throughput at two distinct granularities:
●● Containers
●● Databases
The Azure Cosmos database entity maps to the following API-specific entities:
●● SQL API: Database
●● Cassandra API: Keyspace
●● Azure Cosmos DB API for MongoDB: Database
●● Gremlin API: Database
●● Table API: NA
✔️ Note: With Table API accounts, when you create your first table, a default database is automatically created in your Azure Cosmos account.
The following operations on databases are supported through the Azure CLI, the SQL API, the Cassandra API (where the database is mapped to a keyspace), and the Azure Cosmos DB API for MongoDB; they are not applicable (NA) to the Gremlin API and the Table API:
●● Read database
●● Create new database
●● Update database
The Azure Cosmos container entity maps to the following API-specific entities:
●● SQL API: Container
●● Cassandra API: Table
●● Azure Cosmos DB API for MongoDB: Collection
●● Gremlin API: Graph
●● Table API: Table
Deleting a container is supported through the Azure CLI, the SQL API, the Cassandra API, and the Azure Cosmos DB API for MongoDB; it is not applicable (NA) to the Gremlin API and the Table API.
The Azure Cosmos item entity maps to the following API-specific entities:
●● SQL API: Document
●● Cassandra API: Row
●● Azure Cosmos DB API for MongoDB: Document
●● Gremlin API: Node or edge
●● Table API: Item
Operations on items
Azure Cosmos items support the following operations. You can use any of the Azure Cosmos APIs to
perform the operations.
The Insert, Replace, Delete, Upsert, and Read operations are supported through the SQL API, the Cassandra API, the Azure Cosmos DB API for MongoDB, the Gremlin API, and the Table API, but not through the Azure CLI.
Logical partitions
A logical partition consists of a set of items that have the same partition key. For example, in a container
where all items contain a City property, you can use City as the partition key for the container. Groups
of items that have specific values for City, such as London, Paris, and NYC, form distinct logical
partitions. You don't have to delete a logical partition when the underlying data is deleted.
In Azure Cosmos DB, a container is the fundamental unit of scalability. Data that's added to the container
and the throughput that you provision on the container are automatically (horizontally) partitioned across
a set of logical partitions. Data and throughput are partitioned based on the partition key you specify for
the Azure Cosmos container.
Physical partitions
An Azure Cosmos container is scaled by distributing data and throughput across a large number of
logical partitions. Internally, one or more logical partitions are mapped to a physical partition that
consists of a set of replicas, also referred to as a replica set. Each replica set hosts an instance of the Azure
Cosmos database engine. A replica set makes the data stored within the physical partition durable, highly
available, and consistent. A physical partition supports a fixed maximum amount of storage and request
units (RUs). Each replica that makes up the physical partition inherits the partition's storage quota. All
replicas of a physical partition collectively support the throughput that's allocated to the physical parti-
tion.
The following image shows how logical partitions are mapped to physical partitions that are distributed
globally:
Throughput provisioned for a container is divided evenly among its physical partitions. A partition key
design that doesn't distribute requests evenly might create “hot” partitions. Hot partitions can result in
rate limiting, inefficient use of the provisioned throughput, and higher costs.
Unlike logical partitions, physical partitions are an internal implementation of the system. You can't
control the size, placement, or count of physical partitions, and you can't control the mapping between
logical partitions and physical partitions. However, you can control the number of logical partitions and
the distribution of data, workload and throughput by choosing the right logical partition key.
●● Azure Cosmos containers have a minimum throughput of 400 request units per second (RU/s). When
throughput is provisioned on a database, minimum RUs per container is 100 request units per second
(RU/s). Requests to the same partition key can't exceed the throughput that's allocated to a partition.
If requests exceed the allocated throughput, requests are rate-limited. So, it's important to pick a
partition key that doesn't result in “hot spots” within your application.
●● Choose a partition key that has a wide range of values and access patterns that are evenly spread
across logical partitions. This helps spread the data and the activity in your container across the set of
logical partitions, so that resources for data storage and throughput can be distributed across the
logical partitions.
●● Choose a partition key that spreads the workload evenly across all partitions and evenly over time.
Your choice of partition key should balance the need for efficient partition queries and transactions
against the goal of distributing items across multiple partitions to achieve scalability.
●● Candidates for partition keys might include properties that appear frequently as a filter in your
queries. Queries can be efficiently routed by including the partition key in the filter predicate.
One option is to set /deviceId or /date as the partition key. Another option is to concatenate these
two values into a synthetic partitionKey property that's used as the partition key.
{
"deviceId": "abc-123",
"date": 2018,
"partitionKey": "abc-123-2018"
}
In real-world scenarios, you can have thousands of items in a database. Instead of adding the synthetic
key manually, define client-side logic to concatenate the values and insert the synthetic key into the items
in your Cosmos containers.
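A minimal sketch of that client-side logic, using a hypothetical DeviceReading item type (the class and property names are illustrative, not part of the course code):
public class DeviceReading
{
    public string Id { get; set; }
    public string DeviceId { get; set; }
    public int Date { get; set; }

    // Synthetic partition key: concatenate deviceId and date before inserting the item
    public string PartitionKey => $"{DeviceId}-{Date}";
}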
Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Enumerate all databases | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Read database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Create new database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Update database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Azure Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Azure Cosmos item | Document | Row | Document | Node or edge | Item
Prerequisites
This demo is performed in the Azure Portal.
Log in to Azure
1. Log in to the Azure portal: https://portal.azure.com
2. Select Add.
3. On the Create Azure Cosmos DB Account page, enter the basic settings for the new Azure Cosmos
account.
●● Subscription: Select the subscription for your Azure Pass.
●● Resource Group: Select Create new, then enter az204-cosmos-rg.
●● Account Name: Enter a unique name to identify your Azure Cosmos
account. The name can only contain lowercase letters, numbers, and the hyphen (-) character. It
must be between 3-31 characters in length.
●● API: Select Core (SQL) to create a document database and query by using SQL syntax.
●● Location: Use the location that is closest to your users to give them the fastest access to the data.
4. Select Review + create. You can skip the Network and Tags sections.
5. Review the account settings, and then select Create. It takes a few minutes to create the account. Wait
for the portal page to display Your deployment is complete.
6. Select Go to resource to go to the Azure Cosmos DB account page.
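1. Select New Document.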
2. Add the following structure to the document on the right side of the Documents pane:
{
"id": "1",
"category": "personal",
"name": "groceries",
"description": "Pick up apples and strawberries.",
"isComplete": false
MCT USE ONLY. STUDENT USE PROHIBITED 130 Module 4 Develop solutions that use Cosmos DB storage
3. Select Save.
4. Select New Document again, and create and save another document with a unique id, and any other
properties and values you want. Your documents can have any structure, because Azure Cosmos DB
doesn't impose any schema on your data.
❗️ Important: Don't delete the Azure Cosmos DB account or the az204-cosmos-rg resource group just
yet, we'll use it for another demo later in this lesson.
CosmosClient
Creates a new CosmosClient with a connection string. CosmosClient is thread-safe. It's recommended
to maintain a single instance of CosmosClient for the lifetime of the application, which enables efficient
connection management and performance.
CosmosClient client = new CosmosClient(endpoint, key);
Database examples
Create a database
The CosmosClient.CreateDatabaseIfNotExistsAsync method checks whether a database exists
and, if it doesn't, creates it. Only the database id is used to verify whether there is an existing database.
// An object containing relevant information about the response
DatabaseResponse databaseResponse = await client.CreateDatabaseIfNotExistsAsync(databaseId, 10000);
// A client side reference object that allows additional operations like ReadAsync
Database database = databaseResponse;
Read a database by ID
Reads a database from the Azure Cosmos service as an asynchronous operation.
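For example, using the Database reference obtained when the database was created:
DatabaseResponse readResponse = await database.ReadAsync();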
Delete a database
Delete a Database as an asynchronous operation.
await database.DeleteAsync();
Container examples
Create a container
The Database.CreateContainerIfNotExistsAsync method checks if a container exists, and if it
doesn't, it creates it. Only the container id is used to verify if there is an existing container.
// Set throughput to the minimum value of 400 RU/s
ContainerResponse simpleContainer = await database.CreateContainerIfNotExistsAsync(
id: containerId,
partitionKeyPath: partitionKey,
throughput: 400);
Get a container by ID
Container container = database.GetContainer(containerId);
ContainerProperties containerProperties = await container.ReadContainerAsync();
Delete a container
Delete a Container as an asynchronous operation.
await database.GetContainer(containerId).DeleteContainerAsync();
Item examples
Create an item
Use the Container.CreateItemAsync method to create an item. The method requires a JSON
serializable object that must contain an id property, and a partitionKey.
ItemResponse<SalesOrder> response = await container.CreateItemAsync(salesOrder, new PartitionKey(salesOrder.AccountNumber));
Read an item
Use the Container.ReadItemAsync method to read an item. The method requires the type to
deserialize the item to, along with an id and a partitionKey.
string id = "[id]";
string accountNumber = "[partition-key]";
ItemResponse<SalesOrder> response = await container.ReadItemAsync<SalesOrder>(id, new PartitionKey(accountNumber));
Query an item
The Container.GetItemQueryIterator method creates a query for items under a container in an
Azure Cosmos database using a SQL statement with parameterized values. It returns a FeedIterator.
QueryDefinition query = new QueryDefinition(
"select * from sales s where s.AccountNumber = @AccountInput ")
.WithParameter("@AccountInput", "Account1");
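To execute the query, pass the QueryDefinition to GetItemQueryIterator and drain the returned FeedIterator. The sketch below assumes the SalesOrder type exposes an Id property:
FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(query);
while (resultSet.HasMoreResults)
{
    // Each ReadNextAsync call retrieves the next page of results
    FeedResponse<SalesOrder> response = await resultSet.ReadNextAsync();
    foreach (SalesOrder order in response)
    {
        Console.WriteLine(order.Id);
    }
}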
Additional resources
●● The azure-cosmos-dotnet-v3 GitHub repository includes the latest .NET sample solutions that
perform CRUD and other common operations on Azure Cosmos DB resources.
●● Visit the article Azure Cosmos DB .NET V3 SDK (Microsoft.Azure.Cosmos) examples for the SQL
API for direct links to specific examples in the GitHub repository.
Prerequisites
This demo is performed in Visual Studio Code on the virtual machine.
4 https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage
5 https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-dotnet-v3sdk-samples
cd az204-cosmosdemo
2. In Program.cs, replace <your endpoint URL> with the value of URI. Replace <your primary
key> with the value of PRIMARY KEY. You get these values from the browser window you left open
above.
3. Below the Main method, add a new asynchronous task called CosmosDemoAsync, which instantiates
our new CosmosClient.
public async Task CosmosDemoAsync()
{
// Create a new instance of the Cosmos Client
this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
}
4. Add the following code to the Main method to run the CosmosDemoAsync asynchronous task. The
Main method catches exceptions and writes them to the console.
public static async Task Main(string[] args)
{
try
{
Console.WriteLine("Beginning operations...\n");
Program p = new Program();
await p.CosmosDemoAsync();
}
catch (CosmosException de)
{
Exception baseException = de.GetBaseException();
Console.WriteLine("{0} error occurred: {1}", de.StatusCode, de);
}
catch (Exception e)
{
Console.WriteLine("Error: {0}", e);
}
finally
{
Console.WriteLine("End of demo, press any key to exit.");
Console.ReadKey();
}
}
5. Save your work and, in a terminal in VS Code, run the dotnet run command.
The console displays the message: End of demo, press any key to exit. This message confirms that
your application made a connection to Azure Cosmos DB.
Create a database
1. Copy and paste the CreateDatabaseAsync method below your CosmosDemoAsync method.
CreateDatabaseAsync creates a new database named az204Database, using the ID specified in the
databaseId field, if it doesn't already exist.
private async Task CreateDatabaseAsync()
{
// Create a new database
this.database = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
Console.WriteLine("Created Database: {0}\n", this.database.Id);
}
2. Copy and paste the code below where you instantiate the CosmosClient to call the CreateDataba-
seAsync method you just added.
// Runs the CreateDatabaseAsync method
await this.CreateDatabaseAsync();
3. Save your work and, in a terminal in VS Code, run the dotnet run command. The console displays
the message: Created Database: az204Database
Create a container
1. Copy and paste the CreateContainerAsync method below your CreateDatabaseAsync
method.
private async Task CreateContainerAsync()
{
// Create a new container
this.container = await this.database.CreateContainerIfNotExistsAsync(containerId, "/LastName");
Console.WriteLine("Created Container: {0}\n", this.container.Id);
}
2. Copy and paste the code below where you instantiated the CosmosClient to call the
CreateContainerAsync method you just added.
// Run the CreateContainerAsync method
await this.CreateContainerAsync();
3. Save your work and, in a terminal in VS Code, run the dotnet run command. The console displays
the message: Created Container: az204Container
✔️ Note: You can verify the results by returning to your browser and selecting Browse in the Con-
tainers section in the left navigation. You may need to select Refresh.
Wrapping up
You can now safely delete the az204-cosmos-rg resource group from your account.
Lab scenario
You have been assigned the task of updating your company’s existing retail web application to use more
than one data service in Microsoft Azure. Your company’s goal is to take advantage of the best data
service for each application component. After conducting thorough research, you decide to migrate your
inventory database from Azure SQL Database to Azure Cosmos DB.
Objectives
After you complete this lab, you will be able to:
●● Create instances of various database services by using the Azure portal.
●● Write C# code to connect to SQL Database.
●● Write C# code to connect to Azure Cosmos DB.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
The cost of all database operations is abstracted and normalized by Azure Cosmos DB and is expressed by
which of the options below?
Input/Output Operations Per Second (IOPS)
CPU usage
Request Units (RU)
Read requests
Review Question 3
Azure Cosmos DB allows developers to choose among the five well-defined consistency models: strong,
bounded staleness, session, consistent prefix, and eventual. Which of the statements below describing
these models are true?
(Select all that apply.)
When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a latest committed value.
When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a previous write.
When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the latest committed value of the write operation.
When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the previous committed value of the write operation.
Review Question 4
Partition keys should have many distinct values. If such a property doesn’t exist in your data, you can
construct a synthetic partition key. Which of the following methods below create a synthetic key?
Partition key with random suffix
Partition key with pre-calculated suffixes
Concatenate multiple properties of an item
None of the above
Answers
Review Question 1
Which of the below options would contain triggers and stored procedures in the Azure Cosmos DB
hierarchy?
Database Accounts
Databases
■■ Containers
Items
Explanation
Stored procedures, user-defined functions, triggers, etc., are stored at the container level.
Review Question 2
The cost of all database operations is abstracted and normalized by Azure Cosmos DB and is expressed
by which of the options below?
Input/Output Operations Per Second (IOPS)
CPU usage
■■ Request Units (RU)
Read requests
Explanation
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units
(or RUs, for short).
Review Question 3
Azure Cosmos DB allows developers to choose among the five well-defined consistency models: strong,
bounded staleness, session, consistent prefix, and eventual. Which of the statements below describing
these models are true?
(Select all that apply.)
When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a latest committed value.
■■ When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a previous write.
■■ When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the latest committed value of the write operation.
When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the previous committed value of the write operation.
Explanation
With bounded staleness, clients are guaranteed to read the value of a previous write, within the configured
staleness window. With strong consistency, the staleness window is equivalent to zero, so clients are
guaranteed to read the latest committed value of the write operation.
Review Question 4
Partition keys should have many distinct values. If such a property doesn’t exist in your data, you can
construct a synthetic partition key. Which of the following methods below create a synthetic key?
Partition key with random suffix
Partition key with pre-calculated suffixes
■■ Concatenate multiple properties of an item
None of the above
Explanation
You can form a partition key by concatenating multiple property values into a single artificial partitionKey
property. These keys are referred to as synthetic keys.
Module 5 Implement IaaS solutions
compliance requirements. IaaS is useful for handling unpredictable demand and steadily growing
storage needs. It can also simplify planning and management of backup and recovery systems.
●● Web apps. IaaS provides all the infrastructure to support web apps, including storage, web and
application servers, and networking resources. Organizations can quickly deploy web apps on IaaS
and easily scale infrastructure up and down when demand for the apps is unpredictable.
●● High-performance computing. High-performance computing (HPC) on supercomputers, computer
grids, or computer clusters helps solve complex problems involving millions of variables or calcula-
tions. Examples include earthquake and protein folding simulations, climate and weather predictions,
financial modeling, and evaluating product designs.
●● Big data analysis. Big data is a popular term for massive data sets that contain potentially valuable
patterns, trends, and associations. Mining data sets to locate or tease out these hidden patterns
requires a huge amount of processing power, which IaaS economically provides.
●● Extended Datacenter. Add capacity to your datacenter by adding virtual machines in Azure instead
of incurring the costs of physically adding hardware or space to your physical location. Connect your
physical network to the Azure cloud network seamlessly.
Naming
A virtual machine has a name assigned to it, and it has a computer name configured as part of the
operating system. The name of a VM can be up to 15 characters.
If you use Azure to create the operating system disk, the computer name and the virtual machine name
are the same. If you upload and use your own image that contains a previously configured operating
system and use it to create a virtual machine, the names can be different. We recommend that when you
upload your own image file, you make the computer name in the operating system and the virtual
machine name the same.
Locations
All resources created in Azure are distributed across multiple geographical regions around the world.
Usually, the region is called location when you create a VM. For a VM, the location specifies where the
virtual hard disks are stored.
This table shows some of the ways you can get a list of available locations.
Method | Description
Azure portal | Select a location from the list when you create a VM.
Azure PowerShell | Use the Get-AzLocation command.
REST API | Use the List locations operation.
Azure CLI | Use the az account list-locations operation.
VM size
The size of the VM that you use is determined by the workload that you want to run. The size that you
choose then determines factors such as processing power, memory, and storage capacity. Azure offers a
wide variety of sizes to support many types of uses.
Azure charges an hourly price based on the VM’s size and operating system. For partial hours, Azure
charges only for the minutes used. Storage is priced and charged separately.
VM Limits
Your subscription has default quota limits in place that could impact the deployment of many VMs for
your project. The current limit on a per-subscription basis is 20 VMs per region. Limits can be raised by
filing a support ticket requesting an increase.
Extensions
VM extensions give your VM additional capabilities through post deployment configuration and auto-
mated tasks.
These common tasks can be accomplished using extensions:
●● Run custom scripts – The Custom Script Extension helps you configure workloads on the VM by
running your script when the VM is provisioned.
●● Deploy and manage configurations – The PowerShell Desired State Configuration (DSC) Extension
helps you set up DSC on a VM to manage configurations and environments.
●● Collect diagnostics data – The Azure Diagnostics Extension helps you configure the VM to collect
diagnostics data that can be used to monitor the health of your application.
1 https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines
Related resources
The resources in this table are used by the VM and need to exist or be created when the VM is created.
2 https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
An availability set is composed of two additional groupings that protect against hardware failures and
allow updates to safely be applied - fault domains (FDs) and update domains (UDs). You can read more
about how to manage the availability of Linux VMs or Windows VMs.
Fault domains
A fault domain is a logical group of underlying hardware that share a common power source and net-
work switch, similar to a rack within an on-premises datacenter. As you create VMs within an availability
set, the Azure platform automatically distributes your VMs across these fault domains. This approach
limits the impact of potential physical hardware failures, network outages, or power interruptions.
Update domains
An update domain is a logical group of underlying hardware that can undergo maintenance or be
rebooted at the same time. As you create VMs within an availability set, the Azure platform automatically
distributes your VMs across these update domains. This approach ensures that at least one instance of
your application always remains running as the Azure platform undergoes periodic maintenance. The
order of update domains being rebooted may not proceed sequentially during planned maintenance, but
only one update domain is rebooted at a time.
Availability zones
Availability zones, an alternative to availability sets, expand the level of control you have to maintain the
availability of the applications and data on your VMs. An Availability Zone is a physically separate zone
within an Azure region. There are three Availability Zones per supported Azure region. Each Availability
Zone has a distinct power source, network, and cooling. By architecting your solutions to use replicated
VMs in zones, you can protect your apps and data from the loss of a datacenter. If one zone is compro-
mised, then replicated apps and data are instantly available in another zone.
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources. To
do so, select the resource group for the virtual machine, select Delete, then confirm the name of the
resource group to delete.
# New-AzVm expects a PSCredential object; build one from the admin name and
# password ($vmPassword must be a SecureString)
$cred = New-Object System.Management.Automation.PSCredential ($vmAdmin, $vmPassword)
New-AzVm `
    -ResourceGroupName $myResourceGroup `
    -Name $myVM `
    -Location $myLocation `
    -Credential $cred
3 https://shell.azure.com
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources.
Replace <myResourceGroup> with the name you used earlier, or you can delete it through the portal.
Remove-AzResourceGroup -Name "<myResourceGroup>"
Terminology
If you're new to Azure Resource Manager, there are some terms you might not be familiar with.
●● Resource - A manageable item that is available through Azure. Some common resources are a virtual
machine, storage account, web app, database, and virtual network, but there are many more.
●● Resource group - A container that holds related resources for an Azure solution. The resource group
can include all the resources for the solution, or only those resources that you want to manage as a
group. You decide how you want to allocate resources to resource groups based on what makes the
most sense for your organization.
●● Resource provider - A service that supplies the resources you can deploy and manage through
Resource Manager. Each resource provider offers operations for working with the resources that are
deployed. Some common resource providers are Microsoft.Compute, which supplies the virtual
machine resource, Microsoft.Storage, which supplies the storage account resource, and Microsoft.
Web, which supplies resources related to web apps.
●● Resource Manager template - A JavaScript Object Notation (JSON) file that defines one or more
resources to deploy to a resource group. It also defines the dependencies between the deployed
resources. The template can be used to deploy the resources consistently and repeatedly.
●● Declarative syntax - Syntax that lets you state “Here is what I intend to create” without having to
write the sequence of programming commands to create it. The Resource Manager template is an
example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to
Azure.
Understand scope
Azure provides four levels of management scope: management groups, subscriptions, resource groups,
and resources. The following image shows an example of these layers.
You apply management settings at any of these levels of scope. The level you select determines how
widely the setting is applied. Lower levels inherit settings from higher levels. For example, when you apply
a policy to the subscription, the policy is applied to all resource groups and resources in your subscription.
When you apply a policy to a resource group, that policy is applied to the resource group and all its
resources. However, another resource group doesn't have that policy assignment.
You can deploy templates to management groups, subscriptions, or resource groups.
4 https://docs.microsoft.com/en-us/azure/templates/
"kind": "Storage",
"properties": {
}
}
]
Resource Manager converts the definition to the following REST API operation, which is sent to the
Microsoft.Storage resource provider:
PUT
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/
providers/Microsoft.Storage/storageAccounts/mystorageaccount?api-version=2016-01-01
REQUEST BODY
{
  "location": "westus",
  "properties": {
  },
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "Storage"
}
But, you don't have to define your entire infrastructure in a single template. Often, it makes sense to
divide your deployment requirements into a set of targeted, purpose-specific templates. You can easily
reuse these templates for different solutions. To deploy a particular solution, you create a master
template that links all the required templates. The following image shows how to deploy a three-tier solution
through a parent template that includes three nested templates.
If you envision your tiers having separate lifecycles, you can deploy your three tiers to separate resource
groups. The resources can still be linked to resources in other resource groups.
Azure Resource Manager analyzes dependencies to ensure resources are created in the correct order. If
one resource relies on a value from another resource (such as a virtual machine needing a storage
account for disks), you set a dependency. For more information, see Defining dependencies in Azure
Resource Manager templates.
You can also use the template for updates to the infrastructure. For example, you can add a resource to
your solution and add configuration rules for the resources that are already deployed. If the template
specifies creating a resource but that resource already exists, Azure Resource Manager performs an
update instead of creating a new asset. Azure Resource Manager updates the existing resource to the
state it would have if it were newly deployed.
Resource Manager provides extensions for scenarios when you need additional operations such as
installing particular software that isn't included in the setup. If you're already using a configuration
management service, like DSC, Chef or Puppet, you can continue working with that service by using
extensions.
Finally, the template becomes part of the source code for your app. You can check it in to your source
code repository and update it as your app evolves. You can edit the template through Visual Studio.
{
"condition": "[equals(parameters('newOrExisting'),'new')]",
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"apiVersion": "2017-06-01",
"location": "[parameters('location')]",
"sku": {
"name": "[variables('storageAccountType')]"
},
"kind": "Storage",
"properties": {}
}
When the parameter newOrExisting is set to new, the condition evaluates to true. The storage account is
deployed. However, when newOrExisting is set to existing, the condition evaluates to false and the
storage account isn't deployed.
Runtime functions
If you use a reference or list function with a resource that is conditionally deployed, the function is
evaluated even if the resource isn't deployed. You get an error if the function refers to a resource that
doesn't exist.
Use the if function to make sure the function is only evaluated for conditions when the resource is
deployed.
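For example, instead of referencing the conditional storage account directly in an output, you can write "[if(equals(parameters('newOrExisting'), 'new'), reference(variables('storageAccountName')).primaryEndpoints.blob, '')]", so the reference function is evaluated only when the account is actually deployed.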
Additional resources
●● Azure Resource Manager template functions
●● https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-tem-
plate-functions
Complete mode
In complete mode, Resource Manager deletes resources that exist in the resource group but aren't
specified in the template.
If your template includes a resource that isn't deployed because condition evaluates to false, the result
depends on which REST API version you use to deploy the template. If you use a version earlier than
2019-05-10, the resource isn't deleted. With 2019-05-10 or later, the resource is deleted. The latest
versions of Azure PowerShell and Azure CLI delete the resource.
Be careful using complete mode with copy loops. Any resources that aren't specified in the template
after resolving the copy loop are deleted.
Incremental mode
In incremental mode, Resource Manager leaves unchanged resources that exist in the resource group
but aren't specified in the template.
However, when redeploying an existing resource in incremental mode, the outcome is different. Specify
all properties for the resource, not just the ones you're updating. A common misunderstanding is to think
properties that aren't specified are left unchanged. If you don't specify certain properties, Resource
Manager interprets the update as overwriting those values.
Example result
To illustrate the difference between incremental and complete modes, consider the following table.
To set the deployment mode when deploying with Azure CLI, use the mode parameter.
az group deployment create \
--name ExampleDeployment \
--mode Complete \
--resource-group ExampleGroup \
--template-file storage.json \
--parameters storageAccountType=Standard_GRS
Azure requires that each Azure service has a unique name. The deployment fails if you enter a storage
account name that already exists. To avoid this issue, you can use the template function
uniqueString() to generate a unique storage account name.
1. In the Azure portal, select Create a resource.
2. In Search the Marketplace, type template deployment, and then press ENTER.
3. Select Template deployment (deploy using custom templates).
4. Select Create.
5. Select Build your own template to open the editor.
6. Select Load file, and then select the template.json file you downloaded in the last section.
7. Make the following three changes to the template:
●● Remove the storageAccountName parameter from the parameters element.
●● Add one variable called storageAccountName as shown below to the variables element. The
example below will generate a unique Storage Account name:
"storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"
●● Update the name element of the Microsoft.Storage/storageAccounts resource to use the newly
defined variable instead of the parameter:
"name": "[variables('storageAccountName')]",
8. Select Save.
9. In the BASICS section of the form that appears select the resource group you created in the last
section.
10. In the SETTINGS section of the form enter the values from the parameters you wrote down in Step 8
of the previous section. Here is a screenshot of a sample deployment:
11. Accept the terms and conditions and then select Purchase.
12. Select the bell icon (notifications) from the top of the screen to see the deployment status. Wait until
the deployment is completed.
13. Select Go to resource group from the notification pane. You can see the deployment status was
successful, and there is only one storage account in the resource group. The storage account name is
a unique string generated by the template.
Clean up resources
When the Azure resources are no longer needed, clean up the resources you deployed by deleting the
resource group.
You can create Resource Manager templates in Visual Studio Code without the extension, but the
extension provides autocomplete options that simplify template development.
It's often easier, and better, to begin building your ARM template based on one of the existing Quickstart
templates available on the Azure Quickstart Templates site.
This demo is based on the Create a standard storage account template.
Prerequisites
You will need:
●● Visual Studio Code. You can download a copy here: https://code.visualstudio.com/.
●● Resource Manager Tools extension.
Follow these steps to install the Resource Manager Tools extension:
1. Open Visual Studio Code.
2. Press CTRL+SHIFT+X to open the Extensions pane.
3. Search for Azure Resource Manager Tools, and then select Install.
4. Select Reload to finish the extension installation.
5 https://azure.microsoft.com/resources/templates/
6 https://azure.microsoft.com/resources/templates/101-storage-account-create/
},
"storageUri": {
"type": "string",
"value": "[reference(variables('storageAccountName')).primaryEndpoints.blob]"
}
}
If you copied and pasted the code inside Visual Studio Code, try to retype the value element to
experience the IntelliSense capability of the Resource Manager Tools extension.
Select the file you saved in the previous section. The default name is azuredeploy.json. The template
file must be accessible from the shell. You can use the ls command and the cat command to verify the
file was uploaded successfully.
4. From the Cloud shell, run the following commands.
$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
7 https://shell.azure.com/
Update the template file name if you saved the file with a name other than azuredeploy.json.
The following screenshot shows a sample deployment:
The storage account name and the storage URL in the outputs section are highlighted on the screen-
shot.
Clean up resources
When the Azure resources are no longer needed, clean up the resources you deployed by deleting the
resource group.
Docker
Docker is a containerization platform used to develop, ship, and run containers. Docker doesn't use a
hypervisor, and you can run Docker on your desktop or laptop if you're developing and testing applica-
tions. The desktop version of Docker supports Linux, Windows, and macOS. For production systems,
Docker is available for server environments, including many variants of Linux and Microsoft Windows
Server 2016 and above.
The Docker platform consists of several components that we use to build, run, and manage our contain-
erized applications.
Docker Engine
The Docker Engine consists of several components configured as a client-server implementation where
the client and server run simultaneously on the same host. The client communicates with the server using
a REST API, which allows the client to also communicate with a remote server instance.
There are several objects that you'll create and configure to support your container deployments. These
include networks, storage volumes, plugins, and other service objects. We won't cover all of these objects
here, but it's good to keep in mind that these objects are items that we can create and deploy as needed.
Docker Hub
Docker Hub is a Software-as-a-Service (SaaS) Docker container registry. Docker registries are repositories
that we use to store and distribute the container images we create. Docker Hub is the default public
registry Docker uses for image management.
Keep in mind that you can create and use a private Docker registry or use one of the many cloud provider
options available. For example, you can use Azure Container Registry to store Docker containers to use in
several Azure container enabled services.
Container images
A container image is a portable package that contains software. It's this image that, when run, becomes
our container. The container is the in-memory instance of an image.
A container image is immutable. Once you've built an image, the image can't be changed. The only way
to change an image is to create a new image. This feature is our guarantee that the image we use in
production is the same image used in development and QA.
Host OS
The host OS is the OS on which the Docker engine runs. Docker containers running on Linux share the
host OS kernel and don't require a container OS as long as the binary can access the OS kernel directly.
However, Windows containers need a container OS. The container depends on the OS kernel to manage
services such as the file system, network management, process scheduling, and memory management.
Container OS
The container OS is the OS that is part of the packaged image. We have the flexibility to include different
versions of Linux or Windows OSs in a container. This flexibility allows us to access specific OS features or
install additional software our applications may use.
The container OS is isolated from the host OS and is the environment in which we deploy and run our
application. Combined with the image's immutability, this isolation means the environment for our
application running in development is the same as in production.
Dockerfile overview
A Dockerfile is a text file that contains the instructions we use to build and run a Docker image. The
following aspects of the image are defined:
●● The base or parent image we use to create the new image
●● Commands to update the base OS and install additional software
●● Build artifacts to include, such as a developed application
●● Services to expose, such as storage and network configuration
●● Command to run when the container is launched
Let's map these aspects to an example Dockerfile. Suppose we're creating a Docker image for an ASP.NET
Core website. The Dockerfile may look like the following example.
# Step 1: Specify the parent image for the new image
FROM ubuntu:18.04
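# Step 2: Update the base OS and install additional software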
RUN apt -y update && apt install -y wget nginx software-properties-common apt-transport-https \
&& wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
-O packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& add-apt-repository universe \
&& apt -y update \
&& apt install -y dotnet-sdk-3.0
# STEP 8: Define the entry point of the process that runs in the container
ENTRYPOINT ["dotnet", "website.dll"]
We're not going to cover the Dockerfile file specification here or the detail of each command in our
above example. However, notice that there are several commands in this file that allow us to manipulate
the structure of the image.
Recall, we mentioned earlier that Docker images make use of unionfs. Each of these steps creates a
cached container image as we build the final container image. These temporary images are layered on
top of the previous one and presented as a single image once all steps complete.
Finally, notice the last step, step 8. The ENTRYPOINT in the file indicates which process will execute once
we run a container from an image.
Additional resources
●● Dockerfile reference
●● https://docs.docker.com/engine/reference/builder/
We aren't going to cover all the client commands and command flags here, but we'll look at some of the
most used commands.
✔️ Note: For a full list of client commands visit https://docs.docker.com/engine/reference/command-
line/docker/
Building an image
We use the docker build command to build Docker images. Let's assume we use the Dockerfile defini-
tion from earlier to build an image. Here is an example that shows the build command.
docker build -t temp-ubuntu .
Notice the steps listed in the output. When each step executes, a new layer gets added to the image
we're building.
Also, notice that we execute a number of commands to install software and manage configuration. For
example, in step 2, we run the apt -y update and apt install -y commands to update the OS.
These commands execute in a running container that is created for that step. Once the command has run,
the intermediate container is removed. The underlying cached image is kept on the build host and not
automatically deleted. This optimization ensures that later builds reuse these images to speed up build
times.
Image tags
An image tag is a text string that is used to version an image.
In the example build from earlier, notice the last build message that reads "Successfully tagged
temp-ubuntu:latest". When building an image, we name and optionally tag the image using the -t
command flag. In our example, we named the image using -t temp-ubuntu, while the resulting image
name was tagged temp-ubuntu:latest. An image is labeled with the latest tag if you don't specify a tag.
A single image can have multiple tags assigned to it. By convention, the most recent version of an image
is assigned the latest tag and a tag that describes the image version number. When you release a new
version of an image, you can reassign the latest tag to reference the new image.
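For example, to add an explicit version tag to the image built earlier (the version label here is illustrative), you can run:
docker tag temp-ubuntu:latest temp-ubuntu:version-1.0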
Here is another example. Suppose you want to use the .NET Core samples Docker images. Here we have
four platform versions to choose from:
●● mcr.microsoft.com/dotnet/core/samples:dotnetapp
●● mcr.microsoft.com/dotnet/core/samples:aspnetapp
●● mcr.microsoft.com/dotnet/core/samples:wcfservice
●● mcr.microsoft.com/dotnet/core/samples:wcfclient
You can't remove an image if the image is still in use by a container. The docker rmi command returns
an error message, which lists the container relying on the image.
Docker containers
A Docker image contains the application and environment required by the application to run, and a
container is a running instance of the image.
Volumes
A volume is stored within a directory on the host filesystem. Docker mounts and manages the volumes
in the container. Once mounted, these volumes are isolated from the host machine. Multiple containers
can simultaneously use the same volumes, and volumes aren't removed automatically when a container
stops using them.
Docker creates and manages the new volume by using the docker volume create command. This
command can form part of our Dockerfile definition, which means that we can create volumes as part of
the container creation process. Docker will create the volume if it doesn't exist when you try to mount the
volume into a container the first time.
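For example, the following commands create a named volume and mount it into a container (the volume name and target path are illustrative):
docker volume create webdata
docker run --mount source=webdata,target=/app/data temp-ubuntu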
Bind mounts
A bind mount is conceptually the same as a volume; however, instead of using a specific folder, you can
mount any file or folder on the host, as long as the host can change the contents of these mounts. Just
like volumes, a bind mount is created if you mount it and it doesn't yet exist on the host.
Bind mounts have limited functionality compared to volumes, and even though they're more performant,
they depend on the host having a specific folder structure in place.
Volumes are considered the preferred data storage strategy to use with containers.
--publish 8080:80
Any client browsing to the Docker host IP and port 8080 can access the app.
Prerequisites
●● You'll need a local installation of Docker
●● https://www.docker.com/products/docker-desktop
5. Open a web browser and go to the page for the sample web app at http://localhost:8080.
The COMMAND field shows the container started by running the command dotnet aspnetapp.dll. This
command invokes the .NET Core runtime to start the code in the aspnetapp.dll (the code for the
sample web app). The PORTS field indicates port 80 in the image was mapped to port 8080 on your
computer. The STATUS field shows the application is still running. Make a note of the container's
NAME.
2. Stop the Docker container. Specify the container name for the web app in the following command, in
place of <NAME>.
docker container stop <NAME>
3. Verify that the container is no longer running. The following command shows the status of the
container as Exited. The -a flag shows the status of all containers, not just those that are still running.
docker ps -a
4. Return to the web browser and refresh the page for the sample web app. It should fail with a Connec-
tion Refused error.
2. Verify that the container has been removed with the following command. The command should no
longer list the container.
docker ps -a
5. List the images again to verify that the image for the microsoft/dotnet-samples web app has disap-
peared.
docker image list
Prerequisites
●● A local installation of Docker
●● https://www.docker.com/products/docker-desktop
●● A local installation of Git
●● https://desktop.github.com/
✔️ Note: The sample web app used in this demo implements a web API for a hotel reservations web site.
The web API exposes HTTP POST and GET operations that create and retrieve customers' bookings. The
data is not persisted, and queries return sample data.
3. In this directory, create a new file named Dockerfile with no file extension and open it in a text editor.
echo "" > Dockerfile
notepad Dockerfile
4. Add the following commands to the Dockerfile. Each section is explained in the table below.
#1
FROM mcr.microsoft.com/dotnet/core/sdk:2.2
WORKDIR /src
COPY ["HotelReservationSystem/HotelReservationSystem.csproj", "HotelReservationSystem/"]
COPY ["HotelReservationSystemTypes/HotelReservationSystemTypes.csproj", "HotelReservationSys-
temTypes/"]
RUN dotnet restore "HotelReservationSystem/HotelReservationSystem.csproj"
#2
COPY . .
WORKDIR "/src/HotelReservationSystem"
RUN dotnet build "HotelReservationSystem.csproj" -c Release -o /app
#3
RUN dotnet publish "HotelReservationSystem.csproj" -c Release -o /app
#4
EXPOSE 80
WORKDIR /app
ENTRYPOINT ["dotnet", "HotelReservationSystem.dll"]
✔️ Note: A warning about file and directory permissions will be displayed when the process com-
pletes. You can ignore these warnings for the purposes of this exercise.
2. Run the following command to verify that the image has been created and stored in the local registry.
docker image list
The image will have the name reservationsystem. You'll also see an image named microsoft/
dotnet. This image contains the .NET Core SDK and was downloaded when the reservationsystem
image was built using the Dockerfile.
2. Start a web browser and navigate to http://localhost:8080/api/reservations/1. You should see a JSON
document containing the data for reservation number 1 returned by the web app. You can replace the
“1” with any reservation number, and you'll see the corresponding reservation details.
3. Examine the status of the container using the following command.
docker ps -a
Use cases
Pull images from an Azure container registry to various deployment targets:
●● Scalable orchestration systems that manage containerized applications across clusters of hosts,
including Kubernetes, DC/OS, and Docker Swarm.
●● Azure services that support building and running applications at scale, including Azure Kubernetes
Service (AKS), App Service, Batch, Service Fabric, and others.
Developers can also push to a container registry as part of a container development workflow. For
example, target a container registry from a continuous integration and delivery tool such as Azure
Pipelines or Jenkins.
Key features
●● Registry SKUs - Create one or more container registries in your Azure subscription. Registries are
available in three SKUs: Basic, Standard, and Premium, each of which supports webhook integration,
registry authentication with Azure Active Directory, and delete functionality.
You control access to a container registry using an Azure identity, an Azure Active Directory-backed
service principal, or a provided admin account. Log in to the registry using the Azure CLI or the
standard docker login command.
●● Supported images and artifacts - Grouped in a repository, each image is a read-only snapshot of a
Docker-compatible container. Azure container registries can include both Windows and Linux images.
You control image names for all your container deployments. Use standard Docker commands to push
images into a repository, or pull an image from a repository. In addition to Docker container images,
Azure Container Registry stores related content formats such as Helm charts and images built to the
Open Container Initiative (OCI) Image Format Specification.
●● Azure Container Registry Tasks - Use Azure Container Registry Tasks (ACR Tasks) to streamline
building, testing, pushing, and deploying images in Azure.
ACR Tasks
ACR Tasks is a suite of features within Azure Container Registry. It provides cloud-based container image
building for platforms including Linux, Windows, and ARM, and can automate OS and framework patching
for your Docker containers. ACR Tasks not only extends your “inner-loop” development cycle to the
cloud with on-demand container image builds, but also enables automated builds triggered by source
code updates, updates to a container's base image, or timers. For example, with base image update
triggers, you can automate your OS and application framework patching workflow, maintaining secure
environments while adhering to the principles of immutable containers.
8 https://github.com/opencontainers/image-spec/blob/master/spec.md
Task scenarios
ACR Tasks supports several scenarios to build and maintain container images and other artifacts.
●● Quick task - Build and push a single container image to a container registry on-demand, in Azure,
without needing a local Docker Engine installation. Think docker build, docker push in the
cloud.
●● Automatically triggered tasks - Enable one or more triggers to build an image:
●● Trigger on source code update
●● Trigger on base image update
●● Trigger on a schedule
●● Multi-step task - Extend the single image build-and-push capability of ACR Tasks with multi-step,
multi-container-based workflows.
Each ACR Task has an associated source code context - the location of a set of source files used to build a
container image or other artifact. Example contexts include a Git repository or a local filesystem.
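For example, the quick task scenario reduces to a single Azure CLI command that builds from the current directory's context and pushes the result to your registry (the registry and image names are placeholders):
az acr build --registry <acrName> --image sample/app:v1 .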
Additional resources
●● For more information on Azure Container Registry SKUs visit:
●● https://docs.microsoft.com/en-us/azure/container-registry/container-registry-skus
Resource Limit
Repositories No limit
Images No limit
Layers No limit
Tags No limit
Storage 5 TB
Very high numbers of repositories and tags can impact the performance of your registry. Periodically
delete unused repositories, tags, and images as part of your registry maintenance routine. Deleted
registry resources like repositories, images, and tags cannot be recovered after deletion.
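For example, you can delete an unused repository, including all the images and tags it contains, with the az acr repository delete command:
az acr repository delete --name <acrName> --repository hello-world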
Prerequisites
●● A local installation of Docker
●● https://www.docker.com/products/docker-desktop
●● A local installation of Azure CLI
●● https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
✔️ Note: Because the Azure Cloud Shell doesn't include all required Docker components (the dockerd
daemon), you can't use the Cloud Shell for this demo.
2. Create a resource group for the registry, replace <myResourceGroup> and <myLocation> in the
command below with your own values.
az group create --name <myResourceGroup> --location <myLocation>
3. Create a basic container registry. The registry name must be unique within Azure, and contain 5-50
alphanumeric characters. In the following example, myContainerRegistry007 is used. Update this to a
unique value.
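az acr create --resource-group <myResourceGroup> --name <acrName> --sku Basic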
❗️ Important: Throughout the rest of this demo <acrName> is a placeholder for the container registry
name you created. You'll need to use that name throughout the rest of the demo.
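1. Log in to the registry before pushing and pulling images. Replace <acrName> with the name of your registry.
az acr login --name <acrName>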
2. Download an image, we'll use the hello-world image for the rest of the demo.
docker pull hello-world
3. Tag the image using the docker tag command. Before you can push an image to your registry, you
must tag it with the fully qualified name of your ACR login server. The login server name is in the
format <acrname>.azurecr.io (all lowercase).
docker tag hello-world <acrname>.azurecr.io/hello-world:v1
4. Finally, use docker push to push the image to the ACR instance. This example creates the hel-
lo-world repository, containing the hello-world:v1 image.
docker push <acrname>.azurecr.io/hello-world:v1
5. After pushing the image to your container registry, remove the hello-world:v1 image from your
local Docker environment.
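docker rmi <acrname>.azurecr.io/hello-world:v1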
✔️ Note: This docker rmi command does not remove the image from the hello-world repository
in your Azure container registry, only the local version.
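1. Use the az acr repository list command to list the repositories in your registry.
az acr repository list --name <acrName> --output table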
Output:
Result
----------------
hello-world
2. Use the az acr repository show-tags command to list the tags on the hello-world repository.
az acr repository show-tags --name <acrName> --repository hello-world --output table
Output:
Result
--------
v1
Run the v1 image from your registry by using the docker run command.
docker run <acrname>.azurecr.io/hello-world:v1
Example output:
Unable to find image 'mycontainerregistry007.azurecr.io/hello-world:v1'
locally
v1: Pulling from hello-world
Digest: sha256:662dd8e65ef7ccf13f417962c2f77567d3b132f12c95909de6c85ac3c326a345
Status: Downloaded newer image for mycontainerregistry007.azurecr.io/hello-world:v1
[...]
Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group,
the container registry, and the container images stored there.
az group delete --name <myResourceGroup>
Container groups
The top-level resource in Azure Container Instances is the container group. A container group is a collection of containers that get scheduled on the same host machine. The containers in a container group share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kubernetes.
The following diagram shows an example of a container group that includes multiple containers:
Deployment
There are two common ways to deploy a multi-container group: use a Resource Manager template or a
YAML file. A Resource Manager template is recommended when you need to deploy additional Azure
service resources (for example, an Azure Files share) when you deploy the container instances. Due to the
YAML format's more concise nature, a YAML file is recommended when your deployment includes only
container instances.
Resource allocation
Azure Container Instances allocates resources such as CPUs, memory, and optionally GPUs (preview) to a
container group by adding the resource requests of the instances in the group. Taking CPU resources as
an example, if you create a container group with two instances, each requesting 1 CPU, then the container group is allocated 2 CPUs.
Networking
Container groups share an IP address and a port namespace on that IP address. To enable external clients
to reach a container within the group, you must expose the port on the IP address and from the container. Because containers within the group share a port namespace, port mapping isn't supported. Containers within a group can reach each other via localhost on the ports that they have exposed, even if those
ports aren't exposed externally on the group's IP address.
Storage
You can specify external volumes to mount within a container group. You can map those volumes into
specific paths within the individual containers in a group.
Common scenarios
Multi-container groups are useful in cases where you want to divide a single functional task into a small
number of container images. These images can then be delivered by different teams and have separate
resource requirements.
Example usage could include:
●● A container serving a web application and a container pulling the latest content from source control.
●● An application container and a logging container. The logging container collects the logs and metrics
output by the main application and writes them to long-term storage.
●● An application container and a monitoring container. The monitoring container periodically makes a
request to the application to ensure that it's running and responding correctly, and raises an alert if
it's not.
●● A front-end container and a back-end container. The front end might serve a web application, with
the back end running a service to retrieve data.
Prerequisites
●● An Azure subscription. If you don't have one, create a free account before you begin.
●● https://azure.microsoft.com/free/
3. Create a new resource group with the name az204-aci-rg so that it will be easier to clean up these
resources when you are finished with the module. Replace <myLocation> with a region near you.
az group create --name az204-aci-rg --location <myLocation>
Create a container
You create a container by providing a name, a Docker image, and an Azure resource group to the az
container create command. You will expose the container to the Internet by specifying a DNS name
label.
1. Create a DNS name to expose your container to the Internet. Your DNS name must be unique; run this command from Cloud Shell to create a Bash variable that holds a unique name.
DNS_NAME_LABEL=aci-demo-$RANDOM
2. Run the following az container create command to start a container instance. Be sure to
replace the <myLocation> with the region you specified earlier. It will take a few minutes for the
operation to complete.
az container create \
--resource-group az204-aci-rg \
--name mycontainer \
--image microsoft/aci-helloworld \
--ports 80 \
--dns-name-label $DNS_NAME_LABEL \
--location <myLocation>
In the command above, $DNS_NAME_LABEL specifies your DNS name. The image name, microsoft/aci-helloworld, refers to a Docker image hosted on Docker Hub that runs a basic Node.js web
application.
1. When the az container create command completes, run az container show to check the container's status.
az container show \
    --resource-group az204-aci-rg \
    --name mycontainer \
    --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \
    --out table
You see your container's fully qualified domain name (FQDN) and its provisioning state. Here's an
example.
FQDN ProvisioningState
-------------------------------------- -------------------
aci-demo.eastus.azurecontainer.io Succeeded
✔️ Note: If your container is in the Creating state, wait a few moments and run the command again
until you see the Succeeded state.
2. From a browser, navigate to your container's FQDN to see it running. You may get a warning that the
site isn't safe.
Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group and all the resources it contains.
az group delete --name az204-aci-rg --no-wait --yes
Run to completion
Azure Container Instances starts the container, and then stops it when its application, or script, exits.
When Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container's status is set to Terminated.
Secure values
Objects with secure values are intended to hold sensitive information like passwords or keys for your application. Using secure values for environment variables is both safer and more flexible than including them in your container's image.
Environment variables with secure values aren't visible in your container's properties. Their values can be
accessed only from within the container. For example, container properties viewed in the Azure portal or
Azure CLI display only a secure variable's name, not its value.
Set a secure environment variable by specifying the secureValue property instead of the regular
value for the variable's type. The two variables defined in the following YAML demonstrate the two
variable types.
YAML deployment
Create a secure-env.yaml file with the following snippet.
apiVersion: 2018-10-01
location: eastus
name: securetest
properties:
  containers:
  - name: mycontainer
    properties:
      environmentVariables:
        - name: 'NOTSECRET'
          value: 'my-exposed-value'
        - name: 'SECRET'
          secureValue: 'my-secret-value'
      image: nginx
      ports: []
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  osType: Linux
  restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups
Run the following command to deploy the container group with YAML:
az container create --resource-group myResourceGroup --file secure-env.yaml
To mount an Azure Files share as a volume in a container group, supply the share and storage account details to the az container create command:
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name hellofiles \
--image mcr.microsoft.com/azuredocs/aci-hellofiles \
--dns-name-label aci-demo \
--ports 80 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /aci/logs/
Lab scenario
Your organization is seeking a way to automatically create virtual machines (VMs) to run tasks and
immediately terminate. You're tasked with evaluating multiple compute services in Microsoft Azure and
determining which service can help you automatically create VMs and install custom software on those
machines. As a proof of concept, you have decided to try creating VMs from built-in images and contain-
er images so that you can compare the two solutions. To keep your proof of concept simple, you'll create
a special “IP check” application written in .NET that you'll automatically deploy to your machines. Your
proof of concept will evaluate the Azure Container Instances and Azure Virtual Machines services.
Objectives
After you complete this lab, you will be able to:
●● Create a VM by using the Azure Command-Line Interface (CLI).
●● Deploy a Docker container image to Azure Container Registry.
●● Deploy a container from a container image in Container Registry by using Container Instances.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
Azure Resource Manager is the deployment and management service for Azure. It provides a management
layer that enables you to create, update, and delete resources in your Azure subscription. Azure provides
four levels of management scope.
Which one of the below is not a valid management scope?
Access control
Resource group
Management group
Subscription
Review Question 3
Docker is a containerization platform used to develop, ship, and run containers. Which of the following uses
the Docker REST API to send instructions to either a local or remote server?
Docker Hub
Docker objects
Docker client
Docker server
Review Question 4
What does the following Docker command do?
docker rmi temp-ubuntu:version-1.0
Removes the container from the registry
Tags the image with a version
Removes the image from the registry
Lists containers using the image
Review Question 5
The top-level resource in Azure Container Instances is the container group. A container group is a collection
of containers that get scheduled on the same host machine. Your solution requires using a multi-container group. Which host OS should you use?
Windows
Linux
MacOS
Answers
Review Question 1
An availability set is a logical grouping of VMs within a datacenter that allows Azure to understand how
your application is built to provide for redundancy and availability. Is the following statement about
availability sets True or False?
It is recommended that a minimum of three or more VMs are created within an availability set to provide
for a highly available application and to meet the 99.95% Azure SLA.
True
■■ False
Explanation
Two or more VMs are necessary to meet the 99.95% Azure SLA.
Review Question 2
Azure Resource Manager is the deployment and management service for Azure. It provides a manage-
ment layer that enables you to create, update, and delete resources in your Azure subscription. Azure
provides four levels of management scope.
Which one of the below is not a valid management scope?
■■ Access control
Resource group
Management group
Subscription
Explanation
The four levels of management scope are: Management group, subscriptions, resource groups, and resources.
Review Question 3
Docker is a containerization platform used to develop, ship, and run containers. Which of the following
uses the Docker REST API to send instructions to either a local or remote server?
Docker Hub
Docker objects
■■ Docker client
Docker server
Explanation
The Docker client is a command-line application named docker that provides us with a command line
interface (CLI) to interact with a Docker server. The docker command uses the Docker REST API to send
instructions to either a local or remote server and functions as the primary interface we use to manage our
containers.
Review Question 4
What does the following Docker command do?
docker rmi temp-ubuntu:version-1.0
Removes the container from the registry
Tags the image with a version
■■ Removes the image from the registry
Lists containers using the image
Explanation
You can remove an image from the local docker registry with the "docker rmi" command. Specify the name
or ID of the image to remove.
Review Question 5
The top-level resource in Azure Container Instances is the container group. A container group is a
collection of containers that get scheduled on the same host machine. Your solution requires using a multi-container group. Which host OS should you use?
Windows
■■ Linux
MacOS
Explanation
Multi-container groups currently support only Linux containers. For Windows containers, Azure Container
Instances only supports deployment of a single instance.
Module 6 Implement user authentication and
authorization
Application registration
When you register an Azure AD application in the Azure portal, two objects are created in your Azure AD
tenant:
●● An application object, and
●● A service principal object
Application object
An Azure AD application is defined by its one and only application object, which resides in the Azure AD
tenant where the application was registered, known as the application's “home” tenant. The Microsoft
Graph Application entity defines the schema for an application object's properties.
The application object serves as the template from which common and default properties are derived for use in creating corresponding service principal objects. An application object therefore has a 1:1 relationship with the software application, and a 1:many relationship with its corresponding service principal object(s).
A service principal must be created in each tenant where the application is used, enabling it to establish
an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant application has only one service principal (in its home tenant), created and consented for use during application
registration. A multi-tenant Web application/API also has a service principal created in each tenant where
a user from that tenant has consented to its use.
Prerequisites
This demo is performed in the Azure Portal.
Field Value
Name az204appregdemo
Supported account types Select Accounts in this organizational directory
Redirect URI (optional) Select Public client/native (mobile & desktop) and enter http://localhost in the box to the right.
Below are more details on the Supported account types.
Azure AD assigns a unique application (client) ID to your app, and you're taken to your application's
Overview page.
✔️ Note: Leave the app registration in place, we'll be using it in future demos in this module.
Authentication flows
Below are some of the different authentication flows provided by Microsoft Authentication Library
(MSAL). These flows can be used in a variety of different application scenarios.
Flow Description
Authorization code - Native and web apps securely obtain tokens in the name of the user
Client credentials - Service applications run without user interaction
On-behalf-of - The application calls a service/web API, which in turn calls Microsoft Graph
Implicit - Used in browser-based applications
Device code - Enables sign-in to a device by using another device that has a browser
Integrated Windows - Windows computers silently acquire an access token when they are domain joined
Interactive - Mobile and desktop applications call Microsoft Graph in the name of a user
Username/password - The application signs in a user by using their username and password
●● The identity provider URL (named the instance) and the sign-in audience for your application. These
two parameters are collectively known as the authority.
●● The tenant ID if you are writing a line of business application solely for your organization (also named
single-tenant application).
●● The application secret (client secret string) or certificate (of type X509Certificate2) if it's a confidential
client app.
●● For web apps, and sometimes for public client apps (in particular when your app needs to use a broker), you'll also have set the redirectUri where the identity provider will contact your application back with the security tokens.
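For instance, a minimal public client application can be instantiated from just its client ID. This is a sketch assuming MSAL.NET (the Microsoft.Identity.Client namespace) and a clientId variable holding the application (client) ID:
// Minimal public client: only the client ID is required (sketch)
IPublicClientApplication app = PublicClientApplicationBuilder.Create(clientId)
    .Build();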
In the same way, the following code instantiates a confidential application (a Web app located at
https://myapp.azurewebsites.net) handling tokens from users in the Microsoft Azure public
cloud, with their work and school accounts, or their personal Microsoft accounts. The application is
identified with the identity provider by sharing a client secret:
string redirectUri = "https://myapp.azurewebsites.net";
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create(clientId)
.WithClientSecret(clientSecret)
.WithRedirectUri(redirectUri)
.Build();
Builder modifiers
In the code snippets using application builders, a number of .With methods can be applied as modifiers
(for example, .WithAuthority and .WithRedirectUri).
.WithAuthority modifier
The .WithAuthority modifier sets the application default authority to an Azure AD authority, with the
possibility of choosing the Azure Cloud, the audience, the tenant (tenant ID or domain name), or providing directly the authority URI.
var clientApp = PublicClientApplicationBuilder.Create(client_id)
.WithAuthority(AzureCloudInstance.AzurePublic, tenant_id)
.Build();
.WithRedirectUri modifier
The .WithRedirectUri modifier overrides the default redirect URI. In the case of public client applica-
tions, this will be useful for scenarios involving the broker.
Modifier Description
.WithAuthority() (7 overrides) - Sets the application default authority to an Azure AD authority, with the possibility of choosing the Azure Cloud, the audience, the tenant (tenant ID or domain name), or providing directly the authority URI.
.WithTenantId(string tenantId) - Overrides the tenant ID, or the tenant description.
.WithClientId(string) - Overrides the client ID.
.WithRedirectUri(string redirectUri) - Overrides the default redirect URI. In the case of public client applications, this will be useful for scenarios involving the broker.
.WithComponent(string) - Sets the name of the library using MSAL.NET (for telemetry reasons).
.WithDebugLoggingCallback() - If called, the application will call Debug.Write, simply enabling debugging traces.
.WithLogging() - If called, the application will call a callback with debugging traces.
.WithTelemetry(TelemetryCallback telemetryCallback) - Sets the delegate used to send telemetry.
The following modifiers are specific to confidential client applications:
.WithCertificate(X509Certificate2 certificate) - Sets the certificate identifying the application with Azure AD.
.WithClientSecret(string clientSecret) - Sets the client secret (app password) identifying the application with Azure AD.
Prerequisites
This demo is performed in Visual Studio Code. We'll be using information from the app we registered in
the Register an app with the Microsoft Identity platform demo.
Code Description
.Create - Creates a PublicClientApplicationBuilder from a clientID.
.WithAuthority - Adds a known Authority corresponding to an ADFS server. In the code we're specifying the Public cloud, and using the tenant for the app we registered.
Acquire a token
When you registered the az204appregdemo app it automatically generated an API permission user.read for Microsoft Graph. We'll use that permission to acquire a token.
1. Set the permission scope for the token request. Add the following code below the PublicClientApplicationBuilder.
string[] scopes = { "user.read" };
2. Request the token and write the result out to the console.
AuthenticationResult result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
Console.WriteLine($"Token:\t{result.AccessToken}");
✔️ Note: We'll be reusing this project in the Retrieving profile information by using the Microsoft Graph
SDK demo in the next lesson.
Microsoft Graph is fully replacing Azure Active Directory (Azure AD) Graph. For most production apps,
Microsoft Graph can already fully support Azure AD scenarios. In addition, Microsoft Graph supports
many new Azure AD datasets and features that are not available in Azure AD Graph.
●● The Microsoft Graph API offers a single endpoint, https://graph.microsoft.com, to provide
access to rich, people-centric data and insights exposed as resources of Microsoft 365 services. You
can use REST APIs or SDKs to access the endpoint and build apps that support scenarios spanning
across productivity, collaboration, education, security, identity, access, device management, and much
more.
●● Microsoft Graph connectors (preview) work in the incoming direction, delivering data external to the
Microsoft cloud into Microsoft Graph services and applications, to enhance Microsoft 365 experiences
such as Microsoft Search.
●● Microsoft Graph data connect provides a set of tools to streamline secure and scalable delivery of
Microsoft Graph data to popular Azure data stores. This cached data serves as data sources for Azure
development tools that you can use to build intelligent applications.
●● Structure
●● https://graph.microsoft.com/{version}/{resource}?{query-parameters}
●● Basic API
●● https://graph.microsoft.com/v1.0/
●● Beta API
●● https://graph.microsoft.com/beta/
●● Relative resource URLs (not all inclusive):
●● /me
●● /me/messages
●● /me/drive
●● /user
●● /group
Method Description
GET Read data from a resource.
POST Create a new resource, or perform an action.
PATCH Update a resource with new values.
PUT Replace a resource with a new one.
DELETE Remove a resource.
●● For the CRUD methods GET and DELETE, no request body is required.
●● The POST, PATCH, and PUT methods require a request body, usually specified in JSON format, that
contains additional information, such as the values for properties of the resource.
You can also query Microsoft Graph directly over REST by using an HttpClient:
var httpClient = new HttpClient();
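Extending that line into a complete call, a sketch (assuming an accessToken string already acquired with MSAL, as in the earlier demo) might look like this:
// Sketch: call the /me endpoint directly over REST (accessToken is an assumption)
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);
string json = await httpClient.GetStringAsync("https://graph.microsoft.com/v1.0/me");
Console.WriteLine(json);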
The Microsoft Graph .NET SDK uses the following NuGet packages:
●● Microsoft.Graph
●● Microsoft.Graph.Auth
The following code example shows how to create an instance of a Microsoft Graph client with an authentication provider.
// Build a client application.
IPublicClientApplication publicClientApplication = PublicClientApplicationBuilder
.Create("INSERT-CLIENT-APP-ID")
.Build();
// Create an authentication provider by passing in a client application and graph scopes.
DeviceCodeProvider authProvider = new DeviceCodeProvider(publicClientApplication, graphScopes);
// Create a new instance of GraphServiceClient with the authentication provider.
GraphServiceClient graphClient = new GraphServiceClient(authProvider);
User me = await graphClient.Me.Request().GetAsync();
Prerequisites
This demo is performed in Visual Studio Code. We'll be reusing the az204-authdemo app we created in the Interactive authentication by using MSAL.NET demo.
2. Add code for the Microsoft Graph client. The code below creates the Graph services client and passes
it the authentication provider.
var client = new GraphServiceClient(provider);
3. Add code to request the information from Microsoft Graph and write the DisplayName to the
console.
User me = await client.Me.Request().GetAsync();
Console.WriteLine($"Display Name:\t{me.DisplayName}");
SAS signature
You can sign a SAS in one of two ways:
●● With a user delegation key that was created using Azure Active Directory (Azure AD) credentials. A
user delegation SAS is signed with the user delegation key.
To get the user delegation key and create the SAS, an Azure AD security principal must be assigned a
role-based access control (RBAC) role that includes the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action.
●● With the storage account key. Both a service SAS and an account SAS are signed with the storage
account key. To create a SAS that is signed with the account key, an application must have access to
the account key.
SAS token
The SAS token is a string that you generate on the client side, for example by using one of the Azure
Storage client libraries. The SAS token is not tracked by Azure Storage in any way. You can create an
unlimited number of SAS tokens on the client side. After you create a SAS, you can distribute it to client
applications that require access to resources in your storage account.
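As a sketch of that client-side generation, assuming the Azure.Storage.Blobs v12 library (the container, blob, and account values are placeholders):
// Sketch: build a read-only, one-hour service SAS for a single blob
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "mycontainer", // placeholder
    BlobName = "myblob.txt",           // placeholder
    Resource = "b",                    // "b" = blob
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);
var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();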
When a client application provides a SAS URI to Azure Storage as part of a request, the service checks the
SAS parameters and signature to verify that it is valid for authorizing the request. If the service verifies
that the signature is valid, then the request is authorized. Otherwise, the request is declined with error
code 403 (Forbidden).
2. A lightweight service authenticates the client as needed and then generates a SAS. Once the client application receives the SAS, it can access storage account resources directly with the permissions defined by the SAS and for the interval allowed by the SAS. The SAS mitigates the need for routing all data through the front-end proxy service.
Many real-world services may use a hybrid of these two approaches. For example, some data might be
processed and validated via the front-end proxy, while other data is saved and/or read directly using SAS.
Additionally, a SAS is required to authorize access to the source object in a copy operation in certain
scenarios:
●● When you copy a blob to another blob that resides in a different storage account, you must use a SAS
to authorize access to the source blob. You can optionally use a SAS to authorize access to the
destination blob as well.
●● When you copy a file to another file that resides in a different storage account, you must use a SAS to
authorize access to the source file. You can optionally use a SAS to authorize access to the destination
file as well.
●● When you copy a blob to a file, or a file to a blob, you must use a SAS to authorize access to the
source object, even if the source and destination objects reside within the same storage account.
Lab scenario
As a new employee at your company, you signed in to your Microsoft 365 applications for the first time
and discovered that your profile information isn't accurate. You also noticed that the name and profile
picture when you sign in aren't correct. Rather than change these values manually, you have decided that
this is a good opportunity to learn the Microsoft identity platform and how you can use different libraries
such as the Microsoft Authentication Library (MSAL) and the Microsoft Graph SDK to change these values
in a programmatic manner.
Objectives
After you complete this lab, you will be able to:
●● Create a new application registration in Azure Active Directory (Azure AD).
●● Use the MSAL.NET library to implement the interactive authentication flow.
●● Obtain a token from the Microsoft identity platform by using the MSAL.NET library.
●● Query Microsoft Graph by using the Microsoft Graph SDK and the device code flow.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
What resources/objects are created when you register an Azure AD application in the portal?
Select all that apply.
Application object
App service plan
Service principal object
Hosting plan
Review Question 3
A shared access signature (SAS) is a signed URI that points to one or more storage resources and includes a
token that contains a special set of query parameters.
What are the two ways you can sign an SAS?
With a self-signed certificate
With the storage account key
With a user delegation key
With an HTTPS URI
Review Question 4
The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential
clients.
Which of the two definitions below describe a public client application?
The app isn't trusted to safely keep application secrets, so it only accesses Web APIs on behalf of the
user. It can't hold configuration-time secrets, so it doesn't have client secrets.
The client ID is exposed through the web browser, but the secret is passed only in the back channel
and never directly exposed.
Review Question 5
With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application builders:
"PublicClientApplicationBuilder" and "ConfidentialClientApplicationBuilder".
True or false, all of the ".With" modifiers can be used for both the Public and Confidential builders.
True
False
Answers
Review Question 1
In the context of Azure Active Directory (Azure AD) "application" is frequently used as a conceptual term,
referring to not only the application software, but also its Azure AD registration and role in authentication/authorization “conversations” at runtime.
In that context what role(s) can an app function?
Client role
Resource server role
Both client and resource server role
■■ All of the above
Explanation
By definition, an application can function in these roles: client role, resource server role, or both client and resource server.
Review Question 2
What resources/objects are created when you register an Azure AD application in the portal?
Select all that apply.
■■ Application object
App service plan
■■ Service principal object
Hosting plan
Explanation
When you register an Azure AD application in the Azure portal, two objects are created in your Azure AD tenant: an application object and a service principal object.
Review Question 3
A shared access signature (SAS) is a signed URI that points to one or more storage resources and includes a token that contains a special set of query parameters.
What are the two ways you can sign an SAS?
With a self-signed certificate
■■ With the storage account key
■■ With a user delegation key
With an HTTPS URI
Explanation
You can sign a SAS in one of two ways: with a user delegation key or with the storage account key.
Review Question 4
The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential
clients.
Which of the two definitions below describe a public client application?
■■ The app isn't trusted to safely keep application secrets, so it only accesses Web APIs on behalf of the
user. It can't hold configuration-time secrets, so it doesn't have client secrets.
The client ID is exposed through the web browser, but the secret is passed only in the back channel
and never directly exposed.
Explanation
Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential clients.
Review Question 5
With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application
builders: "PublicClientApplicationBuilder" and "ConfidentialClientApplicationBuilder".
True or false, all of the ".With" modifiers can be used for both the Public and Confidential builders.
True
■■ False
Explanation
Most of the .With modifiers can be used for both Public and Confidential app builders. There are a few that
are specific to the Confidential client: ".WithCertificate()", and ".WithClientSecret()".
Module 7 Implement secure cloud solutions
●● Monitor access and use: You can monitor activity by enabling logging for your vaults. You can
configure Azure Key Vault to:
●● Archive to a storage account.
●● Stream to an event hub.
●● Send the logs to Azure Monitor logs.
●● Simplified administration of application secrets: Security information must be secured, it must
follow a life cycle, and it must be highly available. Azure Key Vault simplifies the process of meeting
these requirements by:
●● Removing the need for in-house knowledge of Hardware Security Modules
●● Scaling up on short notice to meet your organization’s usage spikes.
●● Replicating the contents of your Key Vault within a region and to a secondary region. Data replication ensures high availability and removes the need for any action from the administrator to trigger the failover.
●● Providing standard Azure administration options via the portal, Azure CLI and PowerShell.
●● Automating certain tasks on certificates that you purchase from Public CAs, such as enrollment
and renewal.
Authentication
To do any operations with Key Vault, you first need to authenticate to it. There are three ways to authenticate to Key Vault:
●● Managed identities for Azure resources: When you deploy an app on a virtual machine in Azure,
you can assign an identity to your virtual machine that has access to Key Vault. You can also assign
identities to other Azure resources. The benefit of this approach is that the app or service isn't
managing the rotation of the first secret. Azure automatically rotates the identity. We recommend this
approach as a best practice.
●● Service principal and certificate: You can use a service principal and an associated certificate that
has access to Key Vault. We don't recommend this approach because the application owner or
developer must rotate the certificate.
●● Service principal and secret: Although you can use a service principal and a secret to authenticate to
Key Vault, we don't recommend it. It's hard to automatically rotate the bootstrap secret that's used to
authenticate to Key Vault.
Request URL
Key management operations use the HTTP DELETE, GET, PATCH, PUT, and POST verbs, and cryptographic operations against existing key objects use HTTP POST. Clients that cannot support specific HTTP verbs
may also use HTTP POST using the X-HTTP-REQUEST header to specify the intended verb; requests that
do not normally require a body should include an empty body when using HTTP POST, for example when
using POST instead of DELETE.
To work with objects in the Azure Key Vault, the following are example URLs:
●● To CREATE a key called TESTKEY in a Key Vault use - PUT /keys/TESTKEY?api-version=<api_version> HTTP/1.1
●● To IMPORT a key called IMPORTEDKEY into a Key Vault use - POST /keys/IMPORTEDKEY/import?api-version=<api_version> HTTP/1.1
●● To GET a secret called MYSECRET in a Key Vault use - GET /secrets/MYSECRET?api-version=<api_version> HTTP/1.1
●● To SIGN a digest using a key called TESTKEY in a Key Vault use - POST /keys/TESTKEY/sign?api-version=<api_version> HTTP/1.1
The authority for a request to a Key Vault is always as follows: https://{keyvault-name}.vault.azure.net/
Keys are always stored under the /keys path; secrets are always stored under the /secrets path.
API Version
The Azure Key Vault Service supports protocol versioning to provide compatibility with down-level clients,
although not all capabilities will be available to those clients. Clients must use the api-version query
string parameter to specify the version of the protocol that they support as there is no default.
Azure Key Vault protocol versions follow a date numbering scheme using a {YYYY}.{MM}.{DD} format.
Authentication
All requests to Azure Key Vault MUST be authenticated. Azure Key Vault supports Azure Active Directory
access tokens that may be obtained using OAuth2.
Access tokens must be sent to the service using the HTTP Authorization header:
PUT /keys/MYKEY?api-version=<api_version> HTTP/1.1
Authorization: Bearer <access_token>
When an access token is not supplied, or when a token is not accepted by the service, an HTTP 401 error
will be returned to the client and will include the WWW-Authenticate header, for example:
401 Unauthorized
WWW-Authenticate: Bearer authorization="…", resource="…"
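Putting these pieces together, a hypothetical secret retrieval over REST might look like the following curl command; the vault name, secret name, and api-version value are placeholders:
curl -H "Authorization: Bearer <access_token>" \
  "https://<keyvault-name>.vault.azure.net/secrets/MYSECRET?api-version=<api_version>"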
Prerequisites
This demo is performed in either the Cloud Shell or a local Azure CLI installation.
●● Cloud Shell: be sure to select PowerShell as the shell.
●● If you need to install Azure CLI locally visit:
●● https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
Login to Azure
1. Choose to either:
●● Launch the Cloud Shell: https://shell.azure.com
●● Or, open a terminal and login to your Azure account using the az login command.
This command will return some JSON. The last line will contain the password in plain text.
"value": "hVFkk965BuUv"
You have created a Key Vault, stored a secret, and retrieved it.
Clean up resources
When you no longer need the resources in this demo use the following command to delete the resource
group and associated Key Vault.
az group delete --name $myResourceGroup --no-wait --yes
Terminology
The following terms are used throughout this section of the course:
●● Client ID - a unique identifier generated by Azure AD that is tied to an application and service
principal during its initial provisioning.
●● Principal ID - the object ID of the service principal object for your managed identity that is used to
grant role-based access to an Azure resource.
●● Azure Instance Metadata Service (IMDS) - a REST endpoint accessible to all IaaS VMs created via
the Azure Resource Manager. The endpoint is available at a well-known non-routable IP address
(169.254.169.254) that can be accessed only from within the VM.
3. Azure Resource Manager configures the identity on the VM by updating the Azure Instance Metadata
Service identity endpoint with the service principal client ID and certificate.
4. After the VM has an identity, use the service principal information to grant the VM access to Azure
resources. To call Azure Resource Manager, use role-based access control (RBAC) in Azure AD to
assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the
specific secret or key in Key Vault.
5. Your code that's running on the VM can request a token from the Azure Instance Metadata Service endpoint, accessible only from within the VM: http://169.254.169.254/metadata/identity/oauth2/token
●● The resource parameter specifies the service to which the token is sent. To authenticate to Azure
Resource Manager, use resource=https://management.azure.com/.
●● The api-version parameter specifies the IMDS version; use api-version=2018-02-01 or greater.
6. A call is made to Azure AD to request an access token (as specified in step 5) by using the client ID
and certificate configured in step 3. Azure AD returns a JSON Web Token (JWT) access token.
7. Your code sends the access token on a call to a service that supports Azure AD authentication.
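Putting steps 5 through 7 together, a sketch of the token request from inside the VM might look like this (the resource value targets Azure Resource Manager, as described above):
// Sketch: request a managed identity token from IMDS (works only inside an Azure VM)
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Metadata", "true"); // required by IMDS
string url = "http://169.254.169.254/metadata/identity/oauth2/token"
    + "?api-version=2018-02-01&resource=https://management.azure.com/";
string tokenResponse = await client.GetStringAsync(url); // JSON containing access_token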
Keys
Keys serve as the name for key-value pairs and are used to store and retrieve corresponding values. It's a
common practice to organize keys into a hierarchical namespace by using a character delimiter, such as /
or :. Use a convention that's best suited for your application. App Configuration treats keys as a whole. It
doesn't parse keys to figure out how their names are structured or enforce any rule on them.
Keys stored in App Configuration are case-sensitive, unicode-based strings. The keys app1 and App1 are
distinct in an App Configuration store. Keep this in mind when you use configuration settings within an
application because some frameworks handle configuration keys case-insensitively.
You can use any unicode character in key names entered into App Configuration except for *, ,, and \.
These characters are reserved. If you need to include a reserved character, you must escape it by using \
{Reserved Character}. There's a combined size limit of 10,000 characters on a key-value pair. This
limit includes all characters in the key, its value, and all associated optional attributes. Within this limit,
you can have many hierarchical levels for keys.
Label keys
Key values in App Configuration can optionally have a label attribute. Labels are used to differentiate key
values with the same key. A key app1 with labels A and B forms two separate keys in an App Configuration store. By default, the label for a key value is empty, or null.
Labels provide a convenient way to create variants of a key. A common use of labels is to specify multiple
environments for the same key:
Key = AppName:DbEndpoint & Label = Test
Key = AppName:DbEndpoint & Label = Staging
Key = AppName:DbEndpoint & Label = Production
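As a sketch of how an application consumes a label, assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration provider, a .NET app can select the variant for its environment at startup (the connection string and label are placeholders):
// Sketch: load only key values carrying the "Production" label
var config = new ConfigurationBuilder()
    .AddAzureAppConfiguration(options =>
        options.Connect(connectionString)            // App Configuration connection string
               .Select(KeyFilter.Any, "Production")) // any key, "Production" label
    .Build();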
Values
Values assigned to keys are also unicode strings. You can use all unicode characters for values. There's an
optional user-defined content type associated with each value. Use this attribute to store information, for
example an encoding scheme, about a value that helps your application to process it properly.
Configuration data stored in an App Configuration store, which includes all keys and values, is encrypted
at rest and in transit. App Configuration isn't a replacement solution for Azure Key Vault. Don't store
application secrets in it.
Basic concepts
Here are several new terms related to feature management:
●● Feature flag: A feature flag is a variable with a binary state of on or off. The feature flag also has an
associated code block. The state of the feature flag triggers whether the code block runs or not.
●● Feature manager: A feature manager is an application package that handles the lifecycle of all the
feature flags in an application. The feature manager typically provides additional functionality, such as
caching feature flags and updating their states.
●● Filter: A filter is a rule for evaluating the state of a feature flag. A user group, a device or browser type,
a geographic location, and a time window are all examples of what a filter can represent.
An effective implementation of feature management consists of at least two components working in
concert:
●● An application that makes use of feature flags.
●● A separate repository that stores the feature flags and their current states.
How these components interact is illustrated in the following examples.
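A minimal sketch of the pattern being described here is an if statement that gates a code block on the flag:
// Sketch: the feature flag gates the enclosed code block
if (featureFlag) {
    // Run this code block only when the feature flag is on
}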
In this case, if featureFlag is set to True, the enclosed code block is executed; otherwise, it's skipped.
You can set the value of featureFlag statically, as in the following code example:
bool featureFlag = true;
You can also evaluate the flag's state based on certain rules:
bool featureFlag = isBetaUser();
A slightly more complicated feature flag pattern includes an else statement as well:
if (featureFlag) {
    // The following code will run if the featureFlag value is true
} else {
    // The following code will run if the featureFlag value is false
}
Lab scenario
Your company has a data-sharing business-to-business (B2B) agreement with another local business in
which you're expected to parse a file that's dropped off nightly. To keep things simple, the second
company has decided to drop the file as a Microsoft Azure Storage blob every night. You're now tasked
with devising a way to access the file and generate a secure URL that any internal system can use to
access the blob without exposing the file to the internet. You have decided to use Azure Key Vault to
store the credentials for the storage account and Azure Functions to write the code necessary to access
the file without storing credentials in plaintext or exposing the file to the internet.
Objectives
After you complete this lab, you will be able to:
●● Create an Azure key vault and store secrets in the key vault.
●● Create a server-assigned managed identity for an Azure App Service instance.
●● Create a Key Vault access policy for an Azure Active Directory identity or application.
●● Use the Storage .NET SDK to download a blob.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
Below are some characteristics of a managed identity:
●● Created as a stand-alone Azure resource
●● Independent life-cycle, must be explicitly deleted
●● Can be shared
Review Question 3
The Azure Key Vault Service supports protocol versioning to provide compatibility with down-level clients,
although not all capabilities will be available to those clients.
True or False: If you don't specify an API version using the "api-version" parameter, Key Vault will default to the current version.
True
False
Review Question 4
Azure App Configuration provides a service to centrally manage application settings and feature flags.
Azure App Configuration stores configuration data as key-value pairs.
Which of the below are valid uses of the label attribute in key pairs?
(Select all that apply.)
Create multiple versions of a key value
Specify multiple environments for the same key
Assign a value to a key
Store application secrets
Review Question 5
Which of the options below matches the following definition?
A security identity that user-created apps, services, and automation tools use to access specific Azure
resources.
Vault owner
Vault consumer
Azure tenant ID
Service principal
Answers
Review Question 1
Azure Key Vault is a tool for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.
True or false, Azure Key Vault requires that you always pass credentials through your code to access
stored secrets.
True
■■ False
Explanation
You can use a managed identity to authenticate to Key Vault, or any service that supports Azure AD
authentication, without having any credentials in your code.
Review Question 2
Below are some characteristics of a managed identity:
Review Question 4
Azure App Configuration provides a service to centrally manage application settings and feature flags.
Azure App Configuration stores configuration data as key-value pairs.
Which of the below are valid uses of the label attribute in key pairs?
(Select all that apply.)
■■ Create multiple versions of a key value
■■ Specify multiple environments for the same key
Assign a value to a key
Store application secrets
Explanation
Labels add a layer of metadata to the key-value pairs. You can use them to differentiate multiple versions of
the same key, including environments. Labels are not values. You should never store application secrets in
App Configuration.
Review Question 5
Which of the options below matches the following definition?
A security identity that user-created apps, services, and automation tools use to access specific Azure
resources.
Vault owner
Vault consumer
Azure tenant ID
■■ Service principal
Explanation
An Azure service principal is a security identity that user-created apps, services, and automation tools use to
access specific Azure resources.
Module 8 Implement API Management
●● The Developer portal serves as the main web presence for developers, where they can:
●● Read API documentation.
●● Try out an API via the interactive console.
●● Create an account and subscribe to get API keys.
●● Access analytics on their own usage.
Products
Products are how APIs are surfaced to developers. Products in API Management have one or more APIs,
and are configured with a title, description, and terms of use. Products can be Open or Protected.
Protected products must be subscribed to before they can be used, while open products can be used
without a subscription. Subscription approval is configured at the product level and can either require
administrator approval, or be auto-approved.
Groups
Groups are used to manage the visibility of products to developers. API Management has the following
immutable system groups:
●● Administrators - Azure subscription administrators are members of this group. Administrators
manage API Management service instances, creating the APIs, operations, and products that are used
by developers.
●● Developers - Authenticated developer portal users fall into this group. Developers are the customers
that build applications using your APIs. Developers are granted access to the developer portal and
build applications that call the operations of an API.
●● Guests - Unauthenticated developer portal users, such as prospective customers visiting the developer portal of an API Management instance, fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them.
In addition to these system groups, administrators can create custom groups or leverage external groups
in associated Azure Active Directory tenants.
Developers
Developers represent the user accounts in an API Management service instance. Developers can be created or invited to join by administrators, or they can sign up from the Developer portal. Each developer is a member of one or more groups, and can subscribe to the products that grant visibility to those groups.
Policies
Policies are a powerful capability of API Management that allow the publisher to change the behavior of the API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Popular statements include format conversion from XML to JSON and call rate limiting to restrict the number of incoming calls from a developer; many other policies are available.
Prerequisites
This demo is performed in the Cloud Shell.
●● Cloud Shell: be sure to select PowerShell as the shell.
Login to Azure
1. Choose to either:
●● Launch the Cloud Shell: https://shell.azure.com
●● Or, login to the Azure portal1 and open the cloud shell there.
2. Create an APIM instance. The name of your APIM instance needs to be unique. The first line in the
example below generates a unique name. You also need to supply an email address.
$myEmail = Read-Host -Prompt "Enter an email address: "
$myAPIMname="az204-apim-" + $(get-random -minimum 10000 -maximum 100000)
az apim create -n $myAPIMname -l $myLocation `
--publisher-email $myEmail `
-g az204-apim-rg `
--publisher-name AZ204-APIM-Demo `
--sku-name Consumption
✔️ Note: Azure will send a notification to the email address supplied above when the resource has been provisioned.
1 https://portal.azure.com
Prerequisites
This demo is performed in the Azure Portal.
2. On the API Management screen, select the API Management instance you created in the Create an
APIM instance by using Azure CLI demo.
You can set the API values during creation or later by going to the Settings tab. The red star next to a
field indicates that the field is required. Use the values from the table below to fill out the form.
Prerequisites
This demo is performed in the Azure Portal.
2. On the API Management screen, select the API Management instance you created in the Create an
APIM instance by using Azure CLI demo.
If there is an error during the processing of a request, any remaining steps in the inbound, backend, or
outbound sections are skipped and execution jumps to the statements in the on-error section. By placing policy statements in the on-error section you can review the error by using the context.LastError property, inspect and customize the error response using the set-body policy, and configure what happens if an error occurs.
Examples
<policies>
    <inbound>
        <cross-domain />
        <base />
        <find-and-replace from="xyz" to="abc" />
    </inbound>
</policies>
In the example policy definition above, the cross-domain statement would execute before any higher-scope policies (inherited via the base element), which would, in turn, be followed by the find-and-replace policy.
Additional resources
●● Policy reference - for a full list of policy statements and their settings visit:
●● https://docs.microsoft.com/azure/api-management/api-management-policies
●● Policy samples - for more code examples visit:
●● https://docs.microsoft.com/azure/api-management/policy-samples
Prerequisites
This demo is performed in the Azure Portal.
2. On the API Management screen, select the API Management instance you created in the Create an
APIM instance by using Azure CLI demo.
3. Select APIs in the navigation pane.
4. Select the Demo Conference API.
3. Press the Send button, at the bottom of the screen. The response includes the response headers
below:
x-aspnet-version: 4.0.30319
x-powered-by: ASP.NET
5. Select Save.
6. Inspect the Outbound process section and note there are two set-header policies listed.
Control flow
The choose policy applies enclosed policy statements based on the outcome of evaluation of Boolean
expressions, similar to an if-then-else or a switch construct in a programming language.
<choose>
    <when condition="Boolean expression | Boolean constant">
        <!-- one or more policy statements to be applied if the above condition is true -->
    </when>
    <when condition="Boolean expression | Boolean constant">
        <!-- one or more policy statements to be applied if the above condition is true -->
    </when>
    <otherwise>
        <!-- one or more policy statements to be applied if none of the above conditions are true -->
    </otherwise>
</choose>
The control flow policy must contain at least one <when/> element. The <otherwise/> element is
optional. Conditions in <when/> elements are evaluated in order of their appearance within the policy.
Policy statement(s) enclosed within the first <when/> element whose condition attribute is true will be applied. Policies enclosed within the <otherwise/> element, if present, will be applied if all of the
<when/> element condition attributes are false.
Forward request
The forward-request policy forwards the incoming request to the backend service specified in the
request context. The backend service URL is specified in the API settings and can be changed using the set-backend-service policy.
Removing this policy results in the request not being forwarded to the backend service and the policies in
the outbound section are evaluated immediately upon the successful completion of the policies in the
inbound section.
<forward-request timeout="time in seconds" follow-redirects="true | false"/>
Limit concurrency
The limit-concurrency policy prevents enclosed policies from executing by more than the specified
number of requests at any time. Upon exceeding that number, new requests will fail immediately with a
429 Too Many Requests status code.
<limit-concurrency key="expression" max-count="number">
    <!-- nested policy statements -->
</limit-concurrency>
<log-to-eventhub logger-id="id of the logger entity" partition-id="index of the partition where messages are sent" partition-key="value used for partition assignment">
    Expression returning a string to be logged
</log-to-eventhub>
Mock response
The mock-response policy, as the name implies, is used to mock APIs and operations. It aborts normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, whenever available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples nor schemas are found, responses with no content are returned.
<mock-response status-code="code" content-type="media type"/>
Retry
The retry policy executes its child policies once and then retries their execution until the retry condition becomes false or the retry count is exhausted.
<retry
    condition="boolean expression or literal"
    count="number of retry attempts"
    interval="retry interval in seconds"
    max-interval="maximum retry interval in seconds"
    delta="retry interval delta in seconds"
    first-fast-retry="boolean expression or literal">
    <!-- One or more child policies. No restrictions -->
</retry>
Return response
The return-response policy aborts pipeline execution and returns either a default or custom response
to the caller. The default response is 200 OK with no body. A custom response can be specified via a context variable or policy statements. When both are provided, the response contained within the context variable is modified by the policy statements before being returned to the caller.
<return-response response-variable-name="existing context variable">
    <set-header/>
    <set-body/>
    <set-status/>
</return-response>
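A minimal hedged sketch that returns a custom error; the status code, header, and body are illustrative assumptions:
<return-response>
    <set-status code="401" reason="Unauthorized"/>
    <set-header name="WWW-Authenticate" exists-action="override">
        <value>Bearer</value>
    </set-header>
    <set-body>{"error": "Access denied"}</set-body>
</return-response>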
Subscriptions in API Management can be scoped in several ways:
●● All APIs: Applies to every API accessible from the gateway.
●● Single API: Applies to a single imported API and all of its endpoints.
●● Product: A product is a collection of one or more APIs that you configure in API Management. You can assign APIs to more than one product. Products can have different access rules, usage quotas, and terms of use.
Applications that call a protected API must include the key in every request.
You can regenerate these subscription keys at any time, for example, if you suspect that a key has been
shared with unauthorized users.
Every subscription has two keys, a primary and a secondary. Having two keys makes it easier when you do
need to regenerate a key. For example, if you want to change the primary key and avoid downtime, use
the secondary key in your apps.
For products where subscriptions are enabled, clients must supply a key when making calls to APIs in that
product. Developers can obtain a key by submitting a subscription request. If you approve the request,
you must send them the subscription key securely, for example, in an encrypted message. This step is a
core part of the API Management workflow.
Here's how you can pass a key in the request header using curl:
curl --header "Ocp-Apim-Subscription-Key: <key string>" https://<apim gateway>.azure-api.net/api/path
Here's an example curl command that passes a key in the URL as a query string:
curl https://<apim gateway>.azure-api.net/api/path?subscription-key=<key string>
If the key is not passed in the header, or as a query string in the URL, you'll get a 401 Access Denied
response from the API gateway.
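As a hedged sketch, a .NET client might pass the key on each call like this; the gateway URL and key string are placeholders:
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ApimCaller
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Every request to the gateway must carry the subscription key header.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<key string>");
        var response = await client.GetAsync("https://<apim gateway>.azure-api.net/api/path");
        Console.WriteLine(response.StatusCode); // 401 if the key is missing or invalid
    }
}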
You can configure the gateway to check the following client certificate properties:
●● Certificate Authority (CA): Only allow certificates signed by a particular CA.
●● Thumbprint: Allow certificates containing a specified thumbprint.
●● Subject: Only allow certificates with a specified subject.
●● Expiration Date: Only allow certificates that have not expired.
These properties are not mutually exclusive and they can be mixed together to form your own policy
requirements. For instance, you can specify that the certificate passed in the request is signed by a certain
certificate authority and hasn't expired.
Client certificates are signed to ensure that they are not tampered with. When a partner sends you a
certificate, verify that it comes from them and not an imposter. There are two common ways to verify a
certificate:
●● Check who issued the certificate. If the issuer was a certificate authority that you trust, you can use the
certificate. You can configure the trusted certificate authorities in the Azure portal to automate this
process.
●● If the certificate is issued by the partner, verify that it came from them. For example, if they deliver the
certificate in person, you can be sure of its authenticity. These are known as self-signed certificates.
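A hedged sketch of a thumbprint check in the inbound policy section; the thumbprint value is a placeholder assumption:
<choose>
    <when condition="@(context.Request.Certificate == null || context.Request.Certificate.Thumbprint != &quot;EXPECTED-THUMBPRINT&quot;)">
        <return-response>
            <set-status code="403" reason="Invalid client certificate"/>
        </return-response>
    </when>
</choose>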
Lab scenario
The developers in your company have successfully adopted and used the https://httpbin.org/ website
to test various clients that issue HTTP requests. Your company would like to use one of the publicly
available containers on Docker Hub to host the httpbin web application in an enterprise-managed
environment with a few caveats. First, developers who are issuing Representational State Transfer (REST)
queries should receive standard headers that are used throughout the company's applications. Second,
developers should be able to get responses by using JavaScript Object Notation (JSON) even if the API
that's used behind the scenes doesn't support the data format. You're tasked with using Microsoft Azure API Management to create a proxy tier in front of the httpbin web application to implement your company's policies.
Objectives
After you complete this lab, you will be able to:
●● Create a web application from a Docker Hub container image.
●● Create an API Management account.
●● Configure an API as a proxy for another Azure service with header and payload manipulation.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher
to change the behavior of the API through configuration.
Which of the options below accurately reflects how policies are applied?
On inbound requests
On outbound responses
On the backend
All of the above
Review Question 3
A control flow policy applies policy statements based on the results of the evaluation of Boolean expressions.
True or False: A control flow policy must contain at least one "otherwise" element.
True
False
Review Question 4
The "return-response" policy aborts pipeline execution and returns either a default or custom response to
the caller.
What is the default response code?
100
400 OK
200 OK
None of the above
Answers
Review Question 1
API Management helps organizations publish APIs to external, partner, and internal developers to unlock
the potential of their data and services.
Which of the below options is used to set up policies like quotas?
API gateway
Developer portal
■■ Azure portal
Product definition
Explanation
The Azure portal is the administrative interface where you set up your API program.
Review Question 2
In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher
to change the behavior of the API through configuration.
Which of the options below accurately reflects how policies are applied?
On inbound requests
On outbound responses
On the backend
■■ All of the above
Explanation
A configuration can be divided into inbound, backend, outbound, and on-error sections. The series of specified policy statements is executed in order for a request and a response.
Review Question 3
A control flow policy applies policy statements based on the results of the evaluation of Boolean expressions.
True or False: A control flow policy must contain at least one "otherwise" element.
True
■■ False
Explanation
The control flow policy must contain at least one "when" element. The "otherwise" element is optional.
Review Question 4
The "return-response" policy aborts pipeline execution and returns either a default or custom response to
the caller.
What is the default response code?
100
400 OK
■■ 200 OK
None of the above
Explanation
The default response is "200 OK" with no body.
Module 9 Develop App Service Logic Apps
When new data or an event matches the trigger's criteria, the trigger fires and runs the workflow's actions. Here, these actions include XML transformation, data updates, decision branching, and email notifications.
You can build your logic apps visually with the Logic Apps Designer, available in the Azure portal through
your browser and in Visual Studio and Visual Studio Code. For more custom logic apps, you can create or
edit logic app definitions in JavaScript Object Notation (JSON) by working in “code view” mode. You can
also use Azure PowerShell commands and Azure Resource Manager templates for select tasks. Logic apps
deploy and run in the cloud on Azure.
Built-in triggers and actions that display the Core label run in the same ISE as your logic apps. Logic apps, built-in triggers, and built-in actions that run in your ISE use a pricing plan that differs from the consumption-based pricing plan.
●● Managed connectors: Deployed and managed by Microsoft, these connectors provide triggers and
actions for accessing cloud services, on-premises systems, or both, including Office 365, Azure Blob
Storage, SQL Server, Dynamics, Salesforce, SharePoint, and more. Some connectors specifically
support business-to-business (B2B) communication scenarios and require an integration account
that's linked to your logic app. Before using certain connectors, you might have to first create connections, which are managed by Azure Logic Apps.
You can also identify connectors by using these categories, although some connectors can cross multiple
categories. For example, SAP is an Enterprise connector and an on-premises connector:
●● Managed API connectors: Create logic apps that use services such as Azure Blob Storage, Office 365, Dynamics, Power BI, OneDrive, Salesforce, SharePoint Online, and many more.
●● On-premises connectors: After you install and set up the on-premises data gateway, these connectors help your logic apps access on-premises systems such as SQL Server, SharePoint Server, Oracle DB, file shares, and others.
●● Integration account connectors: Available when you create and pay for an integration account, these connectors transform and validate XML, encode and decode flat files, and process business-to-business (B2B) messages with AS2, EDIFACT, and X12 protocols.
●● Polling trigger: This trigger regularly checks a service endpoint for new data or an event. When the specific event happened or new data is available, the trigger creates and runs a new instance of your logic app, which can now use the data that's passed as input.
●● Push trigger: This trigger waits and listens for new data or for an event to happen. When new data is available or when the event happens, the trigger creates and runs a new instance of your logic app, which can now use the data that's passed as input.
Connector configuration
Each connector's triggers and actions provide their own properties for you to configure. Many connectors
also require that you first create a connection to the target service or system and provide authentication
credentials or other configuration details before you can use a trigger or action in your logic app. For
example, you must authorize a connection to a Twitter account for accessing data or to post on your
behalf.
Schedule triggers
You can start your logic app workflow by using the Recurrence trigger or Sliding Window trigger, which
isn't associated with any specific service or system. These triggers start and run your workflow based on
your specified recurrence where you select the interval and frequency. You can also set the start date and
time as well as the time zone. Each time that a trigger fires, Logic Apps creates and runs a new workflow
instance for your logic app.
Here are the differences between these triggers:
●● Recurrence: Runs your workflow at regular time intervals based on your specified schedule. If recurrences are missed, the Recurrence trigger doesn't process the missed recurrences but restarts recurrences with the next scheduled interval; see the sketch after this list.
●● Sliding Window: Runs your workflow at regular time intervals that handle data in continuous chunks. If recurrences are missed, the Sliding Window trigger goes back and processes the missed recurrences.
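As a hedged sketch, a Recurrence trigger in a logic app's JSON definition might look like the following; the frequency and interval values are illustrative assumptions:
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Minute",
            "interval": 15
        }
    }
}
Each time the trigger fires, Logic Apps creates and runs a new workflow instance, as described above.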
Schedule actions
After any action in your logic app workflow, you can use the Delay and Delay Until actions to make your
workflow wait before the next action runs.
●● Delay: Wait to run the next action for the specified number of time units, such as seconds, minutes,
hours, days, weeks, or months.
●● Delay until: Wait to run the next action until the specified date and time.
Prerequisites
This demo is performed in the Azure Portal.
●● To get a free account visit: https://azure.microsoft.com/free/
●● An email account from an email provider that's supported by Logic Apps, such as Office 365 Outlook,
Outlook.com, or Gmail. For other providers, review the connectors list here1. This quickstart uses an
Office 365 Outlook account. If you use a different email account, the general steps stay the same, but
your UI might slightly differ.
Log in to Azure
1. Log in to the Azure portal: https://portal.azure.com.
●● Name: az204-logicapp-demo
●● Subscription: <Azure-subscription-name>
●● Resource group: az204-logicapp-demo-rg
●● Location: <Azure-region>
●● Log Analytics: Off
1 https://docs.microsoft.com/connectors/
●● To: For testing purposes, you can use your email address.
●● Subject: Logic App Demo Update
●● Body: Select Feed published on from the Add dynamic content list.
7. Save your logic app.
Lab scenario
Your organization keeps a collection of JSON files that it uses to configure third-party products in a
Server Message Block (SMB) file share in Microsoft Azure. As part of a regular auditing practice, the
operations team would like to call a simple HTTP endpoint and retrieve the current list of configuration
files. You have decided to implement this functionality using a no-code solution based on Azure API
Management service and Logic Apps.
Objectives
After you complete this lab, you will be able to:
●● Create a Logic App workflow.
●● Manage products and APIs in a Logic App.
●● Use Azure API Management as a proxy for a Logic App.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
The following sentence describes a general type of trigger.
This trigger waits and listens for new data or for an event to happen.
What type of trigger is described above?
Polling trigger
Recurrence trigger
Push trigger
None of the above
Review Question 3
The following sentence describes a general category of connectors.
Create logic apps that use services such as Azure Blob Storage, Office 365, Dynamics, Power BI, OneDrive,
Salesforce, SharePoint Online, and many more.
Which general category is described above?
Integration account connectors
On-premises connectors
Managed API connectors
Azure AD connector
Answers
Review Question 1
Connectors provide quick access from Azure Logic Apps to events, data, and actions across other apps,
services, systems, protocols, and platforms.
Which of the following features do connectors provide?
■■ Trigger
Pulse
■■ Action
None of these
Explanation
Connectors can provide triggers, actions, or both. A trigger is the first step in any logic app, usually specifying the event that fires the trigger and starts running your logic app.
Review Question 2
The following sentence describes a general type of trigger.
This trigger waits and listens for new data or for an event to happen.
What type of trigger is described above?
Polling trigger
Recurrence trigger
■■ Push trigger
None of the above
Explanation
A push trigger waits and listens for new data or for an event to happen. When new data is available or when the event happens, the trigger creates and runs a new instance of your logic app, which can now use the data that's passed as input.
Review Question 3
The following sentence describes a general category of connectors.
Create logic apps that use services such as Azure Blob Storage, Office 365, Dynamics, Power BI, OneDrive,
Salesforce, SharePoint Online, and many more.
Which general category is described above?
Integration account connectors
On-premises connectors
■■ Managed API connectors
Azure AD connector
Explanation
Managed API connectors are geared toward logic apps that use services such as Azure Blob Storage, Office
365, Dynamics, Power BI, OneDrive, Salesforce, SharePoint Online, and many more.
Module 10 Develop event-based solutions
Events
An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information, such as the source of the event, the time the event took place, and a unique identifier. Every event also has specific information that is only relevant to the specific type of event. For example, an event about a new file being created in Azure Storage has details about the file,
such as the lastTimeModified value. Or, an Event Hubs event has the URL of the Capture file.
An event of size up to 64 KB is covered by General Availability (GA) Service Level Agreement (SLA). The
support for an event of size up to 1 MB is currently in preview. Events over 64 KB are charged in 64-KB
increments.
Event sources
An event source is where the event happens. Each event source is related to one or more event types. For
example, Azure Storage is the event source for blob created events. IoT Hub is the event source for
device created events. Your application is the event source for custom events that you define. Event
sources are responsible for sending events to Event Grid.
Topics
The event grid topic provides an endpoint where the source sends events. The publisher creates the
event grid topic, and decides whether an event source needs one topic or more than one topic. A topic is
used for a collection of related events. To respond to certain types of events, subscribers decide which
topics to subscribe to.
System topics are built-in topics provided by Azure services. You don't see system topics in your Azure
subscription because the publisher owns the topics, but you can subscribe to them. To subscribe, you
provide information about the resource you want to receive events from. As long as you have access to
the resource, you can subscribe to its events.
Custom topics are application and third-party topics. When you create or are assigned access to a custom
topic, you see that custom topic in your subscription.
Event subscriptions
A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the
subscription, you provide an endpoint for handling the event. You can filter the events that are sent to
the endpoint. You can filter by event type or subject pattern. You can also set an expiration for event subscriptions that are only needed for a limited time, so you don't have to worry about cleaning up those subscriptions.
Event handlers
From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes
some further action to process the event. Event Grid supports several handler types. You can use a
supported Azure service or your own webhook as the handler. Depending on the type of handler, Event
Grid follows different mechanisms to guarantee the delivery of the event. For HTTP webhook event
handlers, the event is retried until the handler returns a status code of 200 – OK. For Azure Storage
Queue, the events are retried until the Queue service successfully processes the message push into the
queue.
Event schema
The following example shows the properties that are used by all event publishers:
[
{
"topic": string,
"subject": string,
"id": string,
"eventType": string,
"eventTime": string,
"data":{
object-unique-to-each-publisher
},
"dataVersion": string,
"metadataVersion": string
}
]
Event properties
All events share the same top-level data, as shown in the schema above: topic, subject, id, eventType, eventTime, data, dataVersion, and metadataVersion.
Publishers typically structure the subject as a path so that subscribers can filter broadly or narrowly. For example, subscribers can filter by the prefix /A to receive a broad set of events. Those subscribers get events with subjects like /A/B/C or /A/D/E. Other subscribers can filter by /A/B to get a narrower set of events.
Sometimes your subject needs more detail about what happened. For example, the Storage Accounts publisher provides the subject /blobServices/default/containers/<container-name>/blobs/<file> when a file is added to a container. A subscriber could filter by the path /blobServices/default/containers/testcontainer to get all events for that container but not other containers in the storage account. A subscriber could also filter or route by the suffix .txt to only work with text files.
Event subscription
To subscribe to an event, you must prove that you have access to the event source and handler. Proving
that you own a WebHook was covered in the preceding section. If you're using an event handler that isn't
a WebHook (such as an event hub or queue storage), you need write access to that resource. This
permissions check prevents an unauthorized user from sending events to your resource.
You must have the Microsoft.EventGrid/EventSubscriptions/Write permission on the resource that is
the event source. You need this permission because you're writing a new subscription at the scope of the
resource. The required resource differs based on whether you're subscribing to a system topic or custom
topic.
Key authentication
Key authentication is the simplest form of authentication. Use the format: aeg-sas-key: <your key>
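As a hedged sketch, publishing events to a custom topic with key authentication might look like the following curl call; the endpoint path and payload are illustrative assumptions:
curl -X POST "https://<yourtopic>.<region>.eventgrid.azure.net/api/events" \
  -H "aeg-sas-key: <your key>" \
  -H "Content-Type: application/json" \
  -d '[{"id": "1", "eventType": "recordInserted", "subject": "myapp/items", "eventTime": "2019-12-02T22:23:03Z", "data": {}, "dataVersion": "1.0"}]'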
SAS tokens
SAS tokens for Event Grid include the resource, an expiration time, and a signature. The format of the SAS
token is: r={resource}&e={expiration}&s={signature}.
The resource is the path for the event grid topic to which you're sending events. For example, a valid
resource path is: https://<yourtopic>.<region>.eventgrid.azure.net/eventGrid/api/
events
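A hedged sketch of generating such a token in C# follows; the helper name is hypothetical, and HttpUtility assumes a reference to System.Web, but the output matches the r/e/s format described above:
using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;
using System.Web;

static string BuildSasToken(string resource, DateTime expirationUtc, string key)
{
    // Build r={resource}&e={expiration}, sign it with HMAC-SHA256, then append s={signature}.
    string encodedResource = HttpUtility.UrlEncode(resource);
    string encodedExpiration = HttpUtility.UrlEncode(expirationUtc.ToString(CultureInfo.InvariantCulture));
    string unsignedSas = $"r={encodedResource}&e={encodedExpiration}";
    using (var hmac = new HMACSHA256(Convert.FromBase64String(key)))
    {
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedSas)));
        return $"{unsignedSas}&s={HttpUtility.UrlEncode(signature)}";
    }
}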
Operation types
Event Grid supports the following actions:
●● Microsoft.EventGrid/*/read
●● Microsoft.EventGrid/*/write
●● Microsoft.EventGrid/*/delete
●● Microsoft.EventGrid/eventSubscriptions/getFullUrl/action
●● Microsoft.EventGrid/topics/listKeys/action
●● Microsoft.EventGrid/topics/regenerateKey/action
The last three operations return potentially secret information, which gets filtered out of normal read
operations. It's recommended that you restrict access to these operations.
Built-in roles
Event Grid provides two built-in roles for managing event subscriptions. They are important when
implementing event domains because they give users the permissions they need to subscribe to topics in
your event domain. These roles are focused on event subscriptions and don't grant access for actions
such as creating topics.
●● EventGrid EventSubscription Contributor: manage Event Grid subscription operations
●● EventGrid EventSubscription Reader: read Event Grid subscriptions
"Microsoft.Resources.ResourceWriteSuccess"
]
}
Subject filtering
For simple filtering by subject, specify a starting or ending value for the subject. For example, you can
specify the subject ends with .txt to only get events related to uploading a text file to storage account.
Or, you can filter the subject begins with /blobServices/default/containers/testcontainer
to get all events for that container but not other containers in the storage account.
The JSON syntax for filtering by subject is:
"filter": {
"subjectBeginsWith": "/blobServices/default/containers/mycontainer/log",
"subjectEndsWith": ".jpg"
}
Advanced filtering
To filter by values in the data fields and specify the comparison operator, use the advanced filtering
option. In advanced filtering, you specify the:
●● operator type - The type of comparison.
●● key - The field in the event data that you're using for filtering. It can be a number, boolean, or string.
●● value or values - The value or values to compare to the key.
The JSON syntax for using advanced filters is:
"filter": {
"advancedFilters": [
{
"operatorType": "NumberGreaterThanOrEquals",
"key": "Data.Key1",
"value": 5
},
{
"operatorType": "StringContains",
"key": "Subject",
"values": ["container1", "container2"]
}
]
}
Prerequisites
This demo is performed in the Cloud Shell.
Log in to Azure
1. Log in to the Azure portal: https://portal.azure.com and launch the Cloud Shell. Be sure to select Bash as the shell.
2. Create a resource group, replace <myRegion> with a location that makes sense for you.
myLocation=<myRegion>
az group create -n az204-egdemo-rg -l $myLocation
3. Register the Event Grid resource provider.
az provider register --namespace Microsoft.EventGrid
It can take a few minutes for the registration to complete. To check the status, run the command below.
az provider show --namespace Microsoft.EventGrid --query "registrationState"
rNum=$RANDOM
mySiteName="az204-egsite-${rNum}"
mySiteURL="https://${mySiteName}.azurewebsites.net"
az group deployment create \
    -g az204-egdemo-rg \
    --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
    --parameters siteName=$mySiteName hostingPlanName=viewerhost
echo "Your web app URL: ${mySiteURL}"
2. Navigate to the URL generated at the end of the script above to ensure the web app is running. You
should see the site with no messages currently displayed.
✔️ Note: Leave the browser running, it is used to show updates.
2. View your web app again, and notice that a subscription validation event has been sent to it. Select
the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can
verify that it wants to receive event data. The web app includes code to validate the subscription.
2. Create event data to send. Typically, an application or Azure service would send the event data, we're
creating data for the purposes of the demo.
event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/motorcycles", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "make": "Contoso", "model": "Northwind"}} ]'
4. View your web app to see the event you just sent. Select the eye icon to expand the event data.
{
"id": "29078",
"eventType": "recordInserted",
"subject": "myapp/vehicles/motorcycles",
"eventTime": "2019-12-02T22:23:03+00:00",
"data": {
"make": "Contoso",
"model": "Northwind"
},
"dataVersion": "1.0",
"metadataVersion": "1",
"topic": "/subscriptions/{subscription-id}/resourceGroups/az204-egdemo-rg/providers/Microsoft.
EventGrid/topics/az204-egtopic-589377852"
}
Azure Event Hubs offers the following key features:
●● Fully managed PaaS: Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. Event Hubs for Apache Kafka ecosystems gives you the PaaS Kafka experience without having to manage, configure, or run your clusters.
●● Real-time and batch processing: Event Hubs uses a partitioned consumer model, enabling multiple applications to process the stream concurrently and letting you control the speed of processing.
●● Scalable: Scaling options, like Auto-inflate, scale the number of throughput units to meet your usage needs.
●● Rich ecosystem: Event Hubs for Apache Kafka ecosystems enables Apache Kafka (1.0 and later) clients and applications to talk to Event Hubs. You do not need to set up, configure, and manage your own Kafka clusters.
●● Event receivers: Any entity that reads event data from an event hub. All Event Hubs consumers
connect via the AMQP 1.0 session. The Event Hubs service delivers events through a session as they
become available. All Kafka consumers connect via the Kafka protocol 1.0 and later.
Capture windowing
Event Hubs Capture enables you to set up a window to control capturing. This window is a minimum size
and time configuration with a “first wins policy,” meaning that the first trigger encountered causes a
capture operation. Each partition captures independently and writes a completed block blob at the time
of capture, named for the time at which the capture interval was encountered. The storage naming
convention is as follows:
{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/
{Second}
Note that the date values are padded with zeroes; an example filename might be:
https://mystorageaccount.blob.core.windows.net/mycontainer/mynamespace/
myeventhub/0/2017/12/08/03/03/17.avro
Once configured, Event Hubs Capture runs automatically when you send your first event, and continues
running. To make it easier for your downstream processing to know that the process is working, Event
Hubs writes empty files when there is no data. This process provides a predictable cadence and marker
that can feed your batch processors.
✔️ Note: All Event Hubs clusters are Kafka-enabled by default and support Kafka endpoints that can be
used by your existing Kafka based applications. Having Kafka enabled on your cluster does not affect
your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
Client authentication
The Event Hubs security model is based on a combination of Shared Access Signature (SAS) tokens and
event publishers. An event publisher defines a virtual endpoint for an event hub. The publisher can only
be used to send messages to an event hub. It is not possible to receive messages from a publisher.
Typically, an event hub employs one publisher per client. All messages that are sent to any of the publishers of an event hub are enqueued within that event hub. Publishers enable fine-grained access control and throttling.
Each Event Hubs client is assigned a unique token, which is uploaded to the client. The tokens are
produced such that each unique token grants access to a different unique publisher. A client that pos-
sesses a token can only send to one publisher, but no other publisher. If multiple clients share the same
token, then each of them shares a publisher.
All tokens are signed with a SAS key. Typically, all tokens are signed with the same key. Clients are not
aware of the key; this prevents other clients from manufacturing tokens.
It is recommended that you produce a key that grants send permissions to the specific event hub. For the remainder of this topic, it is assumed that you named this key EventHubSendKey.
The following example creates a send-only key when creating the event hub:
// Create the namespace manager.
string serviceNamespace = "YOUR_NAMESPACE";
string namespaceManageKeyName = "RootManageSharedAccessKey";
string namespaceManageKey = "YOUR_ROOT_MANAGE_SHARED_ACCESS_KEY";
Uri namespaceUri = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty);
TokenProvider namespaceManageTokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(namespaceManageKeyName, namespaceManageKey);
NamespaceManager nm = new NamespaceManager(namespaceUri, namespaceManageTokenProvider);

// Create an event hub with a SAS rule that enables sending to that event hub.
EventHubDescription ed = new EventHubDescription("MY_EVENT_HUB") { PartitionCount = 32 };
string eventHubSendKeyName = "EventHubSendKey";
string eventHubSendKey = SharedAccessAuthorizationRule.GenerateRandomKey();
SharedAccessAuthorizationRule eventHubSendRule = new SharedAccessAuthorizationRule(eventHubSendKeyName, eventHubSendKey, new[] { AccessRights.Send });
ed.Authorization.Add(eventHubSendRule);
nm.CreateEventHub(ed);
Generate tokens
You can generate tokens using the SAS key. You must produce only one token per client. Tokens can then be produced using the following method. All tokens are generated using the EventHubSendKey key. The resource parameter corresponds to the URI endpoint of the service (the event hub, in this case).
public static string SharedAccessSignatureTokenProvider.GetSharedAccessSignature(string keyName, string sharedAccessKey, string resource, TimeSpan tokenTimeToLive)
The token expiration time is specified in seconds from Jan 1, 1970. Typically, the tokens have a lifespan
that resembles or exceeds the lifespan of the client. If the client has the capability to obtain a new token,
tokens with a shorter lifespan can be used.
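As a hedged sketch, producing a token scoped to a single publisher might look like this; the publisher name, URI scheme, and lifetime are illustrative assumptions:
// One token per client; the resource scopes the token to a single publisher.
string resource = "sb://YOUR_NAMESPACE.servicebus.windows.net/MY_EVENT_HUB/publishers/CLIENT_ID";
string token = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
    "EventHubSendKey", eventHubSendKey, resource, TimeSpan.FromDays(365));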
Sending data
Once the tokens have been created, each client is provisioned with its own unique token.
When the client sends data into an event hub, it tags its send request with the token. To prevent an
attacker from eavesdropping and stealing the token, the communication between the client and the
event hub must occur over an encrypted channel.
Blacklisting clients
If a token is stolen by an attacker, the attacker can impersonate the client whose token has been stolen.
Blacklisting a client renders that client unusable until it receives a new token that uses a different publish-
er.
Event publishers
You send events to an event hub either using HTTP POST or via an AMQP 1.0 connection. The choice of
which to use and when depends on the specific scenario being addressed. AMQP 1.0 connections are
metered as brokered connections in Service Bus and are more appropriate in scenarios with frequent higher message volumes and lower latency requirements, as they provide a persistent messaging channel.
When using the .NET managed APIs, the primary constructs for publishing data to Event Hubs are the
EventHubClient and EventData classes. EventHubClient provides the AMQP communication
channel over which events are sent to the event hub. The EventData class represents an event, and is
used to publish messages to an event hub. This class includes the body, some metadata, and header
information about the event. Other properties are added to the EventData object as it passes through
an event hub.
The .NET classes that support Event Hubs are provided in the Microsoft.Azure.EventHubs NuGet
package.
var connectionStringBuilder = new EventHubsConnectionStringBuilder(EventHubConnectionString)
{
    EntityPath = EventHubName
};
eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
Event serialization
The EventData class has two overloaded constructors that take a variety of parameters, bytes or a byte array, that represent the event data payload. When using JSON with EventData, you can use Encoding.UTF8.GetBytes() to retrieve the byte array for a JSON-encoded string. For example:
for (var i = 0; i < numMessagesToSend; i++)
{
var message = $"Message {i}";
Console.WriteLine($"Sending message: {message}");
await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
}
Partition key
When sending event data, you can specify a value that is hashed to produce a partition assignment. You
specify the partition using the PartitionSender.PartitionID property. However, the decision to
use partitions implies a choice between availability and consistency.
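As a hedged sketch, supplying a partition key when sending routes all events with that key to the same partition; the key value is an illustrative assumption:
// Events sharing a partition key arrive on the same partition, preserving their order.
await eventHubClient.SendAsync(
    new EventData(Encoding.UTF8.GetBytes("telemetry")), partitionKey: "device-42");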
Availability considerations
Using a partition key is optional, and you should consider carefully whether or not to use one. If you
don't specify a partition key when publishing an event, a round-robin assignment is used. In many cases,
using a partition key is a good choice if event ordering is important. When you use a partition key, these
partitions require availability on a single node, and outages can occur over time.
Another consideration is handling delays in processing events. In some cases, it might be better to drop
data and retry than to try to keep up with processing, which can potentially cause further downstream
processing delays.
Given these availability considerations, in these scenarios you might choose one of the following error
handling strategies:
●● Stop (stop reading from Event Hubs until things are fixed)
●● Drop (messages aren’t important, drop them)
●● Retry (retry the messages as you see fit)
Event consumers
The EventProcessorHost class processes data from Event Hubs. You should use this implementation
when building event readers on the .NET platform. EventProcessorHost provides a thread-safe,
multi-process, safe runtime environment for event processor implementations that also provides check-
pointing and partition lease management.
To use the EventProcessorHost class, you can implement IEventProcessor. This interface contains
four methods:
●● OpenAsync
●● CloseAsync
●● ProcessEventsAsync
●● ProcessErrorAsync
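A hedged skeleton of an implementation follows; the class name and processing logic are illustrative assumptions, using the Microsoft.Azure.EventHubs.Processor types named above:
class SimpleEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) => Task.CompletedTask;

    public Task CloseAsync(PartitionContext context, CloseReason reason) => Task.CompletedTask;

    public Task ProcessErrorAsync(PartitionContext context, Exception error) => Task.CompletedTask;

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            // The payload is an opaque byte block; this sketch assumes UTF-8 text.
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Received: {data}");
        }
        // Checkpoint so processing resumes from this position after a restart.
        await context.CheckpointAsync();
    }
}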
To start event processing, instantiate EventProcessorHost, providing the appropriate parameters for
your event hub. For example:
var eventProcessorHost = new EventProcessorHost(
EventHubName,
PartitionReceiver.DefaultConsumerGroupName,
EventHubConnectionString,
StorageConnectionString,
StorageContainerName);
At this point, the host attempts to acquire a lease on every partition in the event hub using a “greedy”
algorithm. These leases last for a given timeframe and must then be renewed. As new nodes, worker
instances in this case, come online, they place lease reservations and over time the load shifts between
nodes as each attempts to acquire more leases.
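Registering a processor with the host starts this lease acquisition; a hedged sketch, assuming the SimpleEventProcessor skeleton above:
// Start processing; the host creates one processor instance per acquired partition.
await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>();

// Later, stop processing and release the partition leases.
await eventProcessorHost.UnregisterEventProcessorAsync();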
Publisher revocation
In addition to the advanced run-time features of EventProcessorHost, Event Hubs enables publisher revocation in order to block specific publishers from sending events to an event hub. This feature is useful if a publisher token has been compromised, or if a software update is causing a publisher to behave inappropriately. In these situations, the publisher's identity, which is part of its SAS token, can be blocked from publishing events.
Security claims
Similar to other entities, Notification Hub operations are allowed for three security claims:
●● Listen: Create/Update, Read, and Delete single registrations
●● Send: Send messages to the Notification Hub
●● Manage: CRUDs on Notification Hubs (including updating PNS credentials, and security keys), and
read registrations based on tags
Notification Hubs accepts claims in SAS tokens generated with shared keys configured directly on the hub.
It is not possible to send a notification to more than one namespace. Namespaces are logical containers
for Notification Hubs and are not involved in sending notifications.
Registration management
This topic describes registrations at a high level, then introduces the two main patterns for registering devices: registering from the device directly to the notification hub, and registering through an application backend.
Device registration
Device registration with a Notification Hub is accomplished using a Registration or Installation.
Registrations
A registration associates the Platform Notification Service (PNS) handle for a device with tags and
possibly a template. The PNS handle could be a ChannelURI, device token, or FCM registration id. Tags
are used to route notifications to the correct set of device handles. For more information, see Routing
and Tag Expressions. Templates are used to implement per-registration transformation.
Note: Azure Notification Hubs supports a maximum of 60 tags per registration.
Installations
An Installation is an enhanced registration that includes a bag of push-related properties. It is the latest and best approach to registering your devices. However, it is not yet supported by the client-side .NET SDK. This means that if you are registering from the client device itself, you have to use the Notification Hubs REST API approach to support installations. If you are using a backend service, you can use the Notification Hub SDK for backend operations.
The following are some key advantages to using installations:
●● Creating or updating an installation is fully idempotent. So you can retry it without any concerns
about duplicate registrations.
●● The installation model supports a special tag format ($InstallationId:{INSTALLATION_ID})
that enables sending a notification directly to the specific device.
●● Using installations also enables you to do partial registration updates. The partial update of an installation is requested with a PATCH method using the JSON-Patch standard, as sketched after this list.
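A hedged sketch of such a partial update; the tag value and handle are illustrative assumptions:
[
    { "op": "add", "path": "/tags", "value": "color_purple" },
    { "op": "replace", "path": "/pushChannel", "value": "<new PNS handle>" }
]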
Registrations and installations must contain a valid PNS handle for each device/channel. Because PNS
handles can only be obtained in a client app on the device, one pattern is to register directly on that
device with the client app. On the other hand, security considerations and business logic related to tags
might require you to manage device registration in the app back-end.
Templates
If you want to use Templates, the device installation also holds all templates associated with that device in
a JSON format. The template names help target different templates for the same device.
Each template name maps to a template body and an optional set of tags. Moreover, each platform can
have additional template properties. For Windows Store (using WNS) and Windows Phone 8 (using
MPNS), an additional set of headers can be part of the template. In the case of APNs, you can set an
expiry property to either a constant or to a template expression.
The device first retrieves the PNS handle from the PNS, then registers with the notification hub directly.
After the registration is successful, the app backend can send a notification targeting that registration.
We'll provide more information about how to send notifications in the next topic in the lesson.
In this case, you use only Listen rights to access your notification hubs from the device.
To send a similar toast message on a Windows Store application, the XML payload is as follows:
<toast>
    <visual>
        <binding template="ToastText01">
            <text id="1">Hello!</text>
        </binding>
    </visual>
</toast>
You can create similar payloads for MPNS (Windows Phone) and FCM (Android) platforms.
This requirement forces the app backend to produce different payloads for each platform, and effectively
makes the backend responsible for part of the presentation layer of the app. Some concerns include
localization and graphical layouts (especially for Windows Store apps that include notifications for various
types of tiles).
The Notification Hubs template feature enables a client app to create special registrations, called template registrations, which include, in addition to the set of tags, a template. The feature lets a client app associate devices with templates whether you are working with Installations (preferred) or Registrations. Given the preceding payload examples, the only platform-independent information is the actual alert message (Hello!). A template is a set of instructions for the Notification Hub on how to format a platform-independent message for the registration of that specific client app. In the preceding example, the platform-independent message is a single property: message = Hello!.
The template for the iOS client app registration is as follows:
{"aps": {"alert": "$(message)"}}
The corresponding template for the Windows Store client app is:
<toast>
    <visual>
        <binding template="ToastText01">
            <text id="1">$(message)</text>
        </binding>
    </visual>
</toast>
Notice that the actual message is substituted for the expression $(message). This expression instructs the Notification Hub, whenever it sends a message to this particular registration, to build a message that follows the template and substitutes in the common value.
If you are working with the Installation model, the installation "templates" key holds a JSON of multiple templates. If you are working with the Registration model, the client application can create multiple registrations in order to use multiple templates; for example, a template for alert messages and a template for tile updates. Client applications can also mix native registrations (registrations with no template) and template registrations.
The Notification Hub sends one notification for each template without considering whether they belong
to the same client app. This behavior can be used to translate platform-independent notifications into
more notifications. For example, the same platform-independent message to the Notification Hub can be
seamlessly translated in a toast alert and a tile update, without requiring the backend to be aware of it.
Some platforms (for example, iOS) might collapse multiple notifications to the same device if they are
sent in a short period of time.
Template expressions support the following forms:
●● $(prop): Reference to an event property with the given name. Property names are not case-sensitive. This expression resolves into the property's text value or into an empty string if the property is not present.
●● $(prop, n): As above, but the text is explicitly clipped at n characters; for example, $(title, 20) clips the contents of the title property at 20 characters.
●● .(prop, n): As above, but the clipped text is suffixed with three dots. The total size of the clipped string and the suffix does not exceed n characters. .(title, 20) with an input property of "This is the title line" results in This is the title...
●● %(prop): Similar to $(name) except that the output is URI-encoded.
●● #(prop): Used in JSON templates (for example, for iOS and Android templates). For example, 'badge': '#(name)' becomes 'badge': 40 (and not '40').
●● 'text' or "text": A literal. Literals contain arbitrary text enclosed in single or double quotes.
●● expr1 + expr2: The concatenation operator joining two expressions into a single string. The expressions can be any of the preceding forms.
When using concatenation, the entire expression must be surrounded with curly brackets. For example: {$(prop) + ' - ' + $(prop2)}.
For example, the following template is not a valid XML template:
<tile>
<visual>
<binding $(property)>
<text id="1">Seattle, WA</text>
</binding>
</visual>
</tile>
As explained earlier, when using concatenation, expressions must be wrapped in curly brackets. For
example:
<tile>
<visual>
<binding template="ToastText01">
<text id="1">{'Hi, ' + $(name)}</text>
</binding>
</visual>
</tile>
Next
●● In the next topic we'll be covering routing and tag expressions.
Tags
A tag can be any string of up to 120 characters, containing alphanumeric characters and the following non-alphanumeric characters: '_', '@', '#', '.', ':', '-'. The following example shows an application from which you can receive toast notifications about specific music groups. In this scenario, a simple way to route notifications is to label registrations with tags that represent the different artists.
You can send notifications to tags using the send notifications methods of the Microsoft.Azure.NotificationHubs.NotificationHubClient class in the Microsoft Azure Notification Hubs SDK. You can also use Node.js or the Push Notifications REST APIs.
Tags do not have to be pre-provisioned and can refer to multiple app-specific concepts. For example,
users of this example application can comment on bands and want to receive toasts, not only for the
comments on their favorite bands, but also for all comments from their friends, regardless of the band on
which they are commenting.
Tag expressions
There are cases in which a notification has to target a set of registrations that is identified not by a single
tag, but by a Boolean expression on tags.
Consider a sports application that sends a reminder to everyone in Anytown about a game between the
HomeTeam and VisitingTeam. If the client app registers tags about interest in teams and location, then
the notification should be targeted to everyone in Anytown who is interested in either the HomeTeam or
the VisitingTeam. This condition can be expressed with the following Boolean expression:
(follows_HomeTeam || follows_VisitingTeam) && location_Anytown
Tag expressions can contain all Boolean operators, such as AND (&&), OR (||), and NOT (!). They can also
contain parentheses. Tag expressions are limited to 20 tags if they contain only ORs; otherwise they are
limited to 6 tags.
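As a hedged sketch, a backend could send with that tag expression as follows; the connection string, hub name, and payload are illustrative assumptions:
using Microsoft.Azure.NotificationHubs;

var hub = NotificationHubClient.CreateClientFromConnectionString("<connection string>", "<hub name>");
string toast = "<toast><visual><binding template=\"ToastText01\"><text id=\"1\">Game reminder!</text></binding></visual></toast>";
// Only devices whose registrations match the Boolean tag expression receive the toast.
await hub.SendWindowsNativeNotificationAsync(toast, "(follows_HomeTeam || follows_VisitingTeam) && location_Anytown");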
Lab scenario
Your company builds a human resources (HR) system used by various customers around the world. While the system works fine today, your development managers have decided to begin re-architecting the solution by decoupling application components. This decision was driven by a desire to make any future development simpler through modularity. As the developer who manages component communication, you have decided to introduce Microsoft Azure Event Grid as your solution-wide messaging platform.
Objectives
After you complete this lab, you will be able to:
●● Create an Event Grid topic.
●● Use the Azure Event Grid viewer to subscribe to a topic and illustrate published messages.
●● Publish a message from a Microsoft .NET application.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 2
Azure Event Grid has three types of authentication. Select the valid types of authentication from the list
below.
Custom topic publishing
Event Grid login
Event subscription
WebHook event delivery
Review Question 3
Azure Event Hubs is a big data streaming platform and event ingestion service. What type of service is it?
IaaS
SaaS
PaaS
Review Question 4
In .NET programming which of the below is the primary class for interacting with Event Hubs?
Microsoft.Azure.EventHubs.EventHubClient
Microsoft.Azure.EventHubs
Microsoft.Azure.Events
Microsoft.Azure.EventHubClient
Review Question 5
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send
notifications to any platform.
In the list below, which two methods accomplish a device registration?
Registration
Service authentication
Installation
Broadcast
Answers
Review Question 1
Regarding Azure Event Grid, which of the below represents the smallest amount of information that fully
describes something happening in the system?
Event subscription
Topics
Event handlers
■■ Events
Explanation
An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information, such as the source of the event, the time the event took place, and a unique identifier.
Review Question 2
Azure Event Grid has three types of authentication. Select the valid types of authentication from the list
below.
■■ Custom topic publishing
Event Grid login
■■ Event subscription
■■ WebHook event delivery
Explanation
Azure Event Grid has three types of authentication: custom topic publishing, event subscription, and WebHook event delivery.
Review Question 3
Azure Event Hubs is a big data streaming platform and event ingestion service. What type of service is it?
IaaS
SaaS
■■ PaaS
Explanation
Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management
overhead.
Review Question 4
In .NET programming which of the below is the primary class for interacting with Event Hubs?
■■ Microsoft.Azure.EventHubs.EventHubClient
Microsoft.Azure.EventHubs
Microsoft.Azure.Events
Microsoft.Azure.EventHubClient
Explanation
The primary class for interacting with Event Hubs is Microsoft.Azure.EventHubs.EventHubClient. You can
instantiate this class using the CreateFromConnectionString method.
Review Question 5
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send
notifications to any platform.
In the list below, which two methods accomplish a device registration?
■■ Registration
Service authentication
■■ Installation
Broadcast
Explanation
Device registration with a Notification Hub is accomplished using a Registration or Installation.
Module 11 Develop message-based solutions
Namespaces
A namespace is a container for all messaging components. Multiple queues and topics can be in a single
namespace, and namespaces often serve as application containers.
Queues
Messages are sent to and received from queues. Queues store messages until the receiving application is
available to receive and process them.
Messages in queues are ordered and timestamped on arrival. Once accepted, the message is held safely
in redundant storage. Messages are delivered in pull mode, only delivering messages when requested.
Topics
You can also use topics to send and receive messages. While a queue is often used for point-to-point
communication, topics are useful in publish/subscribe scenarios.
Topics can have multiple, independent subscriptions. A subscriber to a topic can receive a copy of each
message sent to that topic. Subscriptions are named entities. Subscriptions persist, but can expire or
autodelete.
You may not want individual subscriptions to receive all messages sent to a topic. If so, you can use rules
and filters to define conditions that trigger optional actions. You can filter specified messages and set or
modify message properties.
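As a hedged sketch using the Microsoft.Azure.ServiceBus SDK, a subscriber could install a SQL filter rule like this; the connection string, entity names, and filter are placeholder assumptions:
using Microsoft.Azure.ServiceBus;

var subscriptionClient = new SubscriptionClient("<connection string>", "<topic>", "<subscription>");
// Drop the default catch-all rule, then accept only high-priority messages.
await subscriptionClient.RemoveRuleAsync(RuleDescription.DefaultRuleName);
await subscriptionClient.AddRuleAsync(new RuleDescription("HighPriority", new SqlFilter("priority > 3")));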
Advanced features
Service Bus includes advanced features that enable you to solve more complex messaging problems. The following list describes several of these features.
●● Message sessions: To create a first-in, first-out (FIFO) guarantee in Service Bus, use sessions. Message sessions enable joint and ordered handling of unbounded sequences of related messages.
●● Autoforwarding: The autoforwarding feature chains a queue or subscription to another queue or topic that is in the same namespace.
●● Dead-letter queue: Service Bus supports a dead-letter queue (DLQ). A DLQ holds messages that can't be delivered to any receiver. Service Bus lets you remove messages from the DLQ and inspect them.
●● Scheduled delivery: You can submit messages to a queue or topic for delayed processing. You can schedule a job to become available for processing by a system at a certain time.
●● Message deferral: A queue or subscription client can defer retrieval of a message until a later time. The message remains in the queue or subscription, but it's set aside.
●● Batching: Client-side batching enables a queue or topic client to delay sending a message for a certain period of time.
●● Transactions: A transaction groups two or more operations together into an execution scope. Service Bus supports grouping operations against a single messaging entity within the scope of a single transaction. A message entity can be a queue, topic, or subscription.
●● Filtering and actions: Subscribers can define which messages they want to receive from a topic. These messages are specified in the form of one or more named subscription rules.
●● Autodelete on idle: Autodelete on idle enables you to specify an idle interval after which a queue is automatically deleted. The minimum duration is 5 minutes.
●● Duplicate detection: An error could cause the client to have a doubt about the outcome of a send operation. Duplicate detection enables the sender to resend the same message, or for the queue or topic to discard any duplicate copies.
●● Security protocols: Service Bus supports security protocols such as Shared Access Signatures (SAS), Role Based Access Control (RBAC), and Managed identities for Azure resources.
●● Geo-disaster recovery: When Azure regions or datacenters experience downtime, Geo-disaster recovery enables data processing to continue operating in a different region or datacenter.
●● Security: Service Bus supports standard AMQP 1.0 and HTTP/REST protocols.
Integration
Service Bus fully integrates with the following Azure services:
●● Event Grid
●● Logic Apps
●● Azure Functions
●● Dynamics 365
●● Azure Stream Analytics
Queues
Queues offer First In, First Out (FIFO) message delivery to one or more competing consumers. That is,
receivers typically receive and process messages in the order in which they were added to the queue, and
only one message consumer receives and processes each message. A key benefit of using queues is to
achieve “temporal decoupling” of application components.
A related benefit is "load leveling," which enables producers and consumers to send and receive messages at different rates. In many applications, the system load varies over time; however, the processing time required for each unit of work is typically constant. Intermediating message producers and consumers with a queue means that the consuming application only has to be provisioned to handle average load instead of peak load.
Using queues to intermediate between message producers and consumers provides an inherent loose
coupling between the components. Because producers and consumers are not aware of each other, a
consumer can be upgraded without having any effect on the producer.
Create queues
You create queues using the Azure portal, PowerShell, CLI, or Resource Manager templates. You then
send and receive messages using a QueueClient object.
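For example, with the Azure CLI a queue can be created in an existing namespace as follows; the resource group and namespace names are illustrative:
az servicebus queue create \
    --resource-group az204-svcbusdemo-rg \
    --namespace-name <your-namespace> \
    --name az204-queue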
Receive modes
You can specify two different modes in which Service Bus receives messages: ReceiveAndDelete or
PeekLock.
In the ReceiveAndDelete mode, the receive operation is single-shot; that is, when Service Bus receives
the request, it marks the message as being consumed and returns it to the application. ReceiveAndDelete
mode is the simplest model and works best for scenarios in which the application can tolerate not
processing a message if a failure occurs.
In PeekLock mode, the receive operation becomes two-stage, which makes it possible to support
applications that cannot tolerate missing messages. When Service Bus receives the request, it finds the
next message to be consumed, locks it to prevent other consumers from receiving it, and then returns it
to the application. After the application finishes processing the message (or stores it reliably for future
processing), it completes the second stage of the receive process by calling CompleteAsync on the
received message. When Service Bus sees the CompleteAsync call, it marks the message as being
consumed.
If the application is unable to process the message for some reason, it can call the AbandonAsync
method on the received message (instead of CompleteAsync). This method enables Service Bus to
unlock the message and make it available to be received again, either by the same consumer or by
another competing consumer. Secondly, there is a timeout associated with the lock and if the application
fails to process the message before the lock timeout expires (for example, if the application crashes), then
Service Bus unlocks the message and makes it available to be received again (essentially performing an
AbandonAsync operation by default).
In the event that the application crashes after processing the message, but before the CompleteAsync
request is issued, the message is redelivered to the application when it restarts. This process is often
called At Least Once processing; that is, each message is processed at least once. However, in certain
situations the same message may be redelivered. If the scenario cannot tolerate duplicate processing,
then additional logic is required in the application to detect duplicates, which can be achieved based
upon the MessageId property of the message, which remains constant across delivery attempts. This
feature is known as Exactly Once processing.
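As a sketch of the two-stage PeekLock flow described above, the following uses the MessageReceiver class from the Microsoft.Azure.ServiceBus package; the connection string and queue name are assumed from the surrounding demo:
// MessageReceiver lives in the Microsoft.Azure.ServiceBus.Core namespace.
var receiver = new MessageReceiver(ServiceBusConnectionString, QueueName, ReceiveMode.PeekLock);

Message message = await receiver.ReceiveAsync();
try
{
    Console.WriteLine(Encoding.UTF8.GetString(message.Body));

    // Second stage of the receive operation: mark the message as consumed.
    await receiver.CompleteAsync(message.SystemProperties.LockToken);
}
catch
{
    // Unlock the message so it can be received again by a competing consumer.
    await receiver.AbandonAsync(message.SystemProperties.LockToken);
}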
Messages are sent to the topic. Messages are received from a subscription identically to the way they are received from a queue.
Payload serialization
When in transit or stored inside of Service Bus, the payload is always an opaque, binary block. The
ContentType property enables applications to describe the payload, with the suggested format for the
property values being a MIME content-type description according to IETF RFC2045; for example, appli-
cation/json;charset=utf-8.
Unlike the Java or .NET Standard variants, the .NET Framework version of the Service Bus API supports creating BrokeredMessage instances by passing arbitrary .NET objects into the constructor.
When using the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. When using the AMQP protocol, the object is serialized into an AMQP object. The receiver can retrieve those objects with the GetBody() method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of ArrayList and IDictionary<string,object> objects, and any AMQP client can decode them.
While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. It should also be noted that while AMQP has a powerful binary encoding model, it is tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
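As an example of taking explicit control, a sender might serialize an object graph to JSON itself and describe the payload through ContentType. This is a sketch; the order object is illustrative and the Newtonsoft.Json package is assumed to be referenced:
using System.Text;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

// Serialize explicitly so any AMQP or HTTP client can decode the payload.
var order = new { Id = 42, Item = "Widget" };
byte[] body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(order));

var message = new Message(body)
{
    ContentType = "application/json;charset=utf-8"
};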
Prerequisites
This demo is performed in the Cloud Shell, and in Visual Studio Code. The code examples below rely on
the Microsoft.Azure.ServiceBus NuGet package.
Login to Azure
1. Login in to the Azure Portal: https://portal.azure.com and launch the Cloud Shell. Be sure to select
Bash as the shell.
2. Create a resource group, replace <myRegion> with a location that makes sense for you. Copy the first
line by itself and edit the value.
myLocation=<myRegion>
myResourceGroup="az204-svcbusdemo-rg"
az group create -n $myResourceGroup -l $myLocation
3. Create a Service Bus namespace. The namespace name needs to be globally unique; the script below will generate one for you.
namespaceName=az204svcbus$RANDOM
az servicebus namespace create -g $myResourceGroup -n $namespaceName -l $myLocation
4. Create a queue in the namespace.
az servicebus queue create -g $myResourceGroup --namespace-name $namespaceName -n az204-queue
5. Retrieve the connection string for the namespace.
connectionString=$(az servicebus namespace authorization-rule keys list \
    --resource-group $myResourceGroup \
    --namespace-name $namespaceName \
    --name RootManageSharedAccessKey \
    --query primaryConnectionString --output tsv)
echo $connectionString
After the last command runs, copy and paste the connection string to a temporary location such as
Notepad. You will need it in the next step.
Create a console application to send messages to the queue
3. Within the Program class, declare the following variables. Set the ServiceBusConnectionString variable to the connection string that you obtained when creating the namespace:
const string ServiceBusConnectionString = "<your_connection_string>";
const string QueueName = "az204-queue";
static IQueueClient queueClient;
4. Replace the default contents of Main() with the following code:
public static async Task Main(string[] args)
{
    const int numberOfMessages = 10;
    queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

    Console.WriteLine("======================================================");
    Console.WriteLine("Press ENTER key to exit after sending all the messages.");
    Console.WriteLine("======================================================");

    // Send messages.
    await SendMessagesAsync(numberOfMessages);

    Console.ReadKey();

    await queueClient.CloseAsync();
}
5. Directly after Main(), add the following SendMessagesAsync() method that performs the work of sending the number of messages specified by numberOfMessagesToSend (currently set to 10):
static async Task SendMessagesAsync(int numberOfMessagesToSend)
{
    try
    {
        for (var i = 0; i < numberOfMessagesToSend; i++)
        {
            // Create a new message and send it to the queue.
            string messageBody = $"Message {i}";
            var message = new Message(Encoding.UTF8.GetBytes(messageBody));
            Console.WriteLine($"Sending message: {messageBody}");
            await queueClient.SendAsync(message);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
    }
}
6. Save the file and run the following commands in the terminal.
dotnet build
dotnet run
7. Log in to the Azure portal, navigate to the az204-queue you created earlier, and select Overview to
show the Essentials screen.
Notice that the Active Message Count value for the queue is now 10. Each time you run the sender
application without retrieving the messages (as described in the next section), this value increases by 10.
Create a console application to receive messages from the queue
3. Within the Program class, declare the following variables. Set the ServiceBusConnectionString variable to the connection string that you obtained when creating the namespace:
const string ServiceBusConnectionString = "<your_connection_string>";
const string QueueName = "az204-queue";
static IQueueClient queueClient;
4. Replace the default contents of Main() with the following code:
public static async Task Main(string[] args)
{
    queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

    Console.WriteLine("======================================================");
    Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
    Console.WriteLine("======================================================");

    // Register the queue message handler and receive messages.
    RegisterOnMessageHandlerAndReceiveMessages();

    Console.ReadKey();

    await queueClient.CloseAsync();
}
5. Directly after Main(), add the following method that registers the message handler and receives the messages sent by the sender application:
static void RegisterOnMessageHandlerAndReceiveMessages()
{
    // Configure the message handler options in terms of exception handling,
    // number of concurrent messages to deliver, etc.
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback ProcessMessagesAsync(),
        // set to 1 for simplicity. Set it according to how many messages the
        // application wants to process in parallel.
        MaxConcurrentCalls = 1,

        // Indicates whether the message pump should automatically complete the
        // messages after returning from the user callback. False below indicates
        // the complete operation is handled by the user callback in ProcessMessagesAsync().
        AutoComplete = false
    };

    // Register the function that processes messages.
    queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}
6. Directly after the previous method, add the following ProcessMessagesAsync() method to process the received messages:
static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    // Process the message.
    Console.WriteLine($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");

    // Complete the message so that it is not received again. This can be done
    // only if the queue client is created in ReceiveMode.PeekLock mode (the default).
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);

    // Note: Use the cancellationToken passed as necessary to determine if the
    // queueClient has already been closed. If queueClient has already been closed,
    // you can choose to not call CompleteAsync() or AbandonAsync() etc.
    // to avoid unnecessary exceptions.
}
7. Finally, add the following method to handle any exceptions that might occur:
// Use this handler to examine the exceptions received on the message pump.
static Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
{
    Console.WriteLine($"Message handler encountered an exception {exceptionReceivedEventArgs.Exception}.");
var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
Console.WriteLine("Exception context for troubleshooting:");
Console.WriteLine($"- Endpoint: {context.Endpoint}");
Console.WriteLine($"- Entity Path: {context.EntityPath}");
Console.WriteLine($"- Executing Action: {context.Action}");
return Task.CompletedTask;
}
8. Save the file and run the following commands in the terminal.
dotnet build
dotnet run
9. Check the portal again. Notice that the Active Message Count value is now 0. You may need to
refresh the portal page.
●● URL format: Queues are addressable using the following URL format:
https://<storage account>.queue.core.windows.net/<queue>
For example, the following URL addresses a queue named images-to-download:
https://myaccount.queue.core.windows.net/images-to-download
●● Storage account: All access to Azure Storage is done through a storage account.
●● Queue: A queue contains a set of messages. All messages must be in a queue. Note that the queue
name must be all lowercase.
●● Message: A message, in any format, of up to 64 KB. The maximum time that a message can remain in
the queue is seven days.
●● Microsoft Azure Storage Queue Library for .NET: This client library enables working with the Microsoft
Azure Storage Queue service for storing messages that may be accessed by a client.
●● Microsoft Azure Configuration Manager library for .NET: This package provides a class for parsing a
connection string in a configuration file, regardless of where your application is running.
Create a queue
This example shows how to create a queue if it does not already exist:
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue and create it if it doesn't already exist.
CloudQueue queue = queueClient.GetQueueReference("myqueue");
queue.CreateIfNotExists();
Peek at the next message
You can peek at the message in the front of a queue without removing it:
// Peek at the next message.
CloudQueueMessage peekedMessage = queue.PeekMessage();

// Display message.
Console.WriteLine(peekedMessage.AsString);
Change the contents of a queued message
You can change the contents of a message in-place in the queue:
// Get the message from the queue and update the message contents.
CloudQueueMessage message = queue.GetMessage();
message.SetMessageContent2("Updated contents.", false);
queue.UpdateMessage(message,
    TimeSpan.FromSeconds(60.0), // Make it invisible for another 60 seconds.
    MessageUpdateFields.Content | MessageUpdateFields.Visibility);
De-queue the next message
Your code de-queues a message in two steps: call GetMessage to get the next message, which makes it invisible to other readers for 30 seconds by default, and then call DeleteMessage to finish removing it from the queue:
// Get the next message.
CloudQueueMessage retrievedMessage = queue.GetMessage();

// Process the message in less than 30 seconds, and then delete the message.
queue.DeleteMessage(retrievedMessage);
Delete a queue
To delete a queue and all the messages contained in it, call the Delete method on the queue object.
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client and retrieve a reference to the queue.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Delete the queue.
queue.Delete();
Lab scenario
You're studying various ways to communicate between isolated service components in Microsoft Azure,
and you have decided to evaluate the Azure Storage service and its Queue service offering. As part of this
evaluation, you'll build a prototype application in .NET that can send and receive messages so that you
can measure the complexity involved in using this service. To help you with your evaluation, you've also
decided to use Azure Storage Explorer as the queue message producer/consumer throughout your tests.
Objectives
After you complete this lab, you will be able to:
●● Add Azure.Storage libraries from NuGet.
●● Create a queue in .NET.
●● Produce a new message in the queue by using .NET.
●● Consume a message from the queue by using .NET.
●● Manage a queue by using Storage Explorer.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 1
Which of the following advanced features of Azure Service Bus provides a first-in, first-out (FIFO) guarantee?
Transactions
Scheduled delivery
Message sessions
Batching
Review Question 2
In Azure Service Bus which of the following receive modes is best for scenarios in which the application can
tolerate not processing a message if a failure occurs?
PeekLock
ReceiveAndDelete
All of the above
None of the above
Review Question 3
In Azure Queue storage, what is the maximum time that a message can remain in queue?
5 days
6 days
7 days
10 days
Review Question 4
In Azure Queue storage, which two methods below are used to de-queue a message?
GetMessage
UpdateMessage
DeleteMessage
PeekMessage
Answers
Review Question 1
Which of the following advanced features of Azure Service Bus provides a first-in, first-out (FIFO) guarantee?
Transactions
Scheduled delivery
■■ Message sessions
Batching
Explanation
To create a first-in, first-out (FIFO) guarantee in Service Bus, use sessions. Message sessions enable joint and
ordered handling of unbounded sequences of related messages.
Review Question 2
In Azure Service Bus which of the following receive modes is best for scenarios in which the application
can tolerate not processing a message if a failure occurs?
PeekLock
■■ ReceiveAndDelete
All of the above
None of the above
Explanation
ReceiveAndDelete mode is the simplest model and works best for scenarios in which the application can
tolerate not processing a message if a failure occurs.
Review Question 3
In Azure Queue storage, what is the maximum time that a message can remain in queue?
5 days
6 days
■■ 7 days
10 days
Explanation
The maximum time that a message can remain in the queue is seven days.
Review Question 4
In Azure Queue storage, which two methods below are used to de-queue a message?
■■ GetMessage
UpdateMessage
■■ DeleteMessage
PeekMessage
Explanation
Your code de-queues a message from a queue in two steps. When you call GetMessage, you get the next
message in a queue. A message returned from GetMessage becomes invisible to any other code reading
messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the
message from the queue, you must also call DeleteMessage.
Module 12 Monitor and optimize Azure solutions
In addition, you can pull in telemetry from the host environments such as performance counters, Azure
diagnostics, or Docker logs. You can also set up web tests that periodically send synthetic requests to
your web service.
All these telemetry streams are integrated into Azure Monitor. In the Azure portal, you can apply powerful analytic and search tools to the raw data.
The impact on your app's performance is very small. Tracking calls are non-blocking, and are batched and
sent in a separate thread.
●● Custom events and metrics that you write yourself in the client or server code, to track business
events such as items sold or games won.
For a snippet you can copy and paste please visit https://docs.microsoft.com/en-us/azure/azure-monitor/app/javascript#snippet-based-setup.
Telemetry initializers
Telemetry initializers are used to modify the contents of collected telemetry before being sent from the
user's browser. They can also be used to prevent certain telemetry from being sent, by returning false.
Multiple telemetry initializers can be added to your Application Insights instance, and they are executed
in order of adding them.
The input argument to addTelemetryInitializer is a callback that takes an ITelemetryItem as an argument and returns a boolean or void. If it returns false, the telemetry item is not sent; otherwise, it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
Prerequisites
This demo is performed in the Cloud Shell, and in Visual Studio Code. The code examples below rely on
the Microsoft.ApplicationInsights.AspNetCore NuGet package.
Login to Azure
1. Login in to the Azure Portal: https://portal.azure.com and launch the Cloud Shell. Be sure to select
PowerShell as the shell.
2. Create a resource group and Application Insights instance with a location that makes sense for you.
$myLocation = Read-Host -Prompt "Enter the region (i.e. westus): "
$myResourceGroup = "az204appinsights-rg"
$myAppInsights = "az204appinsights"

New-AzResourceGroup -Name $myResourceGroup -Location $myLocation
New-AzApplicationInsights -ResourceGroupName $myResourceGroup -Name $myAppInsights -Location $myLocation
4. Install the Application Insights SDK NuGet package for ASP.NET Core by running the following
command in a VS Code terminal.
dotnet add package Microsoft.ApplicationInsights.AspNetCore --version 2.8.2
"LogLevel": {
"Default": "Warning"
}
}
}
4. Open a new browser window and navigate to http://localhost:5000 to view your web app.
5. Set the browser window for the app side-by-side with the portal showing the Live Metrics Stream.
Notice the incoming requests on the Live Metrics Stream as you navigate around the web app.
6. In Visual Studio Code, press Ctrl+C to close the application.
2. Insert the HtmlHelper in the _Layout.cshtml file at the end of the <head> section but before
any other script. If you want to report any custom JavaScript telemetry from the page, inject it after
this snippet:
@Html.Raw(JavaScriptSnippet.FullScript)
</head>
The .cshtml file names referenced earlier are from a default MVC application template. Ultimately, if
you want to properly enable client-side monitoring for your application, the JavaScript snippet must
appear in the <head> section of each page of your application that you want to monitor. You can
accomplish this goal for this application template by adding the JavaScript snippet to _Layout.cshtml.
Clean up resources
When you're finished, delete the resource group created earlier in the demo.
●● Custom Track Availability Tests: If you decide to create a custom application to run availability tests,
the TrackAvailability() method can be used to send the results to Application Insights.
You can create up to 100 availability tests per Application Insights resource.
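For example, a custom test runner might report a result like this. This is a sketch; the test name, duration, and run location are illustrative, and an instrumentation key is assumed to be configured:
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var client = new TelemetryClient(TelemetryConfiguration.CreateDefault());

// Report one availability test result to Application Insights.
client.TrackAvailability(
    name: "my-custom-availability-test",
    timeStamp: DateTimeOffset.UtcNow,
    duration: TimeSpan.FromMilliseconds(230),
    runLocation: "custom-test-runner",
    success: true);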
Create a test
●● URL: The URL can be any web page you want to test, but it must be visible from the public internet.
The URL can include a query string. So, for example, you can exercise your database a little. If the URL
resolves to a redirect, we follow it up to 10 redirects.
●● Parse dependent requests: The test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources cannot be successfully downloaded within the timeout for the whole test.
●● Enable retries: If a test fails, it is retried after a short interval. A failure is reported only if three
successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is
temporarily suspended until the next success. This rule is applied independently at each test location.
We recommend this option. On average, about 80% of failures disappear on retry.
●● Test frequency: Sets how often the test is run from each test location. With a default frequency of five
minutes and five test locations, your site is tested on average every minute.
●● Test locations: The places from where our servers send web requests to your URL. Our minimum number of recommended test locations is five, to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
Note: We strongly recommend testing from multiple locations with a minimum of five locations. This is to prevent false alarms that may result from transient issues with a specific location. In addition, we have found that the optimal configuration is to have the number of test locations be equal to the alert location threshold + 2. Enabling the "Parse dependent requests" option results in a stricter check. The test could fail for cases that may not be noticeable when manually browsing the site.
Success criteria
●● Test timeout: Decrease this value to be alerted about slow responses. The test is counted as a failure
if the responses from your site have not been received within this period. If you selected Parse
dependent requests, then all the images, style files, scripts, and other dependent resources must
have been received within this period.
●● HTTP response: The returned status code that is counted as a success. 200 is the code that indicates
that a normal web page has been returned.
●● Content match: A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes you might have to update it. Only English characters are supported with content match.
Alerts
●● Near-realtime (Preview): We recommend using Near-realtime alerts. Configuring this type of alert is
done after your availability test is created.
●● Classic: We no longer recommend using classic alerts for new availability tests.
●● Alert location threshold: We recommend a minimum of 3/5 locations. The optimal relationship
between alert location threshold and the number of test locations is alert location threshold =
number of test locations - 2, with a minimum of five test locations.
What is a Component?
Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
●● Components are different from “observed” external dependencies such as SQL, EventHub etc. which
your team/organization may not have access to (code or telemetry).
●● Components run on any number of server/role/container instances.
●● Components can be separate Application Insights instrumentation keys (even if subscriptions are
different) or different roles reporting to a single Application Insights instrumentation key. The preview
map experience shows the components regardless of how they are set up.
One of the key objectives with this experience is to be able to visualize complex topologies with hundreds of components.
Click on any component to see related insights and go to the performance and failure triage experience
for that component.
You can set the cloud role name for a component by adding a custom telemetry initializer, for example:
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

namespace CustomInitializer.Telemetry
{
public class MyTelemetryInitializer : ITelemetryInitializer
{
public void Initialize(ITelemetry telemetry)
{
if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
{
//set custom role name here
telemetry.Context.Cloud.RoleName = "RoleName";
}
}
}
}
1. The application invokes an operation on a hosted service. The request fails, and the service host
responds with HTTP response code 500 (internal server error).
2. The application waits for a short interval and tries again. The request still fails with HTTP response
code 500.
3. The application waits for a longer interval and tries again. The request succeeds with HTTP response
code 200 (OK).
The application should wrap all attempts to access a remote service in code that implements a retry
policy matching one of the strategies listed above. Requests sent to different services can be subject to
different policies. Some vendors provide libraries that implement retry policies, where the application can
specify the maximum number of retries, the amount of time between retry attempts, and other parame-
ters.
An application should log the details of faults and failing operations. This information is useful to operators. If a service is frequently unavailable or busy, it's often because the service has exhausted its resources. You can reduce the frequency of these faults by scaling out the service. For example, if a database service is continually overloaded, it might be beneficial to partition the database and spread the load across multiple servers.
private int retryCount = 3;
private readonly TimeSpan delay = TimeSpan.FromSeconds(5);

public async Task OperationWithBasicRetryAsync()
{
    int currentRetry = 0;

    for (;;)
    {
        try
        {
            // Call external service and exit the loop on success.
            await TransientOperationAsync();
            break;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Operation Exception");
            currentRetry++;

            // Rethrow if this isn't a transient error or we've exhausted the retries.
            if (currentRetry > this.retryCount || !IsTransient(ex))
            {
                throw;
            }
        }

        // Wait before retrying the operation.
        await Task.Delay(delay);
    }
}
The statement that invokes this method is contained in a try/catch block wrapped in a for loop. The for
loop exits if the call to the TransientOperationAsync method succeeds without throwing an excep-
tion. If the TransientOperationAsync method fails, the catch block examines the reason for the
failure. If it's believed to be a transient error, the code waits for a short delay before retrying the opera-
tion.
The for loop also tracks the number of times that the operation has been attempted, and if the code fails three times, the exception is assumed to be more long lasting. If the exception isn't transient or it's long lasting, the catch handler will rethrow the exception. This exception exits the for loop and should be caught by the code that invokes the OperationWithBasicRetryAsync method.
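The example above assumes an IsTransient helper that encodes the error detection strategy. A minimal sketch follows; the specific exception checks are assumptions and should match the failure modes of the service being called:
// Determine whether the exception is likely to be temporary.
private static bool IsTransient(Exception ex)
{
    // Treat well-known communication failures as transient.
    if (ex is System.Net.WebException webException)
    {
        return webException.Status == System.Net.WebExceptionStatus.ConnectionClosed ||
               webException.Status == System.Net.WebExceptionStatus.Timeout ||
               webException.Status == System.Net.WebExceptionStatus.RequestCanceled;
    }

    return false;
}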
Lab scenario
You have created an API for your next big startup venture. Even though you want to get to market
quickly, you have witnessed other ventures fail when they don’t plan for growth and have too few
resources or too many users. To plan for this, you have decided to take advantage of the scale-out
features of Microsoft Azure App Service, the telemetry features of Application Insights, and the perfor-
mance-testing features of Azure DevOps.
Objectives
After you complete this lab, you will be able to:
●● Create an Application Insights resource.
●● Integrate Application Insights telemetry tracking into an ASP.NET web app and a resource built using
the Web Apps feature of Azure App Service.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 1
The Application Insights .NET and .NET Core SDKs ship with two built-in channels. Which channel below has the capability to store data on a local disk?
InMemoryChannel
ServerTelemetryChannel
LocalStorageChannel
None of the above
Review Question 2
True or False, the URL ping test in Application Insights uses ICMP to check a site's availability.
True
False
Review Question 3
In the cloud, transient faults aren't uncommon, and an application should be designed to handle them
elegantly and transparently.
Which of the following are valid strategies for handling transient errors? (Check all that apply.)
Retry after a delay
Cancel
Retry
Hope for the best
Answers
Review Question 1
The Application Insights .NET and .NET Core SDKs ship with two built-in channels.
Which channel below has the capability to store data on a local disk?
InMemoryChannel
■■ ServerTelemetryChannel
LocalStorageChannel
None of the above
Explanation
The ServerTelemetryChannel has retry policies and the capability to store data on a local disk.
Review Question 2
True or False, the URL ping test in Application Insights uses ICMP to check a site's availability.
True
■■ False
Explanation
This test is not making any use of ICMP (Internet Control Message Protocol) to check your site's availability.
Instead it uses more advanced HTTP request functionality to validate whether an endpoint is responding.
Review Question 3
In the cloud, transient faults aren't uncommon, and an application should be designed to handle them
elegantly and transparently.
Which of the following are valid strategies for handling transient errors? (Check all that apply.)
■■ Retry after a delay
■■ Cancel
■■ Retry
Hope for the best
Explanation
Retry after a delay, Retry, and Cancel are all valid strategies for handling transient faults.
Module 13 Integrate caching and content delivery within solutions
Each data value is associated with a key that can be used to look up the value from the cache. Redis works best with smaller values (100 KB or less), so consider chopping up bigger data into multiple keys. Storing larger values is possible (up to 500 MB), but increases network latency and can cause caching and out-of-memory issues if the cache isn't configured to expire old values.
Summary
A database is great for storing large amounts of data, but there is an inherent latency when looking up
data. You send a query. The server interprets the query, looks up the data, and returns it. Servers also
have capacity limits for handling requests. If too many requests are made, data retrieval will likely slow
down. Caching will store frequently requested data in memory that can be returned faster than querying
a database, which should lower latency and increase performance. Azure Cache for Redis gives you access
to a secure, dedicated, and scalable Redis cache, hosted in Azure, and managed by Microsoft.
Name
The Redis cache will need a globally unique name. The name has to be unique within Azure because it is
used to generate a public-facing URL to connect and communicate with the service.
The name must be between 1 and 63 characters, composed of numbers, letters, and the ‘-’ character. The
cache name can't start or end with the '-' character, and consecutive ‘-’ characters aren't valid.
Resource Group
The Azure Cache for Redis is a managed resource and needs a resource group owner. You can either
create a new resource group, or use an existing one in a subscription you are part of.
Location
You will need to decide where the Redis cache will be physically located by selecting an Azure region. You
should always place your cache instance and your application in the same region. Connecting to a cache
in a different region can significantly increase latency and reduce reliability. If you are connecting to the
cache outside of Azure, then select a location close to where the application consuming the data is
running.
Important: Put the Redis cache as close to the data consumer as you can.
Pricing tier
As mentioned in the last unit, there are three pricing tiers available for an Azure Cache for Redis.
●● Basic: A cache ideal for development/testing. It is limited to a single server, 53 GB of memory, and 20,000 connections. There is no SLA for this service tier.
●● Standard: A production cache that supports replication and includes a 99.9% SLA. It supports two servers (master/slave), and has the same memory/connection limits as the Basic tier.
●● Premium: An enterprise tier that builds on the Standard tier and includes persistence, clustering, and scale-out cache support. This is the highest performing tier with up to 530 GB of memory and 40,000 simultaneous connections.
You can control the amount of cache memory available on each tier by choosing a cache level from C0-C6 for Basic/Standard and P1-P4 for Premium. Check the pricing page1 for full details.
Tip: Microsoft recommends you always use Standard or Premium Tier for production systems. The Basic
Tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches
are really meant for simple dev/test scenarios since they have a shared CPU core and very little memory.
The Premium tier allows you to persist data in two ways to provide disaster recovery:
1. RDB persistence takes a periodic snapshot and can rebuild the cache using the snapshot in case of
failure.
2. AOF persistence saves every write operation to a log that is saved at least once per second. This
creates bigger files than RDB but has less data loss.
There are several other settings which are only available to the Premium tier.
Clustering support
With a premium tier Redis cache, you can implement clustering to automatically split your dataset among
multiple nodes. To implement clustering, you specify the number of shards to a maximum of 10. The cost
incurred is the cost of the original node, multiplied by the number of shards.
Redis has a set of known commands you can issue against the cache. Several common commands are described below.
●● ping: Ping the server. Returns "PONG".
●● set [key] [value]: Sets a key/value in the cache. Returns "OK" on success.
●● get [key]: Gets a value from the cache.
●● exists [key]: Returns '1' if the key exists in the cache, '0' if it doesn't.
●● type [key]: Returns the type associated to the value for the given key.
●● incr [key]: Increment the given value associated with key by '1'. The value must be an integer or double value. This returns the new value.
1 https://azure.microsoft.com/pricing/details/cache/
●● incrby [key] [amount]: Increment the given value associated with key by the specified amount. The value must be an integer or double value. This returns the new value.
●● del [key]: Deletes the value associated with the key.
●● flushdb: Delete all keys and values in the database.
Redis has a command-line tool (redis-cli) that you can use to experiment directly with these commands; an example is shown below.
> set somekey somevalue
OK
> get somekey
"somevalue"
> exists somekey
(integer) 1
> del somekey
(integer) 1
> exists somekey
(integer) 0
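The counter commands from the table behave similarly; for instance (a sketch of a redis-cli session with an arbitrary key name):
> set counter 100
OK
> incr counter
(integer) 101
> incrby counter 50
(integer) 151
> del counter
(integer) 1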
You can pass this string to StackExchange.Redis to create a connection to the server.
Notice that there are two additional parameters at the end:
●● ssl - ensures that communication is encrypted.
●● abortConnection - allows a connection to be created even if the server is unavailable at that mo-
ment.
There are several other optional parameters2 you can append to the string to configure the client library.
2 https://github.com/StackExchange/StackExchange.Redis/blob/master/docs/Configuration.md#configuration-options
MCT USE ONLY. STUDENT USE PROHIBITED
Develop for Azure Cache for Redis 359
✔️ Tip: The connection string should be protected in your application. If the application is hosted on
Azure, consider using an Azure Key Vault to store the value.
Creating a connection
The main connection object in StackExchange.Redis is the StackExchange.Redis.Connection-
Multiplexer class. This object abstracts the process of connecting to a Redis server (or group of
servers). It's optimized to manage connections efficiently and intended to be kept around while you need
access to the cache.
You create a ConnectionMultiplexer instance using the static ConnectionMultiplexer.Con-
nect or ConnectionMultiplexer.ConnectAsync method, passing in either a connection string or a
ConfigurationOptions object.
Here's a simple example:
using StackExchange.Redis;
...
var connectionString = "[cache-name].redis.cache.windows.net:6380,password=[password-here],ssl=True,abortConnect=False";
var redisConnection = ConnectionMultiplexer.Connect(connectionString);
// ^^^ store and re-use this!!!
Once you have a ConnectionMultiplexer, there are 3 primary things you might want to do:
1. Access a Redis Database. This is what we will focus on here.
2. Make use of the publisher/subscriber features of Redis. This is outside the scope of this module.
3. Access an individual server for maintenance or monitoring purposes.
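Accessing a Redis database from the multiplexer is then a single call, for example:
// GetDatabase returns a cheap pass-through object; no need to store it long-term.
IDatabase db = redisConnection.GetDatabase();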
Tip: The object returned from GetDatabase is a lightweight object, and does not need to be stored.
Only the ConnectionMultiplexer needs to be kept alive.
Once you have a IDatabase object, you can execute methods to interact with the cache. All methods
have synchronous and asynchronous versions which return Task objects to make them compatible with
the async and await keywords.
Here is an example of storing a key/value in the cache:
bool wasSet = db.StringSet("favorite:flavor", "i-love-rocky-road");
The StringSet method returns a bool indicating whether the value was set (true) or not (false). We
can then retrieve the value with the StringGet method:
string value = db.StringGet("favorite:flavor");
Console.WriteLine(value); // displays: "i-love-rocky-road"
Binary keys and values are also supported, for example:
byte[] key = Encoding.UTF8.GetBytes("binary-key");
byte[] value = new byte[] { 1, 2, 3 };
db.StringSet(key, value);
StackExchange.Redis represents keys using the RedisKey type. This class has implicit conversions to and from both string and byte[], allowing both text and binary keys to be used without any complication. Values are represented by the RedisValue type. As with RedisKey, there are implicit conversions in place to allow you to pass string or byte[].
The IDatabase interface3 includes several other methods to work with the cache, including:
●● CreateBatch: Creates a group of operations that will be sent to the server as a single unit, but not necessarily processed as a unit.
●● CreateTransaction: Creates a group of operations that will be sent to the server as a single unit and processed on the server as a single unit.
●● KeyDelete: Deletes the key/value.
●● KeyExists: Returns whether the given key exists in cache.
●● KeyExpire: Sets a time-to-live (TTL) expiration on a key.
●● KeyRename: Renames a key.
●● KeyTimeToLive: Returns the TTL for a key.
●● KeyType: Returns the string representation of the type of the value stored at key. The different types that can be returned are: string, list, set, zset and hash.
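As a sketch of how a few of these methods fit together (assuming the db object obtained from GetDatabase earlier):
// Set a value, give it a five-minute TTL, inspect it, then remove it.
db.StringSet("counter", 100);
db.KeyExpire("counter", TimeSpan.FromMinutes(5));
Console.WriteLine(db.KeyTimeToLive("counter")); // remaining TTL
db.KeyDelete("counter");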
3 https://github.com/StackExchange/StackExchange.Redis/blob/master/src/StackExchange.Redis/Interfaces/IDatabase.cs
The Execute and ExecuteAsync methods return a RedisResult object which is a data holder that
includes two properties:
●● Type which returns a string indicating the type of the result - “STRING”, "INTEGER", etc.
●● IsNull a true/false value to detect when the result is null.
You can then use ToString() on the RedisResult to get the actual return value.
You can use Execute to perform any supported commands - for example, we can get all the clients
connected to the cache (“CLIENT LIST”):
var result = await db.ExecuteAsync("client", "list");
Console.WriteLine($"Type = {result.Type}\r\nResult = {result}");
For example, suppose the application defines the following GameStat class to record the results of a game:
public class GameStat
{
    public string Id { get; set; }
    public string Sport { get; set; }
    public DateTimeOffset DatePlayed { get; set; }
    public string Game { get; set; }
    public List<string> Teams { get; set; }
    public List<(string team, int score)> Results { get; set; }

    public GameStat(string sport, DateTimeOffset datePlayed, string game, string[] teams, IEnumerable<(string team, int score)> results)
    {
        Id = Guid.NewGuid().ToString();
        Sport = sport;
        DatePlayed = datePlayed;
        Game = game;
        Teams = teams.ToList();
        Results = results.ToList();
    }
}
We could use the Newtonsoft.Json library to turn an instance of this object into a string and store it in the cache:
var stat = new GameStat("Soccer", new DateTime(2019, 7, 16), "Local Game",
                new[] { "Team 1", "Team 2" },
                new[] { ("Team 1", 2), ("Team 2", 1) });

string serializedValue = Newtonsoft.Json.JsonConvert.SerializeObject(stat);
bool added = db.StringSet("event:2019-local-game", serializedValue);
We could retrieve it and turn it back into an object using the reverse process:
var result = db.StringGet("event:2019-local-game");
var stat = Newtonsoft.Json.JsonConvert.DeserializeObject<GameStat>(result.ToString());
Console.WriteLine(stat.Sport); // displays "Soccer"
Prerequisites
This demo is performed in Visual Studio Code (VS Code).
1. Open a PowerShell terminal in VS Code and create a directory for the project:
md az204-redisdemo
2. Create a resource group and a Redis Cache instance by using the az redis create command. The instance name needs to be unique and the script below will attempt to generate one for you. This command will take a few minutes to complete.
$myLocation = Read-Host -Prompt "Enter the region (i.e. westus): "
az group create -n az204-redisdemo-rg -l $myLocation
$redisname = "az204redis" + $(get-random -minimum 10000 -maximum 100000)
az redis create -l $myLocation -g az204-redisdemo-rg -n $redisname --sku Basic --vm-size c0
3. Open the Azure Portal (https://portal.azure.com) and copy the connection string to the new Redis
Cache instance.
●● Navigate to the new Redis Cache.
●● Select Access keys in the Settings section of the Navigation Pane.
●● Copy the Primary connection string (StackExchange.Redis) value and save to Notepad.
3. In the Program.cs file add the using statements below at the top.
using StackExchange.Redis;
using System.Threading.Tasks;
4. Let's have the Main method run asynchronously by changing it to the following:
static async Task Main(string[] args)
{
}
5. Connect to the cache by replacing the existing code in the Main method with the following code. Set
the connectionString variable to the value you copied from the portal.
string connectionString = "YOUR_CONNECTION_STRING";

using (var cache = ConnectionMultiplexer.Connect(connectionString))
{
    IDatabase db = cache.GetDatabase();
}
✔️ Note: The connection to Azure Cache for Redis is managed by the ConnectionMultiplexer
class. This class should be shared and reused throughout your client application. We do not want to
create a new connection for each operation. Instead, we want to store it off as a field in our class and
reuse it for each operation. Here we are only going to use it in the Main method, but in a production
application, it should be stored in a class field, or a singleton.
2. Call StringSetAsync on the IDatabase object to set the key "test:key" to the value "100". The return value from StringSetAsync is a bool indicating whether the key was added. Add the code below inside the using block, after the code you entered in the previous step.
bool setValue = await db.StringSetAsync("test:key", "100");
Console.WriteLine($"SET: {setValue}");
Other operations
Let's add a few additional methods to the code.
1. Execute “PING” to test the server connection. It should respond with "PONG". Append the following
code to the using block.
var result = await db.ExecuteAsync("ping");
Console.WriteLine($"PING = {result.Type} : {result}");
2. Execute “FLUSHDB” to clear the database values. It should respond with "OK". Append the following
code to the using block.
result = await db.ExecuteAsync("flushdb");
Console.WriteLine($"FLUSHDB = {result.Type} : {result}");
Clean up resources
When you're finished with the demo you can clean up the resources by deleting the resource group
created earlier. The following command can be run in the VS Code terminal.
az group delete -n az204-redisdemo-rg --no-wait --yes
1. A user (Alice) requests a file (also called an asset) by using a URL with a special domain name, such as <endpoint name>.azureedge.net. This name can be an endpoint hostname or a custom domain. The DNS routes the request to the best performing POP location, which is usually the POP that is geographically closest to the user.
2. If no edge servers in the POP have the file in their cache, the POP requests the file from the origin
server. The origin server can be an Azure Web App, Azure Cloud Service, Azure Storage account, or
any publicly accessible web server.
3. The origin server returns the file to an edge server in the POP.
4. An edge server in the POP caches the file and returns the file to the original requestor (Alice). The file
remains cached on the edge server in the POP until the time-to-live (TTL) specified by its HTTP
headers expires. If the origin server didn't specify a TTL, the default TTL is seven days.
5. Additional users can then request the same file by using the same URL that Alice used, and can also
be directed to the same POP.
6. If the TTL for the file hasn't expired, the POP edge server returns the file directly from the cache. This
process results in a faster, more responsive user experience.
You can list the existing CDN profiles in your subscription by using the az cdn profile list command:
az cdn profile list
This will list every CDN profile associated with your subscription. If you want to filter this list down to a specific resource group, you can use the --resource-group parameter:
az cdn profile list --resource-group ExampleGroup
To create a new profile, use the create verb for the az cdn profile command group:
az cdn profile create --name DemoProfile --resource-group ExampleGroup
By default, the CDN will be created by using the standard tier and the Akamai provider. You can custom-
ize this further by using the --sku parameter and one of the following options:
●● Custom_Verizon
●● Premium_Verizon
●● Standard_Akamai
●● Standard_ChinaCdn
●● Standard_Verizon
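For example, to create a profile on the Verizon standard tier (reusing the illustrative profile and group names from above):
az cdn profile create \
    --name DemoProfile \
    --resource-group ExampleGroup \
    --sku Standard_Verizon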
After you have created a new profile, you can use that profile to create an endpoint. Each endpoint
requires you to specify a profile, a resource group, and an origin URL:
az cdn endpoint create \
--name ContosoEndpoint \
--origin www.contoso.com \
--profile-name DemoProfile \
--resource-group ExampleGroup
You can customize the endpoint further by assigning a custom domain to the CDN endpoint. This helps
ensure that users see only the domains you choose instead of the Azure CDN domains:
az cdn custom-domain create \
--name FilesDomain \
--hostname files.contoso.com \
--endpoint-name ContosoEndpoint \
--profile-name DemoProfile \
--resource-group ExampleGroup
Caching rules
Azure CDN caching rules specify cache expiration behavior both globally and with custom conditions.
There are two types of caching rules:
●● Global caching rules. You can set one global caching rule for each endpoint in your profile that affects
all requests to the endpoint. The global caching rule overrides any HTTP cache-directive headers, if
set.
●● Custom caching rules. You can set one or more custom caching rules for each endpoint in your profile.
Custom caching rules match specific paths and file extensions; are processed in order; and override
the global caching rule, if set.
For global and custom caching rules, you can specify the cache expiration duration in days, hours,
minutes, and seconds.
You can also preload assets into an endpoint. This is useful for scenarios where your application creates a
large number of assets, and you want to improve the user experience by prepopulating the cache before
any actual requests occur:
az cdn endpoint load \
--content-paths '/img/*' '/js/module.js' \
--name ContosoEndpoint \
--profile-name DemoProfile \
--resource-group ExampleGroup
Lab scenario
Your marketing organization has been tasked with building a website landing page to host content about
an upcoming edX course. While designing the website, your team decided that multimedia videos and
image content would be the ideal way to convey your marketing message. The website is already com-
pleted and available using a Docker container, and your team also decided that it would like to use a
content delivery network (CDN) to improve the performance of the images, the videos, and the website
itself. You have been tasked with using Microsoft Azure Content Delivery Network to improve the perfor-
mance of both standard and streamed content on the website.
Objectives
After you complete this lab, you will be able to:
●● Register a Microsoft.CDN resource provider.
●● Create Content Delivery Network resources.
●● Create and configure Content Delivery Network endpoints that are bound to various Azure services.
Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure
Review Question 1
Data in Redis cache is stored in nodes and clusters. Which of the statements below are true? (Select all that apply.)
Nodes are a collection of clusters
Nodes are a space where data is stored
Clusters are sets of three or more nodes
Clusters are a space where data is stored
Review Question 2
True or False, Azure Cache for Redis resources require a globally unique name.
True
False
Review Question 3
The Redis database is represented by the IDatabase type. Which of the following methods creates a group of
operations to be sent to the server and processed as a single unit?
CreateBatch
CreateTransaction
CreateSingle
SendBatch
Answers
Review Question 1
Data in Redis cache is stored in nodes and clusters. Which of the statements below are true?
(Select all that apply.)
Nodes are a collection of clusters
■■ Nodes are a space where data is stored
■■ Clusters are sets of three or more nodes
Clusters are a space where data is stored
Explanation
A node is a space in Redis where data is stored. A cluster is a set of three or more nodes that your dataset is split across.
Review Question 2
True or False, Azure Cache for Redis resources require a globally unique name.
■■ True
False
Explanation
The Redis cache will need a globally unique name. The name has to be unique within Azure because it is
used to generate a public-facing URL to connect and communicate with the service.
Review Question 3
The Redis database is represented by the IDatabase type. Which of the following methods creates a
group of operations to be sent to the server and processed as a single unit?
CreateBatch
■■ CreateTransaction
CreateSingle
SendBatch
Explanation
The CreateTransaction method creates a group of operations that will be sent to the server as a single unit
and processed on the server as a single unit.