
Microsoft
Official
Course

AZ-204T00
Developing Solutions for
Microsoft Azure
Disclaimer

 
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in 
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
 
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
 
The names of manufacturers, products, or URLs are provided for informational purposes only and   
Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is
not responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
 
© 2019 Microsoft Corporation. All rights reserved.
 
Microsoft and the trademarks listed at http://www.microsoft.com/trademarks1 are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.
 
 

1 http://www.microsoft.com/trademarks

MICROSOFT LICENSE TERMS


MICROSOFT INSTRUCTOR-LED COURSEWARE
These license terms are an agreement between Microsoft Corporation (or based on where you live, one
of its affiliates) and you. Please read them. They apply to your use of the content accompanying this
agreement which includes the media on which you received it, if any. These license terms also apply to
Trainer Content and any updates and supplements for the Licensed Content unless other terms accompa-
ny those items. If so, those terms apply.
BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.
If you comply with these license terms, you have the rights below for each license you acquire.
1. DEFINITIONS.
1. “Authorized Learning Center” means a Microsoft IT Academy Program Member, Microsoft Learn-
ing Competency Member, or such other entity as Microsoft may designate from time to time.
2. “Authorized Training Session” means the instructor-led training class using Microsoft Instruc-
tor-Led Courseware conducted by a Trainer at or through an Authorized Learning Center.
3. “Classroom Device” means one (1) dedicated, secure computer that an Authorized Learning Center
owns or controls that is located at an Authorized Learning Center’s training facility that meets or
exceeds the hardware level specified for the particular Microsoft Instructor-Led Courseware.
4. “End User” means an individual who is (i) duly enrolled in and attending an Authorized Training
Session or Private Training Session, (ii) an employee of an MPN Member, or (iii) a Microsoft
full-time employee.
5. “Licensed Content” means the content accompanying this agreement which may include the
Microsoft Instructor-Led Courseware or Trainer Content.
6. “Microsoft Certified Trainer” or “MCT” means an individual who is (i) engaged to teach a training
session to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) current-
ly certified as a Microsoft Certified Trainer under the Microsoft Certification Program.
7. “Microsoft Instructor-Led Courseware” means the Microsoft-branded instructor-led training course
that educates IT professionals and developers on Microsoft technologies. A Microsoft Instruc-
tor-Led Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business
Group courseware.
8. “Microsoft IT Academy Program Member” means an active member of the Microsoft IT Academy
Program.
9. “Microsoft Learning Competency Member” means an active member of the Microsoft Partner
Network program in good standing that currently holds the Learning Competency status.
10. “MOC” means the “Official Microsoft Learning Product” instructor-led courseware known as
Microsoft Official Course that educates IT professionals and developers on Microsoft technologies.
11. “MPN Member” means an active Microsoft Partner Network program member in good standing.
12. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic
device that you personally own or control that meets or exceeds the hardware level specified for
the particular Microsoft Instructor-Led Courseware.
13. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led
Courseware. These classes are not advertised or promoted to the general public and class attend-
ance is restricted to individuals employed by or contracted by the corporate customer.
14. “Trainer” means (i) an academically accredited educator engaged by a Microsoft IT Academy
Program Member to teach an Authorized Training Session, and/or (ii) a MCT.
15. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and
additional supplemental content designated solely for Trainers’ use to teach a training session
using the Microsoft Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint
presentations, trainer preparation guide, train the trainer materials, Microsoft One Note packs,
classroom setup guide and Pre-release course feedback form. To clarify, Trainer Content does not
include any software, virtual hard disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed not sold. The Licensed Content is licensed on a one
copy per user basis, such that you must acquire a license for each individual that accesses or uses the
Licensed Content.
●● 2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
1. If you are a Microsoft IT Academy Program Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:

1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User who is enrolled in the Authorized Training Session, and only immediately
prior to the commencement of the Authorized Training Session that is the subject matter
of the Microsoft Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they
can access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content, provided you comply with the following:
3. you will only provide access to the Licensed Content to those individuals who have acquired
a valid license to the Licensed Content,
4. you will ensure each End User attending an Authorized Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Author-
ized Training Session,
5. you will ensure that each End User provided with the hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to
the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement
in a manner that is enforceable under local law prior to their accessing the Microsoft
Instructor-Led Courseware,
6. you will ensure that each Trainer teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
7. you will only use qualified Trainers who have in-depth knowledge of and experience with
the Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware
being taught for all your Authorized Training Sessions,
8. you will only deliver a maximum of 15 hours of training per week for each Authorized
Training Session that uses a MOC title, and
9. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer
resources for the Microsoft Instructor-Led Courseware.
2. If you are a Microsoft Learning Competency Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or MCT, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Authorized Training Session and only immediately prior to
the commencement of the Authorized Training Session that is the subject matter of the
Microsoft Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) MCT with the unique redemption code and instructions on how
they can access one (1) Trainer Content, provided you comply with the following:
3. you will only provide access to the Licensed Content to those individuals who have acquired
a valid license to the Licensed Content,
4. you will ensure that each End User attending a Private Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Private
Training Session,
5. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to
the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement
in a manner that is enforceable under local law prior to their accessing the Microsoft
Instructor-Led Courseware,
6. you will ensure that each MCT teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
7. you will only use qualified MCTs who also hold the applicable Microsoft Certification
credential that is the subject of the MOC title being taught for all your Authorized Training
Sessions using MOC,
8. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
9. you will only provide access to the Trainer Content to MCTs.
3. If you are an MPN Member:


1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:

1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Private Training Session, and only immediately prior to the
commencement of the Private Training Session that is the subject matter of the Micro-
soft Instructor-Led Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the
unique redemption code and instructions on how they can access one (1) Trainer
Content, provided you comply with the following:
3. you will only provide access to the Licensed Content to those individuals who have acquired
a valid license to the Licensed Content,
4. you will ensure that each End User attending a Private Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Private
Training Session,
5. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject to
the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agreement
in a manner that is enforceable under local law prior to their accessing the Microsoft
Instructor-Led Courseware,
6. you will ensure that each Trainer teaching a Private Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Private Training Session,
7. you will only use qualified Trainers who hold the applicable Microsoft Certification creden-
tial that is the subject of the Microsoft Instructor-Led Courseware being taught for all your
Private Training Sessions,
8. you will only use qualified MCTs who hold the applicable Microsoft Certification credential
that is the subject of the MOC title being taught for all your Private Training Sessions using
MOC,
9. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
10. you will only provide access to the Trainer Content to Trainers.
4. If you are an End User:
For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for
your personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you
may access the Microsoft Instructor-Led Courseware online using the unique redemption code
provided to you by the training provider and install and use one (1) copy of the Microsoft
Instructor-Led Courseware on up to three (3) Personal Devices. You may also print one (1) copy
of the Microsoft Instructor-Led Courseware. You may not install the Microsoft Instructor-Led
Courseware on a device you do not own or control.
5. If you are a Trainer:
1. For each license you acquire, you may install and use one (1) copy of the Trainer Content in
the form provided to you on one (1) Personal Device solely to prepare and deliver an
Authorized Training Session or Private Training Session, and install one (1) additional copy
on another Personal Device as a backup copy, which may be used only to reinstall the
Trainer Content. You may not install or use a copy of the Trainer Content on a device you do
not own or control. You may also print one (1) copy of the Trainer Content solely to prepare
for and deliver an Authorized Training Session or Private Training Session.
2. You may customize the written portions of the Trainer Content that are logically associated
with instruction of a training session in accordance with the most recent version of the MCT
agreement. If you elect to exercise the foregoing rights, you agree to comply with the
following: (i) customizations may only be used for teaching Authorized Training Sessions
and Private Training Sessions, and (ii) all customizations will comply with this agreement. For
clarity, any use of “customize” refers only to changing the order of slides and content, and/
or not using all the slides or content, it does not mean changing or modifying any slide or
content.
●● 2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may
not separate their components and install them on different devices.
●● 2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above,
you may not distribute any Licensed Content or any portion thereof (including any permitted
modifications) to any third parties without the express written permission of Microsoft.
●● 2.4 Third Party Notices. The Licensed Content may include third party code that Microsoft, not
the third party, licenses to you under this agreement. Notices, if any, for the third party code are
included for your information only.
●● 2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and
licenses also apply to your use of that respective component and supplements the terms described
in this agreement.
3. LICENSED CONTENT BASED ON PRE-RELEASE TECHNOLOGY. If the Licensed Content’s subject
matter is based on a pre-release version of Microsoft technology ("Pre-release"), then in addition to
the other provisions in this agreement, these terms also apply:
1. Pre-Release Licensed Content. This Licensed Content subject matter is on the Pre-release version
of the Microsoft technology. The technology may not work the way a final version of the technolo-
gy will and we may change the technology for the final version. We also may not release a final
version. Licensed Content based on the final version of the technology may not contain the same
information as the Licensed Content based on the Pre-release version. Microsoft is under no
obligation to provide you with any further content, including any Licensed Content based on the
final version of the technology.
2. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly
or through its third party designee, you give to Microsoft without charge, the right to use, share
and commercialize your feedback in any way and for any purpose. You also give to third parties,
without charge, any patent rights needed for their products, technologies and services to use or
interface with any specific parts of a Microsoft technology, Microsoft product, or service that
includes the feedback. You will not give feedback that is subject to a license that requires Microsoft
to license its technology, technologies, or products to third parties because we include your
feedback in them. These rights survive this agreement.
3. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed
Content on the Pre-release technology upon (i) the date which Microsoft informs you is the end
date for using the Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the
commercial release of the technology that is the subject of the Licensed Content, whichever is
earliest ("Pre-release term"). Upon expiration or termination of the Pre-release term, you will
irretrievably delete and destroy all copies of the Licensed Content in your possession or under
your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you
more rights despite this limitation, you may use the Licensed Content only as expressly permitted in
this agreement. In doing so, you must comply with any technical limitations in the Licensed Content
that only allows you to use it in certain ways. Except as expressly permitted in this agreement, you
may not:
●● access or allow any individual to access the Licensed Content if they have not acquired a valid
license for the Licensed Content,
●● alter, remove or obscure any copyright or other protective notices (including watermarks), brand-
ing or identifications contained in the Licensed Content,
●● modify or create a derivative work of any Licensed Content,
●● publicly display, or make the Licensed Content available for others to access or use,
●● copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
●● work around any technical limitations in the Licensed Content, or
●● reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property
laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property
rights in the Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regula-
tions. You must comply with all domestic and international export laws and regulations that apply to
the Licensed Content. These laws include restrictions on destinations, end users and end use. For
additional information, see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for
it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you
fail to comply with the terms and conditions of this agreement. Upon termination of this agreement
for any reason, you will immediately stop all use of and delete and destroy all copies of the Licensed
Content in your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible
for the contents of any third party sites, any links contained in third party sites, or any changes or
updates to third party sites. Microsoft is not responsible for webcasting or any other form of transmis-
sion received from any third party sites. Microsoft is providing these links to third party sites to you
only as a convenience, and the inclusion of any link does not imply an endorsement by Microsoft of
the third party site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
1. United States. If you acquired the Licensed Content in the United States, Washington state law
governs the interpretation of this agreement and applies to claims for breach of it, regardless of
conflict of laws principles. The laws of the state where you live govern all other claims, including
claims under state consumer protection laws, unfair competition laws, and in tort.
2. Outside the United States. If you acquired the Licensed Content in any other country, the laws of
that country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the
laws of your country. You may also have rights with respect to the party from whom you acquired the
Licensed Content. This agreement does not change your rights under the laws of your country if the
laws of your country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS AVAILABLE."
YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE AFFILIATES GIVES NO
EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY HAVE ADDITIONAL CON-
SUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO
THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND ITS RESPECTIVE AFFILI-
ATES EXCLUDES ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICU-
LAR PURPOSE AND NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO
US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST
PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
This limitation applies to
●● anything related to the Licensed Content, services, content (including code) on third party Internet
sites or third-party programs; and
●● claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion
or limitation of incidental, consequential or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque: Ce le contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n’accorde aucune autre
garantie expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection
dues consommateurs, que ce contrat ne peut modifier. La ou elles sont permises par le droit locale, les
garanties implicites de qualité marchande, d'adéquation à un usage particulier et d'absence de contrefaçon sont exclues.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES DOMMAG-
ES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages
directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les
autres dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne:
●● tout ce qui est relié au le contenu sous licence, aux services ou au contenu (y compris le code) figurant
sur des sites Internet tiers ou dans des programmes tiers; et.
●● les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité stricte, de
négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel
dommage. Si votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages
indirects, accessoires ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus
ne s’appliquera pas à votre égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois
de votre pays si celles-ci ne le permettent pas.
Revised November 2014
Contents

■■ Module 0 Course introduction
    About this course
■■ Module 1 Creating Azure App Service Web Apps
    Azure App Service core concepts
    Creating an Azure App Service Web App
    Configuring and Monitoring App Service apps
    Scaling App Service apps
    Azure App Service staging environments
    Lab and review questions
■■ Module 2 Implement Azure functions
    Azure Functions overview
    Developing Azure Functions
    Implement Durable Functions
    Lab and review questions
■■ Module 3 Develop solutions that use blob storage
    Azure Blob storage core concepts
    Managing the Azure Blob storage lifecycle
    Working with Azure Blob storage
    Lab and review questions
■■ Module 4 Develop solutions that use Cosmos DB storage
    Azure Cosmos DB overview
    Azure Cosmos DB data structure
    Working with Azure Cosmos DB resources and data
    Lab and review questions
■■ Module 5 Implement IaaS solutions
    Provisioning VMs in Azure
    Create and deploy ARM templates
    Create container images for solutions
    Publish a container image to Azure Container Registry
    Create and run container images in Azure Container Instances
    Lab and review questions
■■ Module 6 Implement user authentication and authorization
    Microsoft Identity Platform v2.0
    Authentication using the Microsoft Authentication Library
    Using Microsoft Graph
    Authorizing data operations in Azure Storage
    Lab and review questions
■■ Module 7 Implement secure cloud solutions
    Manage keys, secrets, and certificates by using the KeyVault API
    Implement Managed Identities for Azure resources
    Secure app configuration data by using Azure App Configuration
    Lab and review questions
■■ Module 8 Implement API Management
    API Management overview
    Defining policies for APIs
    Securing your APIs
    Lab and review questions
■■ Module 9 Develop App Service Logic Apps
    Azure Logic Apps overview
    Creating custom connectors for Logic Apps
    Lab and review questions
■■ Module 10 Develop event-based solutions
    Implement solutions that use Azure Event Grid
    Implement solutions that use Azure Event Hubs
    Implement solutions that use Azure Notification Hubs
    Lab and review questions
■■ Module 11 Develop message-based solutions
    Implement solutions that use Azure Service Bus
    Implement solutions that use Azure Queue Storage queues
    Lab and review questions
■■ Module 12 Monitor and optimize Azure solutions
    Overview of monitoring in Azure
    Instrument an app for monitoring
    Analyzing and troubleshooting apps
    Implement code that handles transient faults
    Lab and review questions
■■ Module 13 Integrate caching and content delivery within solutions
    Develop for Azure Cache for Redis
    Develop for storage on CDNs
    Lab and review questions
Module 0 Course introduction

About this course
Welcome to the Developing Solutions for Microsoft Azure course. This course teaches developers how
to create solutions that are hosted in, and utilize, Azure services.
Level: Intermediate

Audience
This course is for Azure Developers. They design and build cloud solutions such as applications and
services. They participate in all phases of development, from solution design, to development and
deployment, to testing and maintenance. They partner with cloud solution architects, cloud DBAs, cloud
administrators, and clients to implement the solution.

Prerequisites
This course assumes you have already acquired the following skills and experience:
●● At least one year of experience developing scalable solutions through all phases of software develop-
ment.
●● Be skilled in at least one cloud-supported programming language. Much of the course focuses on C#,
.NET Framework, HTML, and using REST in applications.
●● Have a base understanding of Azure and cloud concepts, services, and the Azure portal. If you need to
ramp up, you can start with the Azure Fundamentals1 course, which is freely available.
●● Are familiar with PowerShell and/or Azure CLI.
✔️ Note: This course presents more Azure CLI examples overall than PowerShell.

1 https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/

Labs and demonstrations


The labs and demonstrations in this course are designed to be performed using the virtual machine
provided to the students as part of the course.
Many of the demonstrations include links to tools you may need to install if you want to practice the
demos on a different machine.

Certification exam preparation


This course helps you prepare for the AZ-204: Developing Solutions for Microsoft Azure2 certification
exam.
AZ-204 includes five study areas, as shown in the table. The percentages indicate the relative weight of
each area on the exam. The higher the percentage, the more questions you are likely to see in that area.

AZ-204 Study Areas                                        Weight
Develop Azure compute solutions                           25-30%
Develop for Azure storage                                 10-15%
Implement Azure security                                  15-20%
Monitor, troubleshoot, and optimize Azure solutions       10-15%
Connect to and consume Azure and third-party services     25-30%
✔️ Note: The relative weightings are subject to change. For the latest information visit the exam page3
and review the Skills measured section.
Passing the exam will earn you the Microsoft Certified: Azure Developer Associate4 certification.
The modules in the course are mapped to the objectives listed in each study area on the Skills Measured
section of the exam page to make it easier for you to focus on areas of the exam you choose to revisit.

Course syllabus
The course includes a mix of instructional content, demonstrations, hands-on labs, and reference links.

Module   Name
0        Welcome to the course
1        Create Azure App Service Web Apps
2        Implement Azure functions
3        Develop solutions that use blob storage
4        Develop solutions that use Cosmos DB storage
5        Implement IaaS solutions
6        Implement user authentication and authorization
7        Implement secure cloud solutions
8        Implement API Management
9        Develop App Service Logic Apps
10       Develop event-based solutions
11       Develop message-based solutions
12       Instrument solutions to support monitoring and logging
13       Integrate caching and content delivery within solutions

2 https://docs.microsoft.com/en-us/learn/certifications/exams/az-204
3 https://docs.microsoft.com/en-us/learn/certifications/exams/az-204
4 https://docs.microsoft.com/en-us/learn/certifications/azure-developer

Course resources
There are a lot of resources to help you learn about Azure. We recommend you bookmark these pages.
●● Microsoft Learning Community Blog:5 Get the latest information about the certification tests and
exam study groups.
●● Microsoft Learn:6 Free role-based learning paths and hands-on experiences for practice
●● Azure Fridays:7 Join Scott Hanselman as he engages one-on-one with the engineers who build the
services that power Microsoft Azure, as they demo capabilities, answer Scott's questions, and share
their insights.
●● Microsoft Azure Blog:8 Keep current on what's happening in Azure, including what's now in preview,
generally available, news & updates, and more.
●● Azure Documentation:9 Stay informed on the latest products, tools, and features. Get information on
pricing, partners, support, and solutions.

5 https://www.microsoft.com/en-us/learning/community-blog.aspx
6 https://docs.microsoft.com/en-us/learn/
7 https://channel9.msdn.com/Shows/Azure-Friday
8 https://azure.microsoft.com/en-us/blog/
9 https://docs.microsoft.com/en-us/azure/
Module 1 Creating Azure App Service Web Apps

Azure App Service core concepts


App Service Overview
Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back
ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or
Python. Applications run and scale with ease on both Windows and Linux-based environments.

Deployment slots
Using the Azure portal, you can easily add deployment slots to an App Service web app. For instance, you
can create a staging deployment slot where you can push your code to test on Azure. Once you are
happy with your code, you can easily swap the staging deployment slot with the production slot. You do
all this with a few simple mouse clicks in the Azure portal.
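The same slot workflow can also be scripted. The following Azure CLI sketch assumes an existing web app named my-web-app in a resource group named my-rg (both placeholder names) running in a plan tier that supports deployment slots (Standard or higher):

    # Create a staging deployment slot on an existing web app
    az webapp deployment slot create --name my-web-app --resource-group my-rg --slot staging

    # Once the staging slot has been validated, swap it with production
    az webapp deployment slot swap --name my-web-app --resource-group my-rg --slot staging --target-slot production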

Continuous integration/deployment support


The Azure portal provides out-of-the-box continuous integration and deployment with Azure DevOps,
GitHub, Bitbucket, FTP, or a local Git repository on your development machine. Connect your web app
with any of the above sources and App Service will do the rest for you by auto-syncing code and any
future changes on the code into the web app. Furthermore, with Azure DevOps, you can define your own
build and release process that compiles your source code, runs the tests, builds a release, and finally
deploys the release into your web app every time you commit the code. All that happens implicitly
without any need to intervene.
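Continuous deployment from a source repository can also be configured from the command line. This is a hedged sketch using the Azure CLI; the web app name, resource group, repository URL, and branch are placeholders, and connecting to GitHub may additionally require a personal access token or interactive sign-in:

    # Connect the web app to a Git repository for continuous deployment
    az webapp deployment source config --name my-web-app --resource-group my-rg \
        --repo-url https://github.com/<account>/<repo> --branch main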

Integrated Visual Studio publishing and FTP publishing


In addition to being able to set up continuous integration/deployment for your web app, you can always
benefit from the tight integration with Visual Studio to publish your web app to Azure via Web Deploy
technology. App Service also supports FTP-based publishing for more traditional workflows.
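If you want to use Web Deploy or FTP/S publishing outside Visual Studio, the deployment credentials can be retrieved with the Azure CLI. A minimal sketch with placeholder names:

    # Download the publishing profile, which contains the Web Deploy and FTP/S endpoints and credentials
    az webapp deployment list-publishing-profiles --name my-web-app --resource-group my-rg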

Built-in auto scale support (automatic scale-out based on real-world load)
Baked into the web app is the ability to scale up/down or scale out. Depending on the usage of the web
app, you can scale your app up/down by increasing/decreasing the resources of the underlying machine
that is hosting your web app. Resources can be the number of cores or the amount of RAM available.
Scaling out, on the other hand, is the ability to increase the number of machine instances that are
running your web app.
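Both operations act on the App Service plan rather than on the app itself. A minimal Azure CLI sketch, assuming a plan named my-plan in resource group my-rg (placeholder names):

    # Scale up: move the plan to a larger pricing tier (more cores and RAM per instance)
    az appservice plan update --name my-plan --resource-group my-rg --sku S2

    # Scale out: run the apps in the plan on three VM instances
    az appservice plan update --name my-plan --resource-group my-rg --number-of-workers 3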

Azure App Service plans


In App Service, an app runs in an App Service plan. An App Service plan defines a set of compute resourc-
es for a web app to run. One or more apps can be configured to run on the same computing resources
(or in the same App Service plan).
When you create an App Service plan in a certain region (for example, West Europe), a set of compute
resources is created for that plan in that region. Whatever apps you put into this App Service plan run on
these compute resources as defined by your App Service plan. Each App Service plan defines:
●● Region (West US, East US, etc.)
●● Number of VM instances
●● Size of VM instances (Small, Medium, Large)
●● Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, Isolated, Consumption)
The pricing tier of an App Service plan determines what App Service features you get and how much you
pay for the plan. There are a few categories of pricing tiers:
●● Shared compute: Free and Shared, the two base tiers, run an app on the same Azure VM as other
App Service apps, including apps of other customers. These tiers allocate CPU quotas to each app that
runs on the shared resources, and the resources cannot scale out.
●● Dedicated compute: The Basic, Standard, Premium, and PremiumV2 tiers run apps on dedicated
Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher
the tier, the more VM instances are available to you for scale-out.
●● Isolated: This tier runs dedicated Azure VMs on dedicated Azure Virtual Networks, which provides
network isolation on top of compute isolation to your apps. It provides the maximum scale-out
capabilities.
●● Consumption: This tier is only available to function apps. It scales the functions dynamically depend-
ing on workload.
✔️ Note: App Service Free and Shared (preview) hosting plans are base tiers that run on the same Azure
VM as other App Service apps. Some apps may belong to other customers. These tiers are intended to be
used only for development and testing purposes.
Each tier also provides a specific subset of App Service features. These features include custom domains
and SSL certificates, autoscaling, deployment slots, backups, Traffic Manager integration, and more. The
higher the tier, the more features are available. To find out which features are supported in each pricing
tier, see App Service plan details1.
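For reference, the following Azure CLI sketch creates a plan in a given region and pricing tier and then places a web app in it; the plan name, app name, resource group, and region are placeholders:

    # Create an App Service plan in the Standard (S1) tier in West Europe
    az appservice plan create --name my-plan --resource-group my-rg --location westeurope --sku S1

    # Create a web app that runs in that plan
    az webapp create --name my-web-app --resource-group my-rg --plan my-plan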

How does my app run and scale?


In the Free and Shared tiers, an app receives CPU minutes on a shared VM instance and cannot scale out.
In other tiers, an app runs and scales as follows.
When you create an app in App Service, it is put into an App Service plan. When the app runs, it runs on
all the VM instances configured in the App Service plan. If multiple apps are in the same App Service
plan, they all share the same VM instances. If you have multiple deployment slots for an app, all deploy-
ment slots also run on the same VM instances. If you enable diagnostic logs, perform backups, or run
WebJobs, they also use CPU cycles and memory on these VM instances.
In this way, the App Service plan is the scale unit of the App Service apps. If the plan is configured to run
five VM instances, then all apps in the plan run on all five instances. If the plan is configured for autoscal-
ing, then all apps in the plan are scaled out together based on the autoscale settings.

What if my app needs more capabilities or features?


Your App Service plan can be scaled up and down at any time. It is as simple as changing the pricing tier
of the plan. You can choose a lower pricing tier at first and scale up later when you need more App
Service features. The same works in the reverse. When you feel you no longer need the capabilities or
features of a higher tier, you can scale down to a lower tier, which saves you money.

1 https://azure.microsoft.com/pricing/details/app-service/plans/

If your app is in the same App Service plan with other apps, you may want to improve the app's perfor-
mance by isolating the compute resources. You can do it by moving the app into a separate App Service
plan.
Since you pay for the computing resources your App Service plan allocates, you can potentially save
money by putting multiple apps into one App Service plan. However, keep in mind that apps in the same
App Service plan all share the same compute resources. To determine whether the new app has the
necessary resources, you need to understand the capacity of the existing App Service plan, and the
expected load for the new app.
Isolate your app into a new App Service plan when:
●● The app is resource-intensive.
●● You want to scale the app independently from the other apps in the existing plan.
●● The app needs resources in a different geographical region.
This way you can allocate a new set of resources for your app and gain greater control of your apps.

Deploy code to App Service


Automated deployment
Automated deployment, or continuous integration, is a process used to push out new features and bug
fixes in a fast and repetitive pattern with minimal impact on end users.
Azure supports automated deployment directly from several sources. The following options are available:
●● Azure DevOps: You can push your code to Azure DevOps (previously known as Visual Studio Team
Services), build your code in the cloud, run the tests, generate a release from the code, and finally,
push your code to an Azure Web App.
●● GitHub: Azure supports automated deployment directly from GitHub. When you connect your GitHub
repository to Azure for automated deployment, any changes you push to your production branch on
GitHub will be automatically deployed for you.
●● Bitbucket: With its similarities to GitHub, you can configure an automated deployment with Bitbuck-
et.

Manual deployment
There are a few options that you can use to manually push your code to Azure:
●● Git: App Service web apps feature a Git URL that you can add as a remote repository. Pushing to the
remote repository will deploy your app.
●● CLI: webapp up is a feature of the az command-line interface that packages your app and deploys it.
Unlike other deployment methods, az webapp up can create a new App Service web app for you if
you haven't already created one.
●● Zipdeploy: Use curl or a similar HTTP utility to send a ZIP of your application files to App Service.
●● Visual Studio: Visual Studio features an App Service deployment wizard that can walk you through
the deployment process.
●● FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting environments, including
App Service.
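As a rough illustration of the CLI and Zipdeploy options above, the following sketch uses placeholder names for the app, resource group, and ZIP package:

    # az webapp up packages the app in the current folder and deploys it,
    # creating the web app (and a plan) if they do not already exist
    az webapp up --name my-web-app --resource-group my-rg

    # Zip deploy: push a prebuilt ZIP of the application files to App Service
    az webapp deployment source config-zip --name my-web-app --resource-group my-rg --src ./app.zip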

Authentication and authorization in Azure App Service
Azure App Service provides built-in authentication and authorization support, so you can sign in users
and access data by writing minimal or no code in your web app, API, and mobile back end, and also
Azure Functions.
Secure authentication and authorization require deep understanding of security, including federation,
encryption, JSON web tokens (JWT) management, grant types, and so on. App Service provides these util-
ities so that you can spend more time and energy on providing business value to your customer.
✔️ Note: You're not required to use App Service for authentication and authorization. Many web frame-
works are bundled with security features, and you can use them if you like. If you need more flexibility
than App Service provides, you can also write your own utilities.

How it works
The authentication and authorization module runs in the same sandbox as your application code. When
it's enabled, every incoming HTTP request passes through it before being handled by your application
code. This module handles several things for your app:
●● Authenticates users with the specified provider
●● Validates, stores, and refreshes tokens
●● Manages the authenticated session
●● Injects identity information into request headers
The module runs separately from your application code and is configured using app settings. No SDKs,
specific languages, or changes to your application code are required.
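For example, the module can be turned on for an existing web app with the Azure CLI. This is a hedged sketch only: it assumes an Azure AD app registration already exists, and my-web-app, my-rg, and the client ID are placeholders (the exact parameters vary with the authentication version your app uses):

    # Enable App Service authentication and require Azure AD sign-in
    az webapp auth update --name my-web-app --resource-group my-rg \
        --enabled true \
        --action LoginWithAzureActiveDirectory \
        --aad-client-id <application-client-id>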

User claims
For all language frameworks, App Service makes the user's claims available to your code by injecting
them into the request headers. For ASP.NET 4.6 apps, App Service populates ClaimsPrincipal.Current
with the authenticated user's claims, so you can follow the standard .NET code pattern, including the
[Authorize] attribute. Similarly, for PHP apps, App Service populates the _SERVER['REMOTE_USER']
variable.
For Azure Functions, ClaimsPrincipal.Current is not hydrated for .NET code, but you can still find
the user claims in the request headers.

Token store
App Service provides a built-in token store, which is a repository of tokens that are associated with the
users of your web apps, APIs, or native mobile apps. When you enable authentication with any provider,
this token store is immediately available to your app. Your application code might need to access data
from these providers on the user's behalf, for example to:
●● post to the authenticated user's Facebook timeline
●● read the user's corporate data from the Azure Active Directory Graph API or even the Microsoft Graph

You typically must write code to collect, store, and refresh these tokens in your application. With the
token store, you just retrieve the tokens when you need them and tell App Service to refresh them when
they become invalid.
The ID tokens, access tokens, and refresh tokens are cached for the authenticated session, and they're
accessible only by the associated user.
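For example, a signed-in client can read its cached tokens from the built-in /.auth/me endpoint and ask App Service to refresh them via /.auth/refresh. The sketch below uses curl; the host name and cookie value are placeholders, and the request must carry the App Service authentication cookie (or token) from an authenticated session:

    # Retrieve the cached tokens for the signed-in user
    curl https://my-web-app.azurewebsites.net/.auth/me -b "AppServiceAuthSession=<cookie-value>"

    # Ask App Service to refresh the cached access token
    curl https://my-web-app.azurewebsites.net/.auth/refresh -b "AppServiceAuthSession=<cookie-value>"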

Logging and tracing


If you enable application logging, you will see authentication and authorization traces directly in your log
files. If you see an authentication error that you didn’t expect, you can conveniently find all the details by
looking in your existing application logs. If you enable failed request tracing, you can see exactly what
role the authentication and authorization module may have played in a failed request. In the trace logs,
look for references to a module named EasyAuthModule_32/64.

Identity providers
App Service uses federated identity, in which a third-party identity provider manages the user identities
and authentication flow for you. Five identity providers are available by default:

Provider Sign-in endpoint


Azure Active Directory /.auth/login/aad
Microsoft Account /.auth/login/microsoftaccount
Facebook /.auth/login/facebook
Google /.auth/login/google
Twitter /.auth/login/twitter

When you enable authentication and authorization with one of these providers, its sign-in endpoint is
available for user authentication and for validation of authentication tokens from the provider. You can
provide your users with any number of these sign-in options with ease. You can also integrate another
identity provider or your own custom identity solution.
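Enabling a provider can also be scripted. The following Azure CLI sketch turns on App Service authentication and registers Azure Active Directory as the provider; <groupname>, <appname>, and <application-id> are placeholders, and the exact flag names can vary between Azure CLI versions.
# Enable App Service authentication with Azure AD as the provider (illustrative)
az webapp auth update \
--resource-group <groupname> \
--name <appname> \
--enabled true \
--action LoginWithAzureActiveDirectory \
--aad-client-id <application-id>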

Authentication flow
The authentication flow is the same for all providers, but differs depending on whether you want to sign
in with the provider's SDK:
●● Without provider SDK: The application delegates federated sign-in to App Service. This is typically
the case with browser apps, which can present the provider's login page to the user. The server code
manages the sign-in process, so it is also called server-directed flow or server flow. This case applies to
web apps. It also applies to native apps that sign users in using the Mobile Apps client SDK because
the SDK opens a web view to sign users in with App Service authentication.
●● With provider SDK: The application signs users in to the provider manually and then submits the
authentication token to App Service for validation. This is typically the case with browser-less apps,
which can't present the provider's sign-in page to the user. The application code manages the sign-in
process, so it is also called client-directed flow or client flow. This case applies to REST APIs, Azure
Functions, and JavaScript browser clients, as well as web apps that need more flexibility in the sign-in
process. It also applies to native mobile apps that sign users in using the provider's SDK.

✔️ Note: Calls from a trusted browser app in App Service to another REST API in App Service or Azure
Functions can be authenticated using the server-directed flow. For more information, see Customize
authentication and authorization in App Service2.
The steps of the authentication flow are listed below for both cases.
1. Sign user in
●● Without provider SDK: App Service redirects the client to /.auth/login/<provider>.
●● With provider SDK: Client code signs the user in directly with the provider's SDK and receives an authentication token. For information, see the provider's documentation.
2. Post-authentication
●● Without provider SDK: The provider redirects the client to /.auth/login/<provider>/callback.
●● With provider SDK: Client code posts the token from the provider to /.auth/login/<provider> for validation.
3. Establish authenticated session
●● Without provider SDK: App Service adds an authenticated cookie to the response.
●● With provider SDK: App Service returns its own authentication token to the client code.
4. Serve authenticated content
●● Without provider SDK: The client includes the authentication cookie in subsequent requests (automatically handled by the browser).
●● With provider SDK: Client code presents the authentication token in the X-ZUMO-AUTH header (automatically handled by Mobile Apps client SDKs).
For client browsers, App Service can automatically direct all unauthenticated users to /.auth/
login/<provider>. You can also present users with one or more /.auth/login/<provider> links
to sign in to your app using their provider of choice.

Authorization behavior
In the Azure portal, you can configure App Service authorization with a number of behaviors:
1. Allow Anonymous requests (no action): This option defers authorization of unauthenticated traffic
to your application code. For authenticated requests, App Service also passes along authentication
information in the HTTP headers. This option provides more flexibility in handling anonymous
requests. It lets you present multiple sign-in providers to your users.
2. Allow only authenticated requests: The option is Log in with <provider>. App Service redirects all
anonymous requests to /.auth/login/<provider> for the provider you choose. If the anonymous
request comes from a native mobile app, the returned response is an HTTP 401 Unauthorized.
With this option, you don't need to write any authentication code in your app.
Caution: Restricting access in this way applies to all calls to your app, which may not be desirable for
apps wanting a publicly available home page, as in many single-page applications.
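The same behaviors can be set from the command line. The sketch below assumes the az webapp auth update command; <groupname> and <appname> are placeholders, and flag names can differ between Azure CLI versions.
# Defer authorization to your application code (anonymous requests allowed):
az webapp auth update --resource-group <groupname> --name <appname> --action AllowAnonymous

# Redirect all anonymous requests to the Azure Active Directory sign-in endpoint:
az webapp auth update --resource-group <groupname> --name <appname> --action LoginWithAzureActiveDirectory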

2 https://docs.microsoft.com/en-us/azure/app-service/app-service-authentication-how-to

OS and runtime patching in Azure App Service


How and when are OS updates applied?
Azure manages OS patching on two levels, the physical servers and the guest virtual machines (VMs) that
run the App Service resources. Both are updated monthly, which aligns to the monthly Patch Tuesday3
schedule. These updates are applied automatically, in a way that guarantees the high-availability SLA of
Azure services.

How does Azure deal with significant vulnerabilities?


When severe vulnerabilities require immediate patching, such as zero-day vulnerabilities, the high-priority
updates are handled on a case-by-case basis. Stay current with critical security announcements in Azure
by visiting Azure Security Blog4.

When are supported language runtimes updated, added, or deprecated?
New stable versions of supported language runtimes (major, minor, or patch) are periodically added to
App Service instances. Some updates overwrite the existing installation, while others are installed side by
side with existing versions. An overwrite installation means that your app automatically runs on the
updated runtime. A side-by-side installation means you must manually migrate your app to take
advantage of a new runtime version.
✔️ Note: Information here applies to language runtimes that are built into an App Service app. A custom
runtime you upload to App Service, for example, remains unchanged unless you manually upgrade it.

New patch updates


Patch updates to the .NET, PHP, Java SDK, or Tomcat/Jetty versions are applied automatically by overwriting
the existing installation with the new version. Node.js patch updates are installed side by side with the
existing versions (similar to major and minor versions in the next section). New Python patch versions can
be installed manually through site extensions, side by side with the built-in Python installations.

New major and minor versions


When a new major or minor version is added, it is installed side by side with the existing versions. You can
manually upgrade your app to the new version. If you configured the runtime version in a configuration
file (such as web.config or package.json), you need to upgrade with the same method. If you used
an App Service setting to configure your runtime version, you can change it in the Azure portal or by
running an Azure CLI command in the Cloud Shell, as shown in the following examples:
az webapp config set \
--net-framework-version v4.7 \
--resource-group <groupname> \
--name <appname>

3 https://technet.microsoft.com/security/bulletins.aspx
4 https://azure.microsoft.com/blog/topics/security/

az webapp config appsettings set \
--settings WEBSITE_NODE_DEFAULT_VERSION=8.9.3 \
--resource-group <groupname> \
--name <appname>

Deprecated versions
When an older version is deprecated, the removal date is announced so that you can plan your runtime
version upgrade accordingly.
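To see which runtime versions App Service currently offers before planning an upgrade, you can query the CLI. This is a sketch only; depending on your Azure CLI version, the Linux option may be --linux or --os-type linux.
# List the Windows runtime stacks offered by App Service
az webapp list-runtimes

# List the built-in Linux images
az webapp list-runtimes --linux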

App Service networking features


There are two primary deployment types for Azure App Service. There is the multi-tenant public
service, which hosts App Service plans in the Free, Shared, Basic, Standard, Premium, and Premiumv2
pricing SKUs. Then there is the single-tenant App Service Environment (ASE), which hosts Isolated SKU
App Service plans directly in your Azure Virtual Network (VNet). The features you use will vary depending
on whether you are in the multi-tenant service or in an ASE.

Multi-tenant App Service networking features


The Azure App Service is a distributed system. The roles that handle incoming HTTP/HTTPS requests are
called front-ends. The roles that host the customer workload are called workers. All of the roles in an App
Service deployment exist in a multi-tenant network. Because there are many different customers in the
same App Service scale unit, you cannot connect the App Service network directly to your network.
Instead of connecting the networks, we need features to handle the different aspects of application
communication. The features that handle requests TO your app can't be used to solve problems when
making calls FROM your app. Likewise, the features that solve problems for calls FROM your app can't be
used to solve problems TO your app.

Inbound features:
●● App assigned address
●● Access Restrictions
●● Service Endpoints

Outbound features:
●● Hybrid Connections
●● Gateway required VNet Integration
●● VNet Integration (preview)

Default networking behavior


The Azure App Service scale units support many customers in each deployment. The Free and Shared
SKU plans host customer workloads on multi-tenant workers. The Basic and above plans host customer
workloads that are dedicated to only one App Service plan (ASP). If you had a Standard App Service plan,
then all of the apps in that plan run on the same worker. If you scale out the worker, then all of the
apps in that ASP are replicated on a new worker for each instance in your ASP. The workers that are
used for Premiumv2 are different from the workers used for the other plans.
Each App Service deployment has one IP address that is used for all of the inbound traffic to the apps in
that App Service deployment. There are, however, anywhere from 4 to 11 addresses used for making
outbound calls. These addresses are shared by all of the apps in that App Service deployment. The
outbound addresses differ based on the worker type. That means that the addresses
used by the Free, Shared, Basic, Standard, and Premium ASPs are different from the addresses used for
outbound calls from the Premiumv2 ASPs. The inbound and outbound addresses used by your app are
viewable in the app properties.
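As a sketch of how to read those values without the portal, the properties below (inboundIpAddress and outboundIpAddresses) are the ones assumed to be returned by az webapp show; <groupname> and <appname> are placeholders.
# Query the inbound and outbound addresses for an app
az webapp show \
--resource-group <groupname> \
--name <appname> \
--query "{inbound: inboundIpAddress, outbound: outboundIpAddresses}" \
--output json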

Inbound and outbound IP addresses in Azure App Service
When inbound IP changes
Regardless of the number of scaled-out instances, each app has a single inbound IP address. The inbound
IP address may change when you perform one of the following actions:
●● Delete an app and recreate it in a different resource group.
●● Delete the last app in a resource group and region combination and recreate it.
●● Delete an existing SSL binding, such as during certificate renewal.

Get static inbound IP


Sometimes you might want a dedicated, static IP address for your app. To get a static inbound IP address,
you need to configure an IP-based SSL binding. If you don't actually need SSL functionality to secure your
app, you can even upload a self-signed certificate for this binding. In an IP-based SSL binding, the
certificate is bound to the IP address itself, so App Service provisions a static IP address to make it
happen.

When outbound IPs change


Regardless of the number of scaled-out instances, each app has a set number of outbound IP addresses
at any given time. Any outbound connection from the App Service app, such as to a back-end database,
uses one of the outbound IP addresses as the origin IP address. You can't know beforehand which IP
address a given app instance will use to make the outbound connection, so your back-end service must
open its firewall to all the outbound IP addresses of your app.
The set of outbound IP addresses for your app changes when you scale your app between the lower tiers
(Basic, Standard, and Premium) and the Premium V2 tier.
You can find the set of all possible outbound IP addresses your app can use, regardless of pricing tiers, by
looking for the possibleOutboundIPAddresses property.

Find outbound IPs


To find the outbound IP addresses currently used by your app in the Azure portal, click Properties in your
app's left-hand navigation.
You can find the same information by running the following command in the Cloud Shell. They are listed
in the Additional Outbound IP Addresses field.
az webapp show \
--resource-group <group_name> \
--name <app_name> \
--query outboundIpAddresses \
--output tsv

To find all possible outbound IP addresses for your app, regardless of pricing tiers, run the following
command in the Cloud Shell.

az webapp show \
--resource-group <group_name> \
--name <app_name> \
--query possibleOutboundIpAddresses \
--output tsv

Controlling App Service traffic by using Azure Traffic Manager
You can use Azure Traffic Manager to control how requests from web clients are distributed to apps in
Azure App Service. When App Service endpoints are added to an Azure Traffic Manager profile, Azure
Traffic Manager keeps track of the status of your App Service apps (running, stopped, or deleted) so that
it can decide which of those endpoints should receive traffic.

Routing methods
Azure Traffic Manager uses four different routing methods. These methods are described in the following
list as they pertain to Azure App Service.
●● Priority: use a primary app for all traffic, and provide backups in case the primary or the backup apps
are unavailable.
●● Weighted: distribute traffic across a set of apps, either evenly or according to weights, which you
define.
●● Performance: when you have apps in different geographic locations, use the “closest” app in terms of
the lowest network latency.
●● Geographic: direct users to specific apps based on which geographic location their DNS query
originates from.
For more information, see Traffic Manager routing methods5.

App Service and Traffic Manager Profiles


To configure the control of App Service app traffic, you create a profile in Azure Traffic Manager that uses
one of the routing methods described previously, and then add the endpoints (in this case,
App Service apps) for which you want to control traffic to the profile. Your app status (running, stopped, or
deleted) is regularly communicated to the profile so that Azure Traffic Manager can direct traffic
accordingly.
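As an illustrative sketch, the profile and endpoint can be created with the Azure CLI. The names and DNS prefix below are placeholders, and the example assumes the app runs in a tier that Traffic Manager supports.
# Create a Priority-routed Traffic Manager profile
az network traffic-manager profile create \
--resource-group <groupname> \
--name <profile-name> \
--routing-method Priority \
--unique-dns-name <dns-prefix>

# Add an App Service app as an Azure endpoint in the profile
az network traffic-manager endpoint create \
--resource-group <groupname> \
--profile-name <profile-name> \
--name <endpoint-name> \
--type azureEndpoints \
--target-resource-id $(az webapp show --resource-group <groupname> --name <appname> --query id --output tsv)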

Introduction to Azure App Service on Linux


Customers can use App Service on Linux to host web apps natively on Linux for supported application
stacks. The Languages section lists the application stacks that are currently supported.

5 https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-routing-methods

Languages
App Service on Linux supports a number of Built-in images in order to increase developer productivity. If
the runtime your application requires is not supported in the built-in images, you can build your own Docker
image to deploy to Web App for Containers. Creating Docker images is covered later in the course.

Language Supported Versions
Node.js 4.4, 4.5, 4.8, 6.2, 6.6, 6.9, 6.10, 6.11, 8.0, 8.1, 8.2, 8.8, 8.9, 8.11, 8.12, 9.4, 10.1, 10.10, 10.14
Java Tomcat 8.5, 9.0, Java SE, WildFly 14 (all running JRE 8)
PHP 5.6, 7.0, 7.2, 7.3
Python 2.7, 3.6, 3.7
.NET Core 1.0, 1.1, 2.0, 2.1, 2.2
Ruby 2.3, 2.4, 2.5, 2.6

Currently supported features


●● Deployments
    ●● FTP
    ●● Local Git
    ●● GitHub
    ●● Bitbucket
    ●● DevOps
●● Staging environments
●● Azure Container Registry and DockerHub CI/CD
●● Console, Publishing, and Debugging
    ●● Environments
    ●● Deployments
    ●● Basic console
    ●● SSH
●● Scaling
    ●● Customers can scale web apps up and down by changing the tier of their App Service plan
●● Locations
    ●● Check the Azure Status Dashboard: https://azure.microsoft.com/status



Limitations
App Service on Linux is only supported with Free, Basic, Standard, and Premium app service plans and
does not have a Shared tier. You cannot create a Linux Web App in an App Service plan already hosting
non-Linux Web Apps.
Based on a current limitation, for the same resource group you cannot mix Windows and Linux apps in
the same region.

Troubleshooting
When your application fails to start or you want to check the logging from your app, check the Docker
logs in the LogFiles directory. You can access this directory either through your SCM site or via FTP. To log
the stdout and stderr from your container, you need to enable Docker Container logging under
App Service Logs. The setting takes effect immediately. App Service detects the change and restarts the
container automatically.
You can access the SCM site from Advanced Tools in the Development Tools menu.
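A minimal sketch of enabling and watching container logs from the Azure CLI follows; flag names may differ slightly across CLI versions, and <groupname> and <appname> are placeholders.
# Write the container's stdout/stderr to the App Service file system
az webapp log config \
--resource-group <groupname> \
--name <appname> \
--docker-container-logging filesystem

# Stream the log output live
az webapp log tail --resource-group <groupname> --name <appname>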

Creating an Azure App Service Web App


Demo: Create a Web App by using the Azure Portal
In this demo you will learn how to create a Web App by using the Azure Portal.

Create a web app


1. Sign in to the Azure portal6.
2. Select the Create a resource link at the top of the left-hand navigation.
3. Select Web > Web App to display the web app creation wizard.
4. Fill out the following fields in each of the sections on the wizard:
●● Project Details
●● Subscription: Select the Azure subscription you are using for this class.
●● Resource Group: Create a new resource group to make it easier to clean up the resources
later.
●● Instance Details
●● Name: The name you choose must be unique among all Azure web apps. This name will be
part of the app's URL: appname.azurewebsites.net.
●● Publish: Select Code for this demo.
●● Runtime Stack: Select .NET Core 3.0. Your choice here may affect whether you have a choice
of operating system - for some runtime stacks, App Service supports only one operating
system.
●● Operating System: Keep Windows selected here; it's the default when you select .NET Core
3.0 above.
●● Region: Keep the default selection.
●● App Service Plan
●● Windows Service Plan: Leave the default selection. By default, the wizard will create a new
plan in the same region as the web app.
●● SKU and size: Select F1. To select the F1 tier, select Change size to open the Spec Picker
wizard. On the Dev / Test tab, select F1 from the list, then select Apply.
5. Navigate to the Monitoring tab at the top of the page and toggle Enable Application Insights to
No.
6. Select Review and Create to navigate to the review page, then select Create to create the app. The
portal will display the deployment page, where you can view the status of your deployment.

6 http://portal.azure.com

7. Once the app is ready you can select the Go to resource button and the portal will display the web
app overview page. To preview your new web app's default content, select its URL at the top right. The
placeholder page that loads indicates that your web app is up and running and ready to receive
deployment of your app's code.

Clean up resources
1. In the Azure Portal select Resource groups.
2. Right-click on the resource group you created above and select Delete resource group. You will be
prompted to enter the resource group name to verify you want to delete it. Enter the name of the
resource group and select Delete.

Demo: Create a static HTML web app by using Azure Cloud Shell
In this demo you'll learn how to perform the following actions:
●● Deploy a basic HTML+CSS site to Azure App Service by using the az webapp up command
●● Update and redeploy the app
The az webapp up command does the following actions:
●● Create a default resource group.
●● Create a default app service plan.
●● Create an app with the specified name.
●● Zip deploy files from the current working directory to the web app.

Prerequisites
This demo is performed in the Cloud Shell using the Bash environment.

Login to Azure
1. Log in to the Azure portal7 and open the Cloud Shell.
2. Be sure to select the Bash environment.

Download the sample


1. In the Cloud Shell, create a demo directory and then change to it.
mkdir demoHTML

cd $HOME/demoHTML

2. Run the following command to clone the sample app repository to your demoHTML directory.
git clone https://github.com/Azure-Samples/html-docs-hello-world.git

Create the web app


1. Change to the directory that contains the sample code and run the az webapp up command. In the
following example, replace <app_name> with a unique app name, and <region> with a region near
you.
cd html-docs-hello-world

az webapp up --location <region> --name <app_name>

This command may take a few minutes to run. While running, it displays information similar to the
example below. Make a note of the resourceGroup value. You need it for the clean up resources
section.
{
"app_url": "https://<app_name>.azurewebsites.net",
"location": "westeurope",
"name": "<app_name>",
"os": "Windows",
"resourcegroup": "appsvc_rg_Windows_westeurope",
"serverfarm": "appsvc_asp_Windows_westeurope",
"sku": "FREE",
"src_path": "/home/<username>/demoHTML/html-docs-hello-world ",
< JSON data removed for brevity. >
}

2. Open a browser and navigate to the app URL (http://<app_name>.azurewebsites.net) and
verify the app is running. Leave the browser open on the app for the next section.

7 https://portal.azure.com

Update and redeploy the app


1. In the Cloud Shell, type nano index.html to open the nano text editor. In the <h1> heading tag,
change “Azure App Service - Sample Static HTML Site” to "Azure App Service".
2. Use the commands ^O to save and ^X to exit.
3. Redeploy the app with the same az webapp up command. Be sure to use the same region and app_
name as you used earlier.
az webapp up --location <region> --name <app_name>

4. Once deployment is completed switch back to the browser from step 2 in the “Create the web app”
section above and refresh the page.

Clean up resources
1. After completing the demo you can delete the resources you created using the resource group name
you noted in step 1 of the “Create the web app” section above.
az group delete --name <resource_group> --no-wait

Demo: Create a Web App with a local Git deployment source
This demo shows you how to deploy your app to Azure App Service from a local Git repository by using
the Kudu build server and the Azure CLI.

Download sample code and launch Azure CLI


1. To download a sample repository, run the following command in your Git Bash window:
git clone https://github.com/Azure-Samples/html-docs-hello-world.git

Later in the demo you'll be entering more commands in the Git Bash window so be sure to leave it
open.
2. Launch the Azure Cloud Shell and be sure to select the Bash environment.
●● You can either launch the Cloud Shell through the portal (https://portal.azure.com),
or by launching the shell directly (https://shell.azure.com).

Create the web app


In the Cloud Shell run the following commands to create the web app and the necessary resources:
1. Create a resource group:
az group create --location <MyLocation> --name <MyResourceGroup>

2. Create an app service plan:


az appservice plan create --name <MyPlan> --resource-group <MyResourceGroup>

3. Create the web app:


az webapp create --name <MyUniqueApp> --resource-group <MyResourceGroup> --plan <MyPlan> --deployment-local-git

Deploy with Kudu build server


We'll be configuring and using the Kudu build server for deployments in this demo. FTP and local Git can
deploy to an Azure web app by using a deployment user. Once you configure your deployment user, you
can use it for all your Azure deployments. Your account-level deployment username and password are
different from your Azure subscription credentials.
The first two steps in this section are performed in the Cloud Shell, the third is performed in the local Git
Bash window.
1. Configure a deployment user.
az webapp deployment user set \
--user-name <username> \
--password <password>

●● The username must be unique within Azure, and for local Git pushes, must not contain the ‘@’
symbol.
●● The password must be at least eight characters long, with two of the following three elements:
letters, numbers, and symbols.
●● The JSON output shows the password as null. If you get a 'Conflict'. Details: 409 error,
change the username. If you get a 'Bad Request'. Details: 400 error, use a stronger
password.
Record your username and password to use to deploy your web apps.
2. Get the web app deployment URL, the deployment URL is used in the Git Bash window to connect
your local Git repository to the web app:
az webapp deployment source config-local-git --name <MyUniqueApp> --resource-group <MyResourceGroup>

The command will return JSON similar to the example below, you'll use the URL in the Git Bash
window in the next step.
{
"url": "https://<deployment-user>@<MyUniqueApp>.scm.azurewebsites.net/<MyUniqueApp>.git"
}

3. Deploy the web app:


This step is performed in the local Git Bash window left open from earlier in the demo.
Use the cd command to change into the directory where the html-docs-hello-world repository was
cloned.
Use the following command to connect the repository to the Web App:
git remote add azure <url>

Use the following command to push to the Azure remote:


git push azure master

What happens to my app during deployment?


All the officially supported deployment methods make changes to the files in the /home/site/wwwroot
folder of your app. These files are the same ones that are run in production. Therefore, the deployment
can fail because of locked files. The app in production may also behave unpredictably during
deployment, because not all the files are updated at the same time. There are a few different ways to avoid
these issues:
●● Stop your app or enable offline mode for your app during deployment.
●● Deploy to a staging slot with auto swap enabled.
●● Use Run From Package instead of continuous deployment.

Verify results
In the Azure Portal navigate to the web app you created above:
1. In the Overview section select the URL to verify the app was deployed successfully.
2. Select Deployment Center to view deployment information.
From here you can make changes to the code in the local repository and push the changes to the web app.

Clean up resources
In the Cloud Shell use the following command to delete the resource group and the resources it contains.
The --no-wait portion of the command will return you to the Bash prompt quickly without showing
you the results of the command. You can confirm the resource group was deleted in the Azure Portal.
az group delete --name <MyResourceGroup> --no-wait

Configuring and Monitoring App Service apps


Configure an App Service app in the Azure portal
In App Service, app settings are variables passed as environment variables to the application code. For
Linux apps and custom containers, App Service passes app settings to the container using the --env flag
to set the environment variable in the container.
Application settings can be accessed by navigating to your app's management page. From there select
Configuration > Application Settings.
For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in
<appSettings> in Web.config or appsettings.json, but the values in App Service override the ones in
Web.config or appsettings.json. You can keep development settings (for example, a local MySQL password)
in Web.config or appsettings.json, and keep production secrets (for example, an Azure MySQL database password)
safe in App Service. The same code uses your development settings when you debug locally, and it uses
your production secrets when deployed to Azure.
App settings are always encrypted when stored (encrypted-at-rest).

Adding and editing settings


To add a new app setting, click New application setting. In the dialog, you can make the setting stick to the
current slot (a deployment slot setting).
To edit a setting, click the Edit button on the right side.
When finished, click Update. Don't forget to click Save back in the Configuration page.
✔️ Note: In a default Linux container or a custom Linux container, any nested JSON key structure in the
app setting name like ApplicationInsights:InstrumentationKey needs to be configured in App
Service as ApplicationInsights__InstrumentationKey for the key name. In other words, any :
should be replaced by __ (double underscore).
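App settings can also be managed from the Azure CLI instead of the portal. The sketch below uses placeholder names and values; note the double-underscore form from the note above for nested keys on Linux.
# Add or update app settings for the app (placeholders shown)
az webapp config appsettings set \
--resource-group <groupname> \
--name <appname> \
--settings MySetting=<value> ApplicationInsights__InstrumentationKey=<value>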

Editing in bulk
To add or edit app settings in bulk, click the Advanced edit button. When finished, click Update. App
settings have the following JSON formatting:
[
{
"name": "<key-1>",
"value": "<value-1>",
"slotSetting": false
},
{
"name": "<key-2>",
"value": "<value-2>",
"slotSetting": false
},
...
]

Configure general settings and default documents
In the Azure portal, navigate to the app's management page. In the app's left menu, click Configuration
> General settings. Here, you can configure some common settings for the app. Some settings require
you to scale up to higher pricing tiers.
●● Stack settings: The software stack to run the app, including the language and SDK versions. For Linux
apps and custom container apps, you can also set an optional start-up command or file.
●● Platform settings: Lets you configure settings for the hosting platform, including:
●● Bitness: 32-bit or 64-bit.
●● WebSocket protocol: For ASP.NET SignalR or socket.io, for example.
●● Always On: Keep the app loaded even when there's no traffic. It's required for continuous WebJobs
or for WebJobs that are triggered using a CRON expression.
●● Managed pipeline version: The IIS pipeline mode. Set it to Classic if you have a legacy app that
requires an older version of IIS.
●● HTTP version: Set to 2.0 to enable support for the HTTP/2 protocol.
●● ARR affinity: In a multi-instance deployment, ensure that the client is routed to the same instance for
the life of the session. You can set this option to Off for stateless applications.
●● Debugging: Enable remote debugging for ASP.NET, ASP.NET Core, or Node.js apps. This option turns
off automatically after 48 hours.
●● Incoming client certificates: Require client certificates in mutual authentication.
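Many of these general settings can also be changed with az webapp config set. The following is a sketch with placeholder names; it assumes the flags available in current Azure CLI releases.
# Turn on Always On, WebSockets, HTTP/2, and 64-bit workers (illustrative)
az webapp config set \
--resource-group <groupname> \
--name <appname> \
--always-on true \
--web-sockets-enabled true \
--http20-enabled true \
--use-32bit-worker-process false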

Configure default documents


This setting is only for Windows apps. The default document is the web page that's displayed at the root
URL for a website. The first matching file in the list is used. To add a new default document, click New
document.
If the app uses modules that route based on URL instead of serving static content, there is no need for
default documents.

Configure path mappings


The Path mappings page will display different options based on the OS type.

Windows apps (uncontainerized)


For Windows apps, you can customize the IIS handler mappings and virtual applications and directories.
Handler mappings let you add custom script processors to handle requests for specific file extensions. To
add a custom handler, click New handler. Configure the handler as follows:
●● Extension: The file extension you want to handle, such as *.php or handler.fcgi.

●● Script processor: The absolute path of the script processor to use. Requests to files that match the
file extension are processed by the script processor. Use the path D:\home\site\wwwroot to refer
to your app's root directory.
●● Arguments: Optional command-line arguments for the script processor.
Each app has the default root path (/) mapped to D:\home\site\wwwroot, where your code is
deployed by default. If your app root is in a different folder, or if your repository has more than one
application, you can edit or add virtual applications and directories here.
To configure virtual applications and directories, specify each virtual directory and its corresponding
physical path relative to the website root (D:\home). Optionally, you can select the Application check-
box to mark a virtual directory as an application.

Containerized apps
You can add custom storage for your containerized app. Containerized apps include all Linux apps and
also the Windows and Linux custom containers running on App Service. Click New Azure Storage
Mount and configure your custom storage as follows:
●● Name: The display name.
●● Configuration options: Basic or Advanced.
●● Storage accounts: The storage account with the container you want.
●● Storage type: Azure Blobs or Azure Files. Windows container apps only support Azure Files.
●● Storage container: For basic configuration, the container you want.
●● Share name: For advanced configuration, the file share name.
●● Access key: For advanced configuration, the access key.
●● Mount path: The absolute path in your container to mount the custom storage.
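The equivalent configuration can be scripted with az webapp config storage-account add. This is a sketch with placeholder values, assuming an Azure Files share.
# Mount an Azure Files share into a containerized app (placeholders shown)
az webapp config storage-account add \
--resource-group <groupname> \
--name <appname> \
--custom-id <mount-name> \
--storage-type AzureFiles \
--account-name <storage-account> \
--share-name <share-name> \
--access-key <access-key> \
--mount-path /path/in/container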

Enable diagnostics logging for apps in Azure App Service
Azure provides built-in diagnostics to assist with debugging an App Service app. In this lesson, you will
learn how to enable diagnostic logging and add instrumentation to your application, as well as how to
access the information logged by Azure.
The list below shows the types of logging, the platforms supported, and where the logs can be stored
and located for accessing the information.

●● Application logging
    ●● Platform: Windows, Linux
    ●● Location: App Service file system and/or Azure Storage blobs
    ●● Description: Logs messages generated by your application code. The messages can be generated by the web framework you choose, or from your application code directly using the standard logging pattern of your language. Each message is assigned one of the following categories: Critical, Error, Warning, Info, Debug, and Trace. You can select how verbose you want the logging to be by setting the severity level when you enable application logging.
●● Web server logging
    ●● Platform: Windows
    ●● Location: App Service file system or Azure Storage blobs
    ●● Description: Raw HTTP request data in the W3C extended log file format. Each log message includes data such as the HTTP method, resource URI, client IP, client port, user agent, response code, and so on.
●● Detailed error logging
    ●● Platform: Windows
    ●● Location: App Service file system
    ●● Description: Copies of the .htm error pages that would have been sent to the client browser. For security reasons, detailed error pages shouldn't be sent to clients in production, but App Service can save the error page each time an application error occurs that has HTTP code 400 or greater. The page may contain information that can help determine why the server returns the error code.
●● Failed request tracing
    ●● Platform: Windows
    ●● Location: App Service file system
    ●● Description: Detailed tracing information on failed requests, including a trace of the IIS components used to process the request and the time taken in each component. It's useful if you want to improve site performance or isolate a specific HTTP error. One folder is generated for each failed request, which contains the XML log file, and the XSL stylesheet to view the log file with.
●● Deployment logging
    ●● Platform: Windows, Linux
    ●● Location: App Service file system
    ●● Description: Logs for when you publish content to an app. Deployment logging happens automatically and there are no configurable settings for deployment logging. It helps you determine why a deployment failed. For example, if you use a custom deployment script, you might use deployment logging to determine why the script is failing.

Enable application logging (Windows)


1. To enable application logging for Windows apps in the Azure portal, navigate to your app and select
App Service logs.
2. Select On for either Application Logging (Filesystem) or Application Logging (Blob), or both.
3. The Filesystem option is for temporary debugging purposes, and turns itself off in 12 hours. The Blob
option is for long-term logging, and needs a blob storage container to write logs to. The Blob option
also includes additional information in the log messages, such as the ID of the origin VM instance of
the log message (InstanceId), thread ID (Tid), and a more granular timestamp (EventTickCount).
4. You can also set the Level of details included in the log as shown in the table below.

Level Included categories
Disabled None
Error Error, Critical
Warning Warning, Error, Critical
Information Info, Warning, Error, Critical
Verbose Trace, Debug, Info, Warning, Error, Critical (all categories)
5. When finished, select Save.

Enable application logging (Linux/Container)


1. In Application logging, select File System.
2. In Quota (MB), specify the disk quota for the application logs. In Retention Period (Days), set the
number of days the logs should be retained.
3. When finished, select Save.

Enable web server logging


1. For Web server logging, select Storage to store logs on blob storage, or File System to store logs
on the App Service file system.
2. In Retention Period (Days), set the number of days the logs should be retained.
3. When finished, select Save.
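Both application and web server logging can also be enabled from the CLI with az webapp log config. The sketch below uses placeholder names; depending on your Azure CLI version, --application-logging accepts either true/false or filesystem/azureblobstorage/off.
# Enable application and web server logging to the App Service file system
az webapp log config \
--resource-group <groupname> \
--name <appname> \
--application-logging filesystem \
--level information \
--web-server-logging filesystem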

Add log messages in code


In your application code, you use the usual logging facilities to send log messages to the application
logs. For example:
●● ASP.NET applications can use the System.Diagnostics.Trace class to log information to the
application diagnostics log. For example:
System.Diagnostics.Trace.TraceError("If you're seeing this, something bad happened");

●● By default, ASP.NET Core uses the Microsoft.Extensions.Logging.AzureAppServices logging provider.

Stream logs
Before you stream logs in real time, enable the log type that you want. Any information written to files
ending in .txt, .log, or .htm that are stored in the /LogFiles directory (d:/home/logfiles) is streamed
by App Service.
Note: Some types of logging buffer write to the log file, which can result in out of order events in the
stream. For example, an application log entry that occurs when a user visits a page may be displayed in
the stream before the corresponding HTTP log entry for the page request.
●● Azure Portal - To stream logs in the Azure portal, navigate to your app and select Log stream.
●● Azure CLI - To stream logs live in Cloud Shell, use the following command:

az webapp log tail --name appname --resource-group myResourceGroup

●● Local console - To stream logs in the local console, install Azure CLI and sign in to your account. Once
signed in, follow the instructions for Azure CLI above.

Access log files


If you configure the Azure Storage blobs option for a log type, you need a client tool that works with
Azure Storage.
For logs stored in the App Service file system, the easiest way is to download the ZIP file in the browser
at:
●● Linux/container apps: https://<app-name>.scm.azurewebsites.net/api/logs/docker/zip
●● Windows apps: https://<app-name>.scm.azurewebsites.net/api/dump
For Linux/container apps, the ZIP file contains console output logs for both the docker host and the
docker container. For a scaled-out app, the ZIP file contains one set of logs for each instance. In the App
Service file system, these log files are the contents of the /home/LogFiles directory.
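As an illustrative sketch, the ZIP can also be downloaded from a script by authenticating against the SCM (Kudu) site with your deployment credentials; the URL shown is the Windows endpoint from the list above, and the user name and password are placeholders.
# Download the log ZIP using deployment (publishing) credentials
curl -u <deployment-user>:<password> \
--output logs.zip \
"https://<app-name>.scm.azurewebsites.net/api/dump"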

Scaling App Service apps


Common autoscale patterns
Scale based on CPU
You have a web app (or virtual machine scale set, or cloud service role) and:
●● You want to scale out/scale in based on CPU.
●● Additionally, you want to ensure there is a minimum number of instances.
●● Also, you want to ensure that you set a maximum limit to the number of instances you can scale to.
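A CPU-based rule set like this can be expressed with the az monitor autoscale commands. The sketch below targets an App Service plan and uses placeholder names; CpuPercentage is the plan-level metric name assumed here.
# Create the autoscale setting with a floor of 2 and a ceiling of 10 instances
az monitor autoscale create \
--resource-group <groupname> \
--resource <plan-name> \
--resource-type Microsoft.Web/serverfarms \
--name <autoscale-name> \
--min-count 2 --max-count 10 --count 2

# Scale out by 1 instance when average CPU exceeds 70% over 10 minutes
az monitor autoscale rule create \
--resource-group <groupname> \
--autoscale-name <autoscale-name> \
--condition "CpuPercentage > 70 avg 10m" \
--scale out 1

# Scale in by 1 instance when average CPU drops below 30% over 10 minutes
az monitor autoscale rule create \
--resource-group <groupname> \
--autoscale-name <autoscale-name> \
--condition "CpuPercentage < 30 avg 10m" \
--scale in 1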

Scale differently on weekdays vs weekends


You have a web app (or virtual machine scale set, or cloud service role) and:
●● You want 3 instances by default (on weekdays)
●● You don't expect traffic on weekends and hence you want to scale down to 1 instance on weekends.

Scale differently during holidays


You have a web app (or virtual machine scale set, or cloud service role) and:
●● You want to scale up/down based on CPU usage by default
●● However, during holiday season (or specific days that are important for your business) you want to
override the defaults and have more capacity at your disposal.

Scale based on custom metric


You have a web front end and an API tier that communicates with the back end.
●● You want to scale the API tier based on custom events in the front end (for example, you want to scale
your checkout process based on the number of items in the shopping cart).

Scale up an app in Azure App Service


There are two workflows for scaling, scale up and scale out.
●● Scale up: Get more CPU, memory, disk space, and extra features like dedicated virtual machines
(VMs), custom domains and certificates, staging slots, autoscaling, and more. You scale up by changing
the pricing tier of the App Service plan that your app belongs to.
●● Scale out: Increase the number of VM instances that run your app. You can scale out to as many as 20
instances, depending on your pricing tier. Scale out is covered in the next lesson.
The scale settings take only seconds to apply and affect all apps in your App Service plan. They don't
require you to change your code or redeploy your application. For information about the pricing and
features of individual App Service plans, see App Service Pricing Details8 (https://azure.microsoft.com/pricing/details/web-sites/).
Note: The steps below show making the scale up changes in the settings for an app. Any changes made
here are also changing the scale settings in the App Service Plan associated with the app. You can make
the same changes directly in the App Service Plan settings as well.
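Scaling up can also be done from the CLI by updating the plan's SKU. A minimal sketch with placeholder names:
# Move the App Service plan to the Standard S1 tier
az appservice plan update \
--resource-group <groupname> \
--name <plan-name> \
--sku S1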

Scale up your pricing tier


1. In your browser, open the Azure portal.
2. In your App Service app page, from the left menu, select Scale Up (App Service plan).

8 https://azure.microsoft.com/pricing/details/web-sites/

3. Choose your tier, and then select Apply. Select the different categories (for example, Production) and
also See additional options to show more tiers.

When the operation is complete, you see a notification pop-up with a green success check mark.

Scale related resources


If your app depends on other services, such as Azure SQL Database or Azure Storage, you can scale up
these resources separately. These resources aren't managed by the App Service plan.
To scale up the related resource, see the documentation for the specific resource type.

Scale out an app in Azure App Service


As mentioned earlier, Scale out increases the number of VM instances that run your app. You can scale
out to as many as 20 instances, depending on your pricing tier. The scale out settings apply to all apps
in your App Service plan. You can adjust the scale out settings for your App Service plan by selecting
Scale out (App Service plan) in the left navigation pane of either the app, or the App Service plan.
There are two scaling options:
●● Manual scale: Allows you to set a fixed instance count
●● Custom autoscale: Allows you to set conditional scaling based on a schedule and/or a performance
metric which will raise, or lower, the instance count
✔️ Note: For this course we will be covering how Custom autoscale works, how the conditions are
structured in JSON, and best practices to follow when creating scaling rules. The course will not cover
the metrics and settings in detail.

Manual scale
Using the Manual scale option works best when your app is under fairly consistent loads over time.
Setting the instance count higher than your app needs means you're paying for capacity you aren't using.
And, setting the instance count lower means your app may experience periods where it isn't responsive.
Changing the instance count is straightforward: simply adjust the instance count by either dragging the
Instance count slider or entering the number manually, and then select Save.
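Manual scaling can also be done from the CLI by setting the plan's worker count. A minimal sketch with placeholder names:
# Set a fixed instance count of 3 for the App Service plan
az appservice plan update \
--resource-group <groupname> \
--name <plan-name> \
--number-of-workers 3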

Custom autoscale
To get started scaling your app based on custom metrics and/or dates, select the Custom autoscale
option on the page. When you select that option a Default autoscale condition is created for you and
that condition is is executed when none of the other scale condition(s) match. You are required to have at
least one condition in place.

Important: Always use a scale-out and scale-in rule combination that performs an increase and decrease.
If you use only one part of the combination, autoscale will only take action in a single direction (scale out,
or in) until it reaches the maximum, or minimum, instance count defined in the profile. This is not
optimal; ideally you want your resource to scale up at times of high usage to ensure availability, and at
times of low usage you want your resource to scale down, so you can realize cost savings.

Autoscale setting structure


A resource in Azure can have only one autoscale setting, and that setting can have one or more profiles
and each profile can have one or more autoscale rules.
●● Autoscale setting

●● Profile 1

●● Autoscale rule 1
●● Autoscale rule 2
●● Profile 2

●● Autoscale rule 1
Below is an example of an autoscale setting that scales out on Friday and Saturday and scales in for the
rest of the week, so it's not using any rules containing metrics to trigger scaling events. The example has
been truncated for readability.
Note the example below displays a single profile:
1. Scale-out Weekends
* capacity is set to 2 instances for the minimum, maximum, and default
* recurrence is set to Week and days is set to “Friday”, "Saturday"
{
"location": "West US",
"tags": {},
"properties": {
"name": "az204-scale-appsvcpln-Autoscale-136",
"enabled": true,
"targetResourceUri": ".../az204-scale-appsvcpln",
"profiles": [
{
"name": "Scale-out Weekends",
"capacity": {
"minimum": "2",
"maximum": "2",
"default": "2"
},
"rules": [],
"recurrence": {
"frequency": "Week",
"schedule": {
"timeZone": "Pacific Standard Time",
"days": [
MCT USE ONLY. STUDENT USE PROHIBITED 38  Module 1 Creating Azure App Service Web Apps  

"Friday",
"Saturday"
],
"hours": [
6
],
"minutes": [
0
]
}
}
},
],
"notifications": [],
"targetResourceLocation": "West US"
},
"id": "...",
"name": "az204-scale-appsvcpln-Autoscale-136",
"type": "Microsoft.Insights/autoscaleSettings"
}

Autoscale profiles
There are three types of Autoscale profiles:
●● Regular profile: The most common profile. If you don’t need to scale your resource based on the day
of the week, or on a particular day, you can use a regular profile. This profile can then be configured
with metric rules that dictate when to scale out and when to scale in. You should only have one
regular profile defined.
●● Fixed date profile: This profile is for special cases. For example, let’s say you have an important event
coming up on December 26, 2017 (PST). You want the minimum and maximum capacities of your
resource to be different on that day, but still scale on the same metrics. In this case, you should add a
fixed date profile to your setting’s list of profiles. The profile is configured to run only on the event’s
day. For any other day, Autoscale uses the regular profile.
●● Recurrence profile: This type of profile enables you to ensure that this profile is always used on a
particular day of the week. Recurrence profiles only have a start time. They run until the next recurrence
profile or fixed date profile is set to start. An Autoscale setting with only one recurrence profile
runs that profile, even if there is a regular profile defined in the same setting.
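As a sketch, a fixed date profile can be added to an existing autoscale setting with the CLI. The dates, counts, and names below are placeholders, and the example assumes the regular profile is named default so its metric rules can be copied with --copy-rules.
# Add a fixed date profile with a larger instance range for a special event
az monitor autoscale profile create \
--resource-group <groupname> \
--autoscale-name <autoscale-name> \
--name HolidayEvent \
--copy-rules default \
--min-count 3 --max-count 10 --count 3 \
--start 2019-12-26 --end 2019-12-27 \
--timezone "Pacific Standard Time"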

Autoscale evaluation
Given that Autoscale settings can have multiple profiles, and each profile can have multiple metric rules,
it is important to understand how an Autoscale setting is evaluated. Each time the Autoscale job runs, it
begins by choosing the profile that is applicable. Then Autoscale evaluates the minimum and maximum
values, and any metric rules in the profile, and decides if a scale action is necessary.

Which profile will Autoscale pick?


Autoscale uses the following sequence to pick the profile:
1. It first looks for any fixed date profile that is configured to run now. If there is, Autoscale runs it. If
there are multiple fixed date profiles that are supposed to run, Autoscale selects the first one.
2. If there are no fixed date profiles, Autoscale looks at recurrence profiles. If a recurrence profile is
found, it runs it.
3. If there are no fixed date or recurrence profiles, Autoscale runs the regular profile.

How does Autoscale evaluate multiple rules?


After Autoscale determines which profile to run, it evaluates all the scale-out rules in the profile (these are
rules with direction = “Increase”).
If one or more scale-out rules are triggered, Autoscale calculates the new capacity determined by the
scaleAction of each of those rules. Then it scales out to the maximum of those capacities, to ensure
service availability.
For example, let's say there is a virtual machine scale set with a current capacity of 10. There are two
scale-out rules: one that increases capacity by 10 percent, and one that increases capacity by 3 counts.
The first rule would result in a new capacity of 11, and the second rule would result in a capacity of 13. To
ensure service availability, Autoscale chooses the action that results in the maximum capacity, so the
second rule is chosen.
If no scale-out rules are triggered, Autoscale evaluates all the scale-in rules (rules with direction =
“Decrease”). Autoscale only takes a scale-in action if all of the scale-in rules are triggered.
Autoscale calculates the new capacity determined by the scaleAction of each of those rules. Then it
chooses the scale action that results in the maximum of those capacities to ensure service availability.

Best practices for Autoscale


Autoscale concepts
●● A resource can have only one autoscale setting
●● An autoscale setting can have one or more profiles and each profile can have one or more autoscale
rules.
●● An autoscale setting scales instances horizontally, which is out by increasing the number of instances
and in by decreasing the number of instances. An autoscale setting has a maximum, minimum, and
default value of instances.
●● An autoscale job always reads the associated metric to scale by, checking if it has crossed the
configured threshold for scale-out or scale-in.
●● All thresholds are calculated at an instance level. For example, “scale out by one instance when
average CPU > 80% when instance count is 2”, means scale-out when the average CPU across all
instances is greater than 80%.
●● All autoscale failures are logged to the Activity Log. You can then configure an activity log alert so that
you can be notified via email, SMS, or webhooks whenever there is an autoscale failure.
●● Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity
log alert so that you can be notified via email, SMS, or webhooks whenever there is a successful
autoscale action. You can also configure email or webhook notifications to get notified for successful
scale actions via the notifications tab on the autoscale setting.

Autoscale best practices


Use the following best practices as you use autoscale.

Ensure the maximum and minimum values are different and have an adequate margin between them
If you have a setting that has minimum=2, maximum=2 and the current instance count is 2, no scale
action can occur. Keep an adequate margin between the maximum and minimum instance counts, which
are inclusive. Autoscale always scales between these limits.

Manual scaling is reset by autoscale min and max


If you manually update the instance count to a value above or below the maximum, the autoscale engine
automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the
range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances
on its next run. Likewise, if you manually set the scale to eight instances, on the next run autoscale will
scale it back to six instances. Manual scaling is temporary unless you reset the autoscale rules as well.

Choose the appropriate statistic for your diagnostics metric


For diagnostics metrics, you can choose among Average, Minimum, Maximum and Total as a metric to
scale by. The most common statistic is Average.

Choose the thresholds carefully for all metric types


We recommend carefully choosing different thresholds for scale-out and scale-in based on practical
situations.
We do not recommend autoscale settings like the examples below with the same or very similar threshold
values for out and in conditions:
●● Increase instances by 1 count when Thread Count <= 600
●● Decrease instances by 1 count when Thread Count >= 600
Let's look at an example of what can lead to a behavior that may seem confusing. Consider the following
sequence.
1. Assume there are two instances to begin with and then the average number of threads per instance
grows to 625.
2. Autoscale scales out adding a third instance.
3. Next, assume that the average thread count across instance falls to 575.
4. Before scaling down, autoscale tries to estimate what the final state will be if it scaled in. For example,
575 x 3 (current instance count) = 1,725 / 2 (final number of instances when scaled down) = 862.5
threads. This means autoscale would have to immediately scale-out again even after it scaled in, if the
average thread count remains the same or even falls only a small amount. However, if it scaled up
again, the whole process would repeat, leading to an infinite loop.

5. To avoid this situation (termed “flapping”), autoscale does not scale down at all. Instead, it skips and
reevaluates the condition again the next time the service's job executes. This can confuse many
people because autoscale wouldn't appear to work when the average thread count was 575.
Estimation during a scale-in is intended to avoid “flapping” situations, where scale-in and scale-out
actions continually go back and forth. Keep this behavior in mind when you choose the same thresholds
for scale-out and in.
We recommend choosing an adequate margin between the scale-out and in thresholds. As an example,
consider the following better rule combination.
●● Increase instances by 1 count when CPU% >= 80
●● Decrease instances by 1 count when CPU% <= 60
In this case
1. Assume there are 2 instances to start with.
2. If the average CPU% across instances goes to 80, autoscale scales out adding a third instance.
3. Now assume that over time the CPU% falls to 60.
4. Autoscale's scale-in rule estimates the final state if it were to scale in. For example, 60 x 3 (current instance count) = 180 total CPU%, and 180 / 2 (final number of instances when scaled in) = 90% average CPU per instance. So autoscale does not scale in, because it would have to scale out again immediately. Instead, it skips scaling in.
5. The next time autoscale checks, the CPU has continued to fall to 50. It estimates again: 50 x 3 instances = 150, and 150 / 2 instances = 75% average CPU, which is below the scale-out threshold of 80, so it scales in successfully to 2 instances.
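If you manage autoscale from the command line, a rule combination with this kind of margin can be created with the Azure CLI. The following is a minimal sketch only: the resource group, App Service plan, and autoscale setting names are placeholders, and CpuPercentage is assumed to be the CPU metric for an App Service plan, so verify the metric name and condition syntax for your own resources.
# Create an autoscale setting for an App Service plan with a 2-5 instance range (hypothetical names)
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --name myAutoscaleSetting \
  --min-count 2 --max-count 5 --count 2

# Scale out by 1 instance when average CPU is at or above 80%
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage >= 80 avg 10m" \
  --scale out 1

# Scale in by 1 instance only when average CPU is at or below 60%, leaving a margin
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage <= 60 avg 10m" \
  --scale in 1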

Considerations for scaling when multiple rules are configured in a profile


There are cases where you may have to set multiple rules in a profile. The autoscale service uses the following rules when multiple rules are set:
On scale-out, autoscale runs if any rule is met. On scale-in, autoscale requires all rules to be met.
To illustrate, assume that you have the following four autoscale rules:
●● If CPU < 30 %, scale-in by 1
●● If Memory < 50%, scale-in by 1
●● If CPU > 75%, scale-out by 1
●● If Memory > 75%, scale-out by 1
Then the following occurs:
●● If CPU is 76% and Memory is 50%, we scale-out.
●● If CPU is 50% and Memory is 76% we scale-out.
On the other hand, if CPU is 25% and Memory is 51%, autoscale does not scale in. In order to scale in, both conditions must be met; for example, CPU at 29% and Memory at 49%.

Always select a safe default instance count


The default instance count is important because autoscale scales your service to that count when metrics are not available. Therefore, select a default instance count that's safe for your workloads.

Configure autoscale notifications


Autoscale will post to the Activity Log if any of the following conditions occur:
●● Autoscale issues a scale operation
●● Autoscale service successfully completes a scale action
●● Autoscale service fails to take a scale action.
●● Metrics are not available for autoscale service to make a scale decision.
●● Metrics are available (recovery) again to make a scale decision.
You can also use an Activity Log alert to monitor the health of the autoscale engine. In addition to using
activity log alerts, you can also configure email or webhook notifications to get notified for successful
scale actions via the notifications tab on the autoscale setting.

Azure App Service staging environments


Set up staging environments in Azure App Ser-
vice
When you deploy your web app, web app on Linux, mobile back end, or API app to Azure App Service,
you can use a separate deployment slot instead of the default production slot when you're running in the
Standard, Premium, or Isolated App Service plan tier. Deployment slots are live apps with their own
host names. App content and configuration elements can be swapped between two deployment slots,
including the production slot.
Deploying your application to a non-production slot has the following benefits:
●● You can validate app changes in a staging deployment slot before swapping it with the production
slot.
●● Deploying an app to a slot first and swapping it into production makes sure that all instances of the
slot are warmed up before being swapped into production. This eliminates downtime when you
deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap
operations. You can automate this entire workflow by configuring auto swap when pre-swap valida-
tion isn't needed.
●● After a swap, the slot with previously staged app now has the previous production app. If the changes
swapped into the production slot aren't as you expect, you can perform the same swap immediately
to get your “last known good site” back.
Each App Service plan tier supports a different number of deployment slots. There's no additional charge
for using deployment slots. To find out the number of slots your app's tier supports, see App Service
limits.
To scale your app to a different tier, make sure that the target tier supports the number of slots your app
already uses. For example, if your app has more than five slots, you can't scale it down to the Standard
tier, because the Standard tier supports only five deployment slots.
Add a slot
The app must be running in the Standard, Premium, or Isolated tier in order for you to enable multiple
deployment slots.

Add a new deployment slot


1. In the Azure portal, open your app's resource page.
2. In the left pane, select Deployment slots > Add Slot.
3. In the Add a slot dialog box, give the slot a name, and select whether to clone an app configuration
from another deployment slot. Select Add to continue.
You can clone a configuration from any existing slot. Settings that can be cloned include app settings,
connection strings, language framework versions, web sockets, HTTP version, and platform bitness.
4. After the slot is added, select Close to close the dialog box. The new slot is now shown on the
Deployment slots page. By default, Traffic % is set to 0 for the new slot, with all customer traffic
routed to the production slot.

5. Select the new deployment slot to open that slot's resource page. The staging slot has a management
page just like any other App Service app. You can change the slot's configuration. The name of the
slot is shown at the top of the page to remind you that you're viewing the deployment slot.
6. Select the app URL on the slot's resource page. The deployment slot has its own host name and is
also a live app.
The new deployment slot has no content, even if you clone the settings from a different slot. For exam-
ple, you can publish to this slot with Git. You can deploy to the slot from a different repository branch or
a different repository.
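You can also add a slot from the command line. The following is a minimal Azure CLI sketch; the web app name myUniqueWebApp and resource group myResourceGroup are placeholders, and --configuration-source clones the configuration from the production app.
# Create a staging slot and clone configuration from the production app (hypothetical names)
az webapp deployment slot create \
  --name myUniqueWebApp \
  --resource-group myResourceGroup \
  --slot staging \
  --configuration-source myUniqueWebApp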

What happens during a swap


When you swap two slots (usually from a staging slot into the production slot), App Service does the
following to ensure that the target slot doesn't experience downtime:
1. Apply the following settings from the target slot (for example, the production slot) to all instances of
the source slot:
●● Slot-specific app settings and connection strings, if applicable.
●● Continuous deployment settings, if enabled.
●● App Service authentication settings, if enabled.
Any of these cases causes all instances in the source slot to restart. During swap with preview, this
marks the end of the first phase. The swap operation is paused, and you can validate that the source
slot works correctly with the target slot's settings.
2. Wait for every instance in the source slot to complete its restart. If any instance fails to restart, the
swap operation reverts all changes to the source slot and stops the operation.
3. If local cache is enabled, trigger local cache initialization by making an HTTP request to the applica-
tion root ("/") on each instance of the source slot. Wait until each instance returns any HTTP response.
Local cache initialization causes another restart on each instance.
4. If auto swap is enabled with custom warm-up, trigger Application Initialization by making an HTTP
request to the application root ("/") on each instance of the source slot.
●● If applicationInitialization isn't specified, trigger an HTTP request to the application root
of the source slot on each instance.
●● If an instance returns any HTTP response, it's considered to be warmed up.
5. If all instances on the source slot are warmed up successfully, swap the two slots by switching the
routing rules for the two slots. After this step, the target slot (for example, the production slot) has the
app that's previously warmed up in the source slot.
6. Now that the source slot has the pre-swap app previously in the target slot, perform the same
operation by applying all settings and restarting the instances.
At any point of the swap operation, all work of initializing the swapped apps happens on the source slot.
The target slot remains online while the source slot is being prepared and warmed up, regardless of
whether the swap succeeds or fails. To swap a staging slot with the production slot, make sure that the
production slot is always the target slot. This way, the swap operation doesn't affect your production app.

Which settings are swapped?


When you clone configuration from another deployment slot, the cloned configuration is editable. Some
configuration elements follow the content across a swap (not slot specific), whereas other configuration
elements stay in the same slot after a swap (slot specific). The following lists show the settings that
change when you swap slots.
Settings that are swapped:
●● General settings, such as framework version, 32/64-bit, web sockets
●● App settings (can be configured to stick to a slot)
●● Connection strings (can be configured to stick to a slot)
●● Handler mappings
●● Public certificates
●● WebJobs content
●● Hybrid connections *
●● Virtual network integration *
●● Service endpoints *
●● Azure Content Delivery Network *
Features marked with an asterisk (*) are planned to be unswapped.
Settings that aren't swapped:
●● Publishing endpoints
●● Custom domain names
●● Non-public certificates and TLS/SSL settings
●● Scale settings
●● WebJobs schedulers
●● IP restrictions
●● Always On
●● Diagnostic log settings
●● Cross-origin resource sharing (CORS)
✔️ Note: Certain app settings that apply to unswapped settings are also not swapped. For example, since diagnostic log settings are not swapped, related app settings like WEBSITE_HTTPLOGGING_RETENTION_DAYS and DIAGNOSTICS_AZUREBLOBRETENTIONDAYS are also not swapped, even if they don't show up as slot settings.
To configure an app setting or connection string to stick to a specific slot (not swapped), go to the
Configuration page for that slot. Add or edit a setting, and then select deployment slot setting. Selecting
this check box tells App Service that the setting is not swappable.
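The same slot-sticky behavior can be scripted with the Azure CLI. In the sketch below the app, resource group, slot, and setting name are placeholders; --slot-settings marks values as sticky to the slot, whereas --settings would make them follow the app content across a swap.
# Add a slot-sticky app setting on the staging slot (hypothetical names and values)
az webapp config appsettings set \
  --name myUniqueWebApp \
  --resource-group myResourceGroup \
  --slot staging \
  --slot-settings ENVIRONMENT_LABEL=staging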

Swap deployment slots


You can swap deployment slots on your app's Deployment slots page and the Overview page. Before you
swap an app from a deployment slot into production, make sure that production is your target slot and
that all settings in the source slot are configured exactly as you want to have them in production.

To swap deployment slots


1. Go to your app's Deployment slots page and select Swap. The Swap dialog box shows settings in
the selected source and target slots that will be changed.
2. Select the desired Source and Target slots. Usually, the target is the production slot. Also, select the
Source Changes and Target Changes tabs and verify that the configuration changes are expected.
When you're finished, you can swap the slots immediately by selecting Swap.
To see how your target slot would run with the new settings before the swap actually happens, don't
select Swap, but follow the instructions in Swap with preview below.
3. When you're finished, close the dialog box by selecting Close.
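A swap can also be performed from the Azure CLI, which is useful in deployment scripts. The app and resource group names below are placeholders.
# Swap the staging slot into production (hypothetical names)
az webapp deployment slot swap \
  --name myUniqueWebApp \
  --resource-group myResourceGroup \
  --slot staging \
  --target-slot production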

Swap with preview (multi-phase swap)


Before you swap into production as the target slot, validate that the app runs with the swapped settings.
The source slot is also warmed up before the swap completion, which is desirable for mission-critical
applications.
When you perform a swap with preview, App Service performs the same swap operation but pauses after
the first step. You can then verify the result on the staging slot before completing the swap.
If you cancel the swap, App Service reapplies configuration elements to the source slot.
To swap with preview:
1. Follow the steps above in Swap deployment slots but select Perform swap with preview.

The dialog box shows you how the configuration in the source slot changes in phase 1, and how the
source and target slot change in phase 2.
2. When you're ready to start the swap, select Start Swap.

When phase 1 finishes, you're notified in the dialog box. Preview the swap in the source slot by going
to https://<app_name>-<source-slot-name>.azurewebsites.net.
3. When you're ready to complete the pending swap, select Complete Swap in Swap action and select
Complete Swap.
To cancel a pending swap, select Cancel Swap instead.
4. When you're finished, close the dialog box by selecting Close.
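The multi-phase swap can also be driven from the Azure CLI by using the --action parameter. The names below are placeholders; this is a sketch of the preview workflow, not the only way to run it.
# Phase 1: apply the production slot's settings to the staging slot, then pause (hypothetical names)
az webapp deployment slot swap \
  --name myUniqueWebApp --resource-group myResourceGroup \
  --slot staging --target-slot production --action preview

# After validating the staging slot, complete the pending swap
az webapp deployment slot swap \
  --name myUniqueWebApp --resource-group myResourceGroup \
  --slot staging --target-slot production --action swap

# Or cancel the pending swap and restore the staging slot's original settings
az webapp deployment slot swap \
  --name myUniqueWebApp --resource-group myResourceGroup \
  --slot staging --target-slot production --action reset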

Roll back a swap


If any errors occur in the target slot (for example, the production slot) after a slot swap, restore the slots
to their pre-swap states by swapping the same two slots immediately.

Monitor a swap
If the swap operation takes a long time to complete, you can get information on the swap operation in
the activity log.
On your app's resource page in the portal, in the left pane, select Activity log.
A swap operation appears in the log query as Swap Web App Slots. You can expand it and select one
of the suboperations or errors to see the details.

Configure auto swap


✔️ Note: Auto swap isn't currently supported in web apps on Linux.
Auto swap streamlines Azure DevOps scenarios where you want to deploy your app continuously with
zero cold starts and zero downtime for customers of the app. When auto swap is enabled from a slot into
production, every time you push your code changes to that slot, App Service automatically swaps the app
into production after it's warmed up in the source slot.
To configure auto swap:
1. Go to your app's resource page. Select Deployment slots > <desired source slot> > Configuration
> General settings.
2. For Auto swap enabled, select On. Then select the desired target slot for Auto swap deployment slot,
and select Save on the command bar.
3. Execute a code push to the source slot. Auto swap happens after a short time, and the update is
reflected at your target slot's URL.
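Auto swap can also be enabled from the command line. The following Azure CLI sketch uses placeholder names; the auto-swap command and its parameters may vary slightly between CLI versions, so verify them before relying on this in a pipeline.
# Enable auto swap from the staging slot into production (hypothetical names)
az webapp deployment slot auto-swap \
  --name myUniqueWebApp \
  --resource-group myResourceGroup \
  --slot staging \
  --auto-swap-slot production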

Specify custom warm-up


Some apps might require custom warm-up actions before the swap. The applicationInitializa-
tion configuration element in web.config lets you specify custom initialization actions. The swap
operation waits for this custom warm-up to finish before swapping with the target slot. Here's a sample
web.config fragment.
<system.webServer>
  <applicationInitialization>
    <add initializationPage="/" hostName="[app hostname]" />
    <add initializationPage="/Home/About" hostName="[app hostname]" />
  </applicationInitialization>
</system.webServer>

For more information on customizing the applicationInitialization element, see Most common
deployment slot swap failures and how to fix them9.
You can also customize the warm-up behavior with one or both of the following app settings:
●● WEBSITE_SWAP_WARMUP_PING_PATH: The path to ping to warm up your site. Add this app setting
by specifying a custom path that begins with a slash as the value. An example is /statuscheck. The
default value is /.
●● WEBSITE_SWAP_WARMUP_PING_STATUSES: Valid HTTP response codes for the warm-up operation.
Add this app setting with a comma-separated list of HTTP codes. An example is 200,202. If the
returned status code isn't in the list, the warmup and swap operations are stopped. By default, all
response codes are valid.
✔️ Note: <applicationInitialization> is part of each app start-up, whereas these two app settings apply only to slot swaps.
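Because both values are regular app settings, you can set them on the source slot with the Azure CLI. In this sketch the app and resource group names are placeholders, and /statuscheck is only an example path.
# Configure the warm-up path and accepted status codes on the staging slot (hypothetical names)
az webapp config appsettings set \
  --name myUniqueWebApp \
  --resource-group myResourceGroup \
  --slot staging \
  --settings WEBSITE_SWAP_WARMUP_PING_PATH=/statuscheck WEBSITE_SWAP_WARMUP_PING_STATUSES=200,202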

Routing traffic
By default, all client requests to the app's production URL (http://<app_name>.azurewebsites.
net) are routed to the production slot. You can route a portion of the traffic to another slot. This feature
is useful if you need user feedback for a new update, but you're not ready to release it to production.

Route production traffic automatically


To route production traffic automatically:
1. Go to your app's resource page and select Deployment slots.
2. In the Traffic % column of the slot you want to route to, specify a percentage (between 0 and 100) to
represent the amount of total traffic you want to route. Select Save.
After the setting is saved, the specified percentage of clients is randomly routed to the non-production
slot.
After a client is automatically routed to a specific slot, it's “pinned” to that slot for the life of that client
session. On the client browser, you can see which slot your session is pinned to by looking at the x-ms-routing-name cookie in your HTTP headers. A request that's routed to the "staging" slot has the cookie
x-ms-routing-name=staging. A request that's routed to the production slot has the cookie
x-ms-routing-name=self.

Route production traffic manually


In addition to automatic traffic routing, App Service can route requests to a specific slot. This is useful
when you want your users to be able to opt in to or opt out of your beta app. To route production traffic
manually, you use the x-ms-routing-name query parameter.
To let users opt out of your beta app, for example, you can put this link on your webpage:
<a href="<webappname>.azurewebsites.net/?x-ms-routing-name=self">Go back to production app</a>

9 https://ruslany.net/2017/11/most-common-deployment-slot-swap-failures-and-how-to-fix-them/

The string x-ms-routing-name=self specifies the production slot. After the client browser accesses
the link, it's redirected to the production slot. Every subsequent request has the x-ms-rout-
ing-name=self cookie that pins the session to the production slot.
To let users opt in to your beta app, set the same query parameter to the name of the non-production
slot. Here's an example:
<webappname>.azurewebsites.net/?x-ms-routing-name=staging

By default, new slots are given a routing rule of 0%, shown in grey. When you explicitly set this value to 0% (shown in black), your users can access the staging slot manually by using the x-ms-routing-name query parameter, but they won't be routed to the slot automatically because the routing percentage is set to 0. This is an advanced scenario where you can “hide” your staging slot from the public while allowing internal teams to test changes on the slot.
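Routing percentages can also be set from the Azure CLI. The sketch below sends 10 percent of production traffic to a staging slot; the app and resource group names are placeholders.
# Route 10% of production traffic to the staging slot (hypothetical names)
az webapp traffic-routing set \
  --name myUniqueWebApp \
  --resource-group myResourceGroup \
  --distribution staging=10

# Send all traffic back to the production slot
az webapp traffic-routing clear \
  --name myUniqueWebApp \
  --resource-group myResourceGroup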

Lab and review questions


Lab: Building a web application on Azure platform as a service offerings

Lab scenario
You're the owner of a startup organization and have been building an image gallery application for
people to share great images of food. To get your product to market as quickly as possible, you decided
to use Microsoft Azure App Service to host your web apps and APIs.

Objectives
After you complete this lab, you will be able to:
●● Create various apps by using App Service.
●● Configure application settings for an app.
●● Deploy apps by using Kudu, the Azure Command-Line Interface (CLI), and zip file deployment.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 1 review questions


Review Question 1
You have multiple apps running in a single App Service plan. True or False: Each app in the service plan can
have different scaling rules.
†† True
†† False

Review Question 2
Which of the following App Service plans is available only to function apps?
†† Shared compute
†† Dedicated compute
†† Isolated
†† Consumption
†† None of the above

Review Question 3
You can use Azure Traffic Manager to control how requests from web clients are distributed to apps in Azure App Service.
Which of the options listed below are valid routing methods for Azure Traffic Manager?
Select all that apply.
†† Priority
†† Weighted
†† Performance
†† Scale
†† Geographic

Review Question 4
Which of the following App Service tiers is not currently available to App Service on Linux?
†† Free
†† Basic
†† Shared
†† Standard
†† Premium

Review Question 5
Which of the following settings are not swapped when you swap an app?
Select all that apply.
†† Handler mappings
†† Publishing endpoints
†† General settings, such as framework version, 32/64-bit, web sockets
†† Always On
†† Custom domain names

Answers
Review Question 1
You have multiple apps running in a single App Service plan. True or False: Each app in the service plan
can have different scaling rules.
†† True
■■ False
Explanation
The App Service plan is the scale unit of the App Service apps. If the plan is configured to run five VM
instances, then all apps in the plan run on all five instances. If the plan is configured for autoscaling, then all
apps in the plan are scaled out together based on the autoscale settings.
Review Question 2
Which of the following App Service plans is available only to function apps?
†† Shared compute
†† Dedicated compute
†† Isolated
■■ Consumption
†† None of the above
Explanation
The consumption tier is only available to function apps. It scales the functions dynamically depending on
workload.
Review Question 3
You can use Azure Traffic Manager to control how requests from web clients are distributed to apps in Azure App
Service. Which of the options listed below are valid routing methods for Azure Traffic Manager?
Select all that apply.
■■ Priority
■■ Weighted
■■ Performance
†† Scale
■■ Geographic
Explanation
Priority, Weighted, Performance, and Geographic are all valid Azure Traffic Manager routing methods; Scale is not.

Review Question 4
Which of the following App Service tiers is not currently available to App Service on Linux?
†† Free
†† Basic
■■ Shared
†† Standard
†† Premium
Explanation
App Service on Linux is only supported with Free, Basic, Standard, and Premium app service plans and does
not have a Shared tier. You cannot create a Linux Web App in an App Service plan already hosting non-Linux Web Apps.
Review Question 5
Which of the following settings are not swapped when you swap an app?
Select all that apply.
†† Handler mappings
■■ Publishing endpoints
†† General settings, such as framework version, 32/64-bit, web sockets
■■ Always On
■■ Custom domain names
Explanation
Some configuration elements follow the content across a swap (not slot specific), whereas other configuration elements stay in the same slot after a swap (slot specific). Publishing endpoints, Always On, and custom domain names are slot specific, so they are not swapped.
Module 2 Implement Azure functions

Azure Functions overview


Introduction to Azure Functions
Azure Functions lets you develop serverless applications on Microsoft Azure. You can write just the code
you need for the problem at hand, without worrying about a whole application or the infrastructure to
run it.

What can I do with Functions?


Functions is a great solution for processing data, integrating systems, working with the internet-of-things
(IoT), and building simple APIs and microservices. Consider Functions for tasks like image or order
processing, file maintenance, or for any tasks that you want to run on a schedule. Functions provides
templates to get you started with key scenarios.
Azure Functions supports triggers, which are ways to start execution of your code, and bindings, which are
ways to simplify coding for input and output data.

Integrations
Azure Functions integrates with various Azure and 3rd-party services. These services can trigger your
function and start execution, or they can serve as input and output for your code. The following service
integrations are supported by Azure Functions:
●● Azure Cosmos DB
●● Azure Event Hubs
●● Azure Event Grid
●● Azure Notification Hubs
●● Azure Service Bus (queues and topics)
●● Azure Storage (blob, queues, and tables)
●● On-premises (using Service Bus)

●● Twilio (SMS messages)

Azure Functions scale and hosting concepts


When you create a function app in Azure, you must choose a hosting plan for your app. There are three
hosting plans available for Azure Functions: Consumption plan, Premium plan, and App Service plan.
The hosting plan you choose dictates the following behaviors:
●● How your function app is scaled.
●● The resources available to each function app instance.
●● Support for advanced features, such as VNET connectivity.
Both Consumption and Premium plans automatically add compute power when your code is running.
Your app is scaled out when needed to handle load, and scaled down when code stops running. For the
Consumption plan, you also don't have to pay for idle VMs or reserve capacity in advance.
Premium plan provides additional features, such as premium compute instances, the ability to keep
instances warm indefinitely, and VNet connectivity.
App Service plan allows you to take advantage of dedicated infrastructure, which you manage. Your
function app doesn't scale based on events, which means it never scales down to zero. (Requires that
Always on is enabled.)

Consumption plan
When you're using the Consumption plan, instances of the Azure Functions host are dynamically added
and removed based on the number of incoming events. This serverless plan scales automatically, and
you're charged for compute resources only when your functions are running. On a Consumption plan, a
function execution times out after a configurable period of time.
Billing is based on number of executions, execution time, and memory used. Billing is aggregated across
all functions within a function app.
The Consumption plan is the default hosting plan and offers the following benefits:
●● Pay only when your functions are running
●● Scale out automatically, even during periods of high load
Function apps in the same region can be assigned to the same Consumption plan. There's no downside
or impact to having multiple apps running in the same Consumption plan. Assigning multiple apps to the
same consumption plan has no impact on resilience, scalability, or reliability of each app.
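As a point of reference, the following Azure CLI sketch creates a function app on the Consumption plan. The resource group, storage account, app name, region, and runtime below are placeholders, and the storage account is assumed to already exist.
# Create a function app hosted on the Consumption plan (hypothetical names)
az functionapp create \
  --name myUniqueFunctionApp \
  --resource-group myResourceGroup \
  --storage-account mystorageacct \
  --consumption-plan-location westeurope \
  --runtime dotnet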

Premium plan
When you're using the Premium plan, instances of the Azure Functions host are added and removed
based on the number of incoming events just like the Consumption plan. Premium plan supports the
following features:
●● Perpetually warm instances to avoid any cold start
●● VNet connectivity
●● Unlimited execution duration
●● Premium instance sizes (one core, two core, and four core instances)

●● More predictable pricing


●● High-density app allocation for plans with multiple function apps
Instead of billing per execution and memory consumed, billing for the Premium plan is based on the
number of core seconds and memory used across needed and pre-warmed instances. At least one
instance must be warm at all times per plan.
When running JavaScript functions on a Premium plan, you should choose an instance that has fewer
vCPUs.

Dedicated (App Service) plan


Your function apps can also run on the same dedicated VMs as other App Service apps (Basic, Standard,
Premium, and Isolated SKUs). Consider an App Service plan in the following situations:
●● You have existing, underutilized VMs that are already running other App Service instances.
●● You want to provide a custom image on which to run your functions.
You pay the same for function apps in an App Service Plan as you would for other App Service resources,
like web apps. With an App Service plan, you can manually scale out by adding more VM instances. You
can also enable autoscale. When running JavaScript functions on an App Service plan, you should choose
a plan that has fewer vCPUs.

Always On
If you run on an App Service plan, you should enable the Always on setting so that your function app
runs correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity,
so only HTTP triggers will “wake up” your functions. Always on is available only on an App Service plan.
On a Consumption plan, the platform activates function apps automatically.
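If you host functions on an App Service plan, Always on is a site configuration setting that you can turn on in the portal or, as sketched below with placeholder names, from the Azure CLI.
# Enable Always on for a function app running on an App Service plan (hypothetical names)
az functionapp config set \
  --name myUniqueFunctionApp \
  --resource-group myResourceGroup \
  --always-on true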

Function app timeout duration


The timeout duration of a function app is defined by the functionTimeout property in the host.json
project file. The following table shows the default and maximum values in minutes for both plans and all runtime versions:

Plan            Runtime Version    Default      Maximum
Consumption     1.x                5            10
Consumption     2.x                5            10
Consumption     3.x (preview)      5            10
App Service     1.x                Unlimited    Unlimited
App Service     2.x                30           Unlimited
App Service     3.x (preview)      30           Unlimited
Note: Regardless of the function app timeout setting, 230 seconds is the maximum amount of time that
an HTTP triggered function can take to respond to a request. This is because of the default idle timeout
of Azure Load Balancer.

Storage account requirements


On any plan, a function app requires a general Azure Storage account, which supports Azure Blob, Queue,
Files, and Table storage. This is because Functions relies on Azure Storage for operations such as managing triggers and logging function executions, but some storage accounts do not support queues and
tables. These accounts, which include blob-only storage accounts (including premium storage) and
general-purpose storage accounts with zone-redundant storage replication, are filtered-out from your
existing Storage Account selections when you create a function app.
The same storage account used by your function app can also be used by your triggers and bindings to
store your application data. However, for storage-intensive operations, you should use a separate storage
account.
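A general-purpose (StorageV2) account satisfies the blob, queue, file, and table requirement. The following Azure CLI sketch uses placeholder names, a locally redundant SKU, and an example region.
# Create a general-purpose v2 storage account that supports blobs, queues, files, and tables (hypothetical names)
az storage account create \
  --name mystorageacct \
  --resource-group myResourceGroup \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2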

How the Consumption plan works


In the consumption and premium plans, the Azure Functions infrastructure scales CPU and memory
resources by adding additional instances of the Functions host based on the number of events that its
functions are triggered on.
Each instance of the Functions host in the consumption plan is limited to 1.5 GB of memory and one CPU.
An instance of the host is the entire function app, meaning all functions within a function app share
resources within an instance and scale at the same time.
Function apps that share the same consumption plan are scaled independently. In the premium plan,
your plan size will determine the available memory and CPU for all apps in that plan on that instance.
Function code files are stored on Azure Files shares on the function's main storage account. When you
delete the main storage account of the function app, the function code files are deleted and cannot be
recovered.

Runtime scaling
Azure Functions uses a component called the scale controller to monitor the rate of events and deter-
mine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For
example, when you're using an Azure Queue storage trigger, it scales based on the queue length and the
age of the oldest queue message.
The unit of scale for Azure Functions is the function app. When the function app is scaled out, additional
resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute
demand is reduced, the scale controller removes function host instances. The number of instances is
eventually scaled down to zero when no functions are running within a function app.

Understanding scaling behaviors


Scaling can vary based on a number of factors, and apps scale differently based on the trigger and language selected.
There are a few intricacies of scaling behaviors to be aware of:
●● A single function app only scales up to a maximum of 200 instances. A single instance may process
more than one message or request at a time though, so there isn't a set limit on number of concur-
rent executions.
●● For HTTP triggers, new instances will only be allocated at most once every 1 second.
●● For non-HTTP triggers, new instances will only be allocated at most once every 30 seconds.

Azure Functions triggers and bindings concepts


This section is a conceptual overview of triggers and bindings in Azure Functions.

Overview
Triggers are what cause a function to run. A trigger defines how a function is invoked and a function must
have exactly one trigger. Triggers have associated data, which is often provided as the payload of the
function.
Binding to a function is a way of declaratively connecting another resource to the function; bindings may
be connected as input bindings, output bindings, or both. Data from bindings is provided to the function
as parameters.
You can mix and match different bindings to suit your needs. Bindings are optional and a function might
have one or multiple input and/or output bindings.
Triggers and bindings let you avoid hardcoding access to other services. Your function receives data (for
example, the content of a queue message) in function parameters. You send data (for example, to create
a queue message) by using the return value of the function.

Trigger and binding definitions


Triggers and bindings are defined differently depending on the development approach.

Platform                               Triggers and bindings are configured by…
C# class library                       decorating methods and parameters with C# attributes
All others (including Azure portal)    updating function.json (schema)
The portal provides a UI for this configuration, but you can edit the file directly by opening the Advanced
editor available via the Integrate tab of your function.
In .NET, the parameter type defines the data type for input data. For instance, use string to bind to the
text of a queue trigger, a byte array to read as binary and a custom type to de-serialize to an object.
For languages that are dynamically typed such as JavaScript, use the dataType property in the function.json file. For example, to read the content of an HTTP request in binary format, set dataType to binary:
{
    "dataType": "binary",
    "type": "httpTrigger",
    "name": "req",
    "direction": "in"
}

Other options for dataType are stream and string.

Binding direction
All triggers and bindings have a direction property in the function.json file:
●● For triggers, the direction is always in
●● Input and output bindings use in and out
●● Some bindings support a special direction inout. If you use inout, only the Advanced editor is
available via the Integrate tab in the portal.
When you use attributes in a class library to configure triggers and bindings, the direction is provided in
an attribute constructor or inferred from the parameter type.

Azure Functions trigger and binding example


Suppose you want to write a new row to Azure Table storage whenever a new message appears in Azure
Queue storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure
Table storage output binding.
Here's a function.json file for this scenario.
{
    "bindings": [
        {
            "type": "queueTrigger",
            "direction": "in",
            "name": "order",
            "queueName": "myqueue-items",
            "connection": "MY_STORAGE_ACCT_APP_SETTING"
        },
        {
            "type": "table",
            "direction": "out",
            "name": "$return",
            "tableName": "outTable",
            "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
        }
    ]
}

The first element in the bindings array is the Queue storage trigger. The type and direction
properties identify the trigger. The name property identifies the function parameter that receives the
queue message content. The name of the queue to monitor is in queueName, and the connection string
is in the app setting identified by connection.
The second element in the bindings array is the Azure Table Storage output binding. The type and
direction properties identify the binding. The name property specifies how the function provides the
new table row, in this case by using the function return value. The name of the table is in tableName,
and the connection string is in the app setting identified by connection.
To view and edit the contents of function.json in the Azure portal, click the Advanced editor option on
the Integrate tab of your function.

C# script example
Here's C# script code that works with this trigger and binding. Notice that the name of the parameter
that provides the queue message content is order; this name is required because the name property
value in function.json is order.
#r "Newtonsoft.Json"

using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

// From an incoming queue message that is a JSON object, add fields and write to Table storage
// The method return value creates a new row in Table Storage
public static Person Run(JObject order, ILogger log)
{
return new Person() {
PartitionKey = "Orders",
RowKey = Guid.NewGuid().ToString(),
Name = order["Name"].ToString(),
MobileNumber = order["MobileNumber"].ToString() };
}

public class Person


{
public string PartitionKey { get; set; }
public string RowKey { get; set; }
public string Name { get; set; }
MCT USE ONLY. STUDENT USE PROHIBITED 62  Module 2 Implement Azure functions  

public string MobileNumber { get; set; }


}

JavaScript example
The same function.json file can be used with a JavaScript function:
// From an incoming queue message that is a JSON object, add fields and write to Table Storage
// The second parameter to context.done is used as the value for the new row
module.exports = function (context, order) {
    order.PartitionKey = "Orders";
    order.RowKey = generateRandomId();

    context.done(null, order);
};

function generateRandomId() {
    return Math.random().toString(36).substring(2, 15) +
        Math.random().toString(36).substring(2, 15);
}

Class library example


In a class library, the same trigger and binding information — queue and table names, storage accounts,
function parameters for input and output — is provided by attributes instead of a function.json file. Here's
an example:
public static class QueueTriggerTableOutput
{
    [FunctionName("QueueTriggerTableOutput")]
    [return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]
    public static Person Run(
        [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")]JObject order,
        ILogger log)
    {
        return new Person() {
            PartitionKey = "Orders",
            RowKey = Guid.NewGuid().ToString(),
            Name = order["Name"].ToString(),
            MobileNumber = order["MobileNumber"].ToString() };
    }
}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
    public string MobileNumber { get; set; }
}

Further reading
For more detailed examples of triggers and bindings please visit:
●● Azure Blob storage bindings for Azure Functions

●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob
●● Azure Cosmos DB bindings for Azure Functions 2.x

●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2
●● Timer trigger for Azure Functions

●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer
●● Azure Functions HTTP triggers and bindings

●● https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook

Developing Azure Functions


Azure Functions development overview
In Azure Functions, specific functions share a few core technical concepts and components, regardless of
the language or binding you use.

Function code
A function contains two important pieces - your code, which can be written in a variety of languages, and
some config, the function.json file. For compiled languages, this config file is generated automatically
from annotations in your code. For scripting languages, you must provide the config file yourself.
The function.json file defines the function's trigger, bindings, and other configuration settings. Every func-
tion has one and only one trigger. The runtime uses this config file to determine the events to monitor
and how to pass data into and return data from a function execution. The following is an example
function.json file.
{
    "disabled": false,
    "bindings": [
        // ... bindings here
        {
            "type": "bindingType",
            "direction": "in",
            "name": "myParamName",
            // ... more depending on binding
        }
    ]
}

The bindings property is where you configure both triggers and bindings. Each binding shares a few
common settings and some settings which are specific to a particular type of binding. Every binding
requires the following settings:

●● type (string): Binding type. For example, queueTrigger.
●● direction ('in', 'out'): Indicates whether the binding is for receiving data into the function or sending data from the function.
●● name (string): The name that is used for the bound data in the function. For C#, this is an argument name; for JavaScript, it's the key in a key/value list.

Function app
A function app provides an execution context in Azure in which your functions run. As such, it is the unit
of deployment and management for your functions. A function app is comprised of one or more individ-
ual functions that are managed, deployed, and scaled together. All of the functions in a function app
share the same pricing plan, deployment method, and runtime version. Think of a function app as a way
to organize and collectively manage your functions.

Folder structure
The code for all the functions in a specific function app is located in a root project folder that contains a
host configuration file and one or more subfolders. Each subfolder contains the code for a separate
function. The folder structure is shown in the following figure.

The host.json file contains runtime-specific configurations and is in the root folder of the function app. A
bin folder contains packages and other library files that the function app requires.
The Functions editor built into the Azure portal lets you update your code and your function.json file
directly inline. This is recommended only for small changes or proofs of concept - best practice is to use a
local development tool like VS Code.

Local development environments


Functions makes it easy to use your favorite code editor and development tools to create and test
functions on your local computer. Your local functions can connect to live Azure services, and you can
debug them on your local computer using the full Functions runtime.
The way in which you develop functions on your local computer depends on your language and tooling
preferences. The following environments support local development:

●● Visual Studio Code - Languages: C# (class library), C# script (.csx), JavaScript, PowerShell, Python. The Azure Functions extension for VS Code adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, MacOS, and Windows, when using version 2.x of the Core Tools.
●● Command prompt or terminal - Languages: C# (class library), C# script (.csx), JavaScript, PowerShell, Python. Azure Functions Core Tools provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, MacOS, and Windows. All environments rely on Core Tools for the local Functions runtime.
●● Visual Studio 2019 - Languages: C# (class library). The Azure Functions tools are included in the Azure development workload of Visual Studio 2019 and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing.
●● Maven (various) - Languages: Java. Integrates with Core Tools to enable development of Java functions. Version 2.x supports development on Linux, MacOS, and Windows. To learn more, see Create your first function with Java and Maven. Also supports development using Eclipse and IntelliJ IDEA.
❗️ Important: Do not mix local development with portal development in the same function app. When
you create and publish functions from a local project, you should not try to maintain or modify project
code in the portal.
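Whichever editor you use, the local workflow is built on Azure Functions Core Tools. A minimal sketch of that workflow is shown below; the project and function names are placeholders, and the exact template name can vary by Core Tools version and worker runtime.
# Create a new Functions project (you are prompted for a worker runtime, for example dotnet or node)
func init MyFunctionProj
cd MyFunctionProj

# Add a function from a template, for example an HTTP-triggered function (hypothetical name)
func new --name HttpExample --template "HTTP trigger"

# Run the Functions host locally for testing and debugging
func start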

Demo: Create an HTTP trigger function by using the Azure portal


In this demo, you'll learn how to use Functions to create a “hello world” function in the Azure portal. This
demo has two main steps:
1. Create a Function app to host the function
2. Create and test the HTTP trigger function
To begin, sign in to the Azure portal at https://portal.azure.com with your account.

Create a function app


You must have a function app to host the execution of your functions. A function app lets you group
functions as a logic unit for easier management, deployment, and sharing of resources.
1. From the Azure portal menu, select Create a resource.
2. In the New page, select Compute > Function App.
3. Use the function app settings as specified below.
●● Subscription (Your subscription): The subscription under which this new function app is created.
●● Resource Group (myResourceGroup): Name for the new resource group in which to create your function app.
●● Function App name (Globally unique name): Name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
●● Publish (Code): Option to publish code files or a Docker container.
●● Runtime stack (Preferred language): Choose a runtime that supports your favorite function programming language. Choose .NET for C# and F# functions.
●● Region (Preferred region): Choose a region near you or near other services your functions access.
4. Select the Next : Hosting > button and enter the following settings for hosting.

●● Storage account (Globally unique name): Create a storage account used by your function app. You can accept the account name generated for you, or create one with a different name.
●● Operating system (Preferred operating system): An operating system is pre-selected for you based on your runtime stack selection, but you can change the setting if necessary.
●● Plan (Consumption plan): Hosting plan that defines how resources are allocated to your function app. In the default Consumption plan, resources are added dynamically as required by your functions.
5. Select Review + Create to review the app configuration selections.

6. Select Create to provision and deploy the function app. When the deployment is complete select Go
to resource to view your new function app.
Next, you'll create a function in the new function app.

Create and test the HTTP triggered function


1. Expand your new function app, then select the + button next to Functions.

2. Select the In-portal development environment, and select Continue.


3. Choose WebHook + API and then select Create.
A function is created using a language-specific template for an HTTP triggered function.

Test the function


1. In your new function, click </> Get function URL at the top right.

2. In the dialog box that appears select default (Function key), and then click Copy.
3. Paste the function URL into your browser's address bar. Add the query string value &name=<your-
name> to the end of this URL and press the Enter key on your keyboard to execute the request. You
should see the response returned by the function displayed in the browser.
4. When your function runs, trace information is written to the logs. To see the trace output from the
previous execution, return to your function in the portal and click the arrow at the bottom of the
screen to expand the Logs.

Clean up resources
You can clean up the resources created in this demo simply by deleting the resource group that was
created early in the demo.

Implement Durable Functions


Durable Functions overview
Durable Functions is an extension of Azure Functions that lets you write stateful functions in a serverless
compute environment. The extension lets you define stateful workflows by writing orchestrator functions
and stateful entities by writing entity functions using the Azure Functions programming model. Behind
the scenes, the extension manages state, checkpoints, and restarts for you, allowing you to focus on your
business logic.

Supported languages
Durable Functions currently supports the following languages:
●● C#: both precompiled class libraries and C# script.
●● JavaScript: supported only for version 2.x of the Azure Functions runtime. Requires version 1.7.0 of
the Durable Functions extension, or a later version.
●● F#: precompiled class libraries and F# script. F# script is only supported for version 1.x of the Azure
Functions runtime.

Application patterns
The primary use case for Durable Functions is simplifying complex, stateful coordination requirements in
serverless applications. The following sections describe typical application patterns that can benefit from
Durable Functions:
●● Function chaining
●● Fan-out/fan-in
●● Async HTTP APIs

Function chaining
In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the
output of one function is applied to the input of another function.

In the example below, the values F1, F2, F3, and F4 are the names of other functions in the function app.
You can implement control flow by using normal imperative coding constructs. Code executes from the
top down. The code can involve existing language control flow semantics, like conditionals and loops.
You can include error handling logic in try/catch/finally blocks.
// Functions 2.0 only
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context) {
    const x = yield context.df.callActivity("F1");
    const y = yield context.df.callActivity("F2", x);
    const z = yield context.df.callActivity("F3", y);
    return yield context.df.callActivity("F4", z);
});

Fan out/fan in
In the fan out/fan in pattern, you execute multiple functions in parallel and then wait for all functions to
finish. Often, some aggregation work is done on the results that are returned from the functions.

With normal functions, you can fan out by having the function send multiple messages to a queue. To fan
in you write code to track when the queue-triggered functions end, and then store function outputs.
In the example below, the fan-out work is distributed to multiple instances of the F2 function. The work is
tracked by using a dynamic list of tasks. The .NET Task.WhenAll API or JavaScript context.df.Task.all API is called to wait for all the called functions to finish. Then, the F2 function outputs are aggregat-
ed from the dynamic task list and passed to the F3 function.
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context) {
    const parallelTasks = [];

    // Get a list of N work items to process in parallel.
    const workBatch = yield context.df.callActivity("F1");
    for (let i = 0; i < workBatch.length; i++) {
        parallelTasks.push(context.df.callActivity("F2", workBatch[i]));
    }

    yield context.df.Task.all(parallelTasks);

    // Aggregate all N outputs and send the result to F3.
    const sum = parallelTasks.reduce((prev, curr) => prev + curr, 0);
    yield context.df.callActivity("F3", sum);
});

The automatic checkpointing that happens at the await or yield call on Task.WhenAll or context.df.Task.all ensures that a potential midway crash or reboot doesn't require restarting an already
completed task.

Async HTTP APIs


The async HTTP API pattern addresses the problem of coordinating the state of long-running operations
with external clients. A common way to implement this pattern is by having an HTTP endpoint trigger the
long-running action. Then, redirect the client to a status endpoint that the client polls to learn when the
operation is finished.

Durable Functions provides built-in support for this pattern, simplifying or even removing the code you
need to write to interact with long-running function executions. After an instance starts, the extension
exposes webhook HTTP APIs that query the orchestrator function status.
The following example shows REST commands that start an orchestrator and query its status. For clarity,
some protocol details are omitted from the example.
> curl -X POST https://myfunc.azurewebsites.net/orchestrators/DoWork -H "Content-Length: 0" -i
HTTP/1.1 202 Accepted
Content-Type: application/json
Location: https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec

{"id":"b79baf67f717453ca9e86c5da21e03ec", ...}

> curl https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec -i
HTTP/1.1 202 Accepted
Content-Type: application/json
Location: https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec

{"runtimeStatus":"Running","lastUpdatedTime":"2019-03-16T21:20:47Z", ...}

> curl https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/b79baf67f717453ca9e86c5da21e03ec -i
HTTP/1.1 200 OK
Content-Length: 175
Content-Type: application/json

{"runtimeStatus":"Completed","lastUpdatedTime":"2019-03-16T21:20:57Z", ...}

The Durable Functions extension exposes built-in HTTP APIs that manage long-running orchestrations.
You can alternatively implement this pattern yourself by using your own function triggers (such as HTTP,
a queue, or Azure Event Hubs) and the orchestration client binding.

Additional resources
●● To learn about the differences between Durable Functions 1.x and 2.x visit:

●● https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-ver-
sions

Durable Orchestrations
This lesson gives you an overview of orchestrator functions and how they can help you solve various app
development challenges.

Orchestration identity
Each instance of an orchestration has an instance identifier (also known as an instance ID). By default,
each instance ID is an autogenerated GUID. However, instance IDs can also be any user-generated string
value. Each orchestration instance ID must be unique within a task hub.
✔️Note: It is generally recommended to use autogenerated instance IDs whenever possible. User-gener-
ated instance IDs are intended for scenarios where there is a one-to-one mapping between an orchestra-
tion instance and some external application-specific entity, like a purchase order or a document.
An orchestration's instance ID is a required parameter for most instance management operations. They
are also important for diagnostics, such as searching through orchestration tracking data in Application
Insights for troubleshooting or analytics purposes. For this reason, it is recommended to save generated
instance IDs to some external location (for example, a database or in application logs) where they can be
easily referenced later.

Reliability
Orchestrator functions reliably maintain their execution state by using the event sourcing design pattern.
Instead of directly storing the current state of an orchestration, the Durable Task Framework uses an
append-only store to record the full series of actions the function orchestration takes.
Durable Functions uses event sourcing transparently. Behind the scenes, the await (C#) or yield
(JavaScript) operator in an orchestrator function yields control of the orchestrator thread back to the
Durable Task Framework dispatcher. The dispatcher then commits any new actions that the orchestrator
function scheduled (such as calling one or more child functions or scheduling a durable timer) to storage.
The transparent commit action appends to the execution history of the orchestration instance. The
history is stored in a storage table. The commit action then adds messages to a queue to schedule the
actual work. At this point, the orchestrator function can be unloaded from memory.
When an orchestration function is given more work to do, the orchestrator wakes up and re-executes the
entire function from the start to rebuild the local state. During the replay, if the code tries to call a
function (or do any other async work), the Durable Task Framework consults the execution history of the
current orchestration. If it finds that the activity function has already executed and yielded a result, it
replays that function's result and the orchestrator code continues to run. Replay continues until the
function code is finished or until it has scheduled new async work.

Features and patterns


The following list describes the features and patterns of orchestrator functions.
●● Sub-orchestrations: Orchestrator functions can call activity functions, but also other orchestrator functions. For example, you can build a larger orchestration out of a library of orchestrator functions. Or, you can run multiple instances of an orchestrator function in parallel.
●● Durable timers: Orchestrations can schedule durable timers to implement delays or to set up timeout handling on async actions. Use durable timers in orchestrator functions instead of Thread.Sleep and Task.Delay (C#) or setTimeout() and setInterval() (JavaScript).
●● External events: Orchestrator functions can wait for external events to update an orchestration instance. This Durable Functions feature often is useful for handling a human interaction or other external callbacks.
●● Error handling: Orchestrator functions can use the error-handling features of the programming language. Existing patterns like try/catch are supported in orchestration code.
●● Critical sections: Orchestration instances are single-threaded, so it isn't necessary to worry about race conditions within an orchestration. However, race conditions are possible when orchestrations interact with external systems. To mitigate race conditions when interacting with external systems, orchestrator functions can define critical sections using a LockAsync method in .NET.
●● Calling HTTP endpoints: Orchestrator functions aren't permitted to do I/O. The typical workaround for this limitation is to wrap any code that needs to do I/O in an activity function. Orchestrations that interact with external systems frequently use activity functions to make HTTP calls and return the result to the orchestration.
●● Passing multiple parameters: It isn't possible to pass multiple parameters to an activity function directly. The recommendation is to pass in an array of objects or to use ValueTuples objects in .NET.
Timers in Durable Functions


Durable Functions provides durable timers for use in orchestrator functions to implement delays or to set
up timeouts on async actions. Durable timers should be used in orchestrator functions instead of
Thread.Sleep and Task.Delay (C#), or setTimeout() and setInterval() (JavaScript).
You create a durable timer by calling the CreateTimer (.NET) method or the createTimer (JavaScript)
method of the orchestration trigger binding. The method returns a task that completes on a specified
date and time.

Timer limitations
When you create a timer that expires at 4:30 pm, the underlying Durable Task Framework enqueues a
message that becomes visible only at 4:30 pm. When running in the Azure Functions Consumption plan,
the newly visible timer message will ensure that the function app gets activated on an appropriate VM.
✔️ Note: Durable timers are currently limited to 7 days. If longer delays are needed, they can be
simulated using the timer APIs in a while loop, as shown in the sketch that follows the next note.

Always use CurrentUtcDateTime instead of DateTime.UtcNow in .NET or currentUtcDateTime
instead of Date.now or Date.UTC in JavaScript when computing the fire time for durable timers.
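As a rough illustration of the while-loop workaround for the 7-day limit, the following C# sketch (Durable Functions 2.x interface names assumed; the 30-day delay is an arbitrary example) chains timers of at most six days each, using CurrentUtcDateTime to compute each fire time.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class LongDelayOrchestration
{
    [FunctionName("LongDelayOrchestration")]
    public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Simulate a 30-day delay by chaining timers that each stay under the 7-day limit.
        DateTime endTime = context.CurrentUtcDateTime.AddDays(30);
        while (context.CurrentUtcDateTime < endTime)
        {
            TimeSpan remaining = endTime - context.CurrentUtcDateTime;
            TimeSpan step = remaining > TimeSpan.FromDays(6) ? TimeSpan.FromDays(6) : remaining;

            // CurrentUtcDateTime (not DateTime.UtcNow) keeps replay deterministic.
            await context.CreateTimer(context.CurrentUtcDateTime.Add(step), CancellationToken.None);
        }
        // Continue with whatever work had to wait for the long delay.
    }
}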

Usage for delay


The following example illustrates how to use durable timers for delaying execution. The example is
issuing a billing notification every day for 10 days.
const df = require("durable-functions");
const moment = require("moment");

module.exports = df.orchestrator(function*(context) {
for (let i = 0; i < 10; i++) {
const dayOfMonth = context.df.currentUtcDateTime.getDate();
const deadline = moment.utc(context.df.currentUtcDateTime).add(1, 'd');
yield context.df.createTimer(deadline.toDate());
yield context.df.callActivity("SendBillingEvent");
}
});

Usage for timeout


This example illustrates how to use durable timers to implement timeouts.
const df = require("durable-functions");
const moment = require("moment");

module.exports = df.orchestrator(function*(context) {
const deadline = moment.utc(context.df.currentUtcDateTime).add(30, "s");

const activityTask = context.df.callActivity("GetQuote");


const timeoutTask = context.df.createTimer(deadline.toDate());
const winner = yield context.df.Task.any([activityTask, timeoutTask]);


if (winner === activityTask) {
// success case
timeoutTask.cancel();
return true;
}
else
{
// timeout case
return false;
}
});

⚠️ Warning: Use a CancellationTokenSource to cancel a durable timer (.NET) or call cancel() on
the returned TimerTask (JavaScript) if your code will not wait for it to complete. The Durable Task
Framework will not change an orchestration's status to “completed” until all outstanding tasks are
completed or canceled.
This cancellation mechanism doesn't terminate in-progress activity function or sub-orchestration execu-
tions. It simply allows the orchestrator function to ignore the result and move on.

Handling external events in Durable Functions


Orchestrator functions have the ability to wait and listen for external events. This feature of Durable
Functions is often useful for handling human interaction or other external triggers.

Wait for events


The WaitForExternalEvent (.NET) and waitForExternalEvent (JavaScript) methods of the
orchestration trigger binding allows an orchestrator function to asynchronously wait and listen for an
external event. The listening orchestrator function declares the name of the event and the shape of the
data it expects to receive.
The following example listens for a specific single event and takes action when it's received.
const df = require("durable-functions");

module.exports = df.orchestrator(function*(context) {
const approved = yield context.df.waitForExternalEvent("Approval");
if (approved) {
// approval granted - do the approved action
} else {
// approval denied - send a notification
}
});
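A roughly equivalent .NET sketch, assuming the Durable Functions 2.x interface names (IDurableOrchestrationContext) and an orchestrator name chosen for illustration, looks like this:
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ApprovalOrchestration
{
    [FunctionName("ApprovalOrchestration")]
    public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Wait for an external "Approval" event that carries a boolean payload.
        bool approved = await context.WaitForExternalEvent<bool>("Approval");
        if (approved)
        {
            // approval granted - do the approved action
        }
        else
        {
            // approval denied - send a notification
        }
    }
}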

Send events
The RaiseEventAsync (.NET) or raiseEvent (JavaScript) method of the orchestration client binding
sends the events that WaitForExternalEvent (.NET) or waitForExternalEvent (JavaScript) waits
for. The RaiseEventAsync method takes eventName and eventData as parameters. The event data
must be JSON-serializable.
Below is an example queue-triggered function that sends an “Approval” event to an orchestrator function
instance. The orchestration instance ID comes from the body of the queue message.
const df = require("durable-functions");

module.exports = async function(context, instanceId) {


const client = df.getClient(context);
await client.raiseEvent(instanceId, "Approval", true);
};
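A roughly equivalent .NET sketch follows, assuming the Durable Functions 2.x binding names ([DurableClient], IDurableOrchestrationClient) and a hypothetical queue name; as in the JavaScript example, the queue message body carries the target instance ID.
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class RaiseApprovalEvent
{
    [FunctionName("RaiseApprovalEvent")]
    public static async Task Run(
        [QueueTrigger("approval-queue")] string instanceId,
        [DurableClient] IDurableOrchestrationClient client)
    {
        // The queue message body is the orchestration instance ID.
        // Send the "Approval" event with a boolean payload to that instance.
        await client.RaiseEventAsync(instanceId, "Approval", true);
    }
}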

Internally, RaiseEventAsync (.NET) or raiseEvent (JavaScript) enqueues a message that gets picked
up by the waiting orchestrator function. If the instance is not waiting on the specified event name, the
event message is added to an in-memory queue. If the orchestration instance later begins listening for
that event name, it will check the queue for event messages.
Lab and review questions


Lab: Implement task processing logic by using
Azure Functions

Lab scenario
Your company has built a desktop software tool that parses a local JavaScript Object Notation (JSON) file
for its configuration settings. During its latest meeting, your team decided to reduce the number of files
that are distributed with your application by serving your default configuration settings from a URL
instead of from a local file. As the new developer on the team, you've been tasked with evaluating
Microsoft Azure Functions as a solution to this problem.

Objectives
After you complete this lab, you will be able to:
●● Create a Functions app.
●● Create various functions by using built-in triggers.
●● Configure function app triggers and input integrations.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 2 review questions


Review Question 1
Which of the following plans for Functions automatically add compute power, if needed, when your code is
running?
(Select all that apply.)
†† Consumption Plan
†† App Service Plan
†† Premium Plan
†† Virtual Plan
Review Question 2
Is the following statement True or False?
Only Functions running in the consumption plan require a general Azure Storage account.
†† True
†† False

Review Question 3
You created a Function in the Azure Portal. Which of the following statements regarding the direction
property of the triggers, or bindings, are valid?
(Select all that apply.)
†† For triggers, the direction is always "in"
†† For triggers, the direction can be "in" or "out"
†† Input and output bindings can use "in" and "out"
†† Some bindings can use "inout"

Review Question 4
Azure Functions uses a component called the scale controller to monitor the rate of events and determine
whether to scale out or scale in.
Is the following statement True or False?
When you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the
newest queue message.
†† True
†† False

Review Question 5
Which of the below are valid application patterns that can benefit from durable functions?
(Select all that apply.)
†† Function chaining
†† Fan-out/fan-in
†† Chained lightning
†† Async HTTP APIs
Answers
Review Question 1
Which of the following plans for Functions automatically add compute power, if needed, when your code
is running?
(Select all that apply.)
■■ Consumption Plan
†† App Service Plan
■■ Premium Plan
†† Virtual Plan
Explanation
Both Consumption and Premium plans automatically add compute power when your code is running. Your
app is scaled out when needed to handle load, and scaled down when code stops running.
Review Question 2
Is the following statement True or False?
Only Functions running in the consumption plan require a general Azure Storage account.
†† True
■■ False
Explanation
On any plan, a function app requires a general Azure Storage account, which supports Azure Blob, Queue,
Files, and Table storage.
Review Question 3
You created a Function in the Azure Portal. Which of the following statements regarding the direction
property of the triggers, or bindings, are valid?
(Select all that apply.)
■■ For triggers, the direction is always "in"
†† For triggers, the direction can be "in" or "out"
■■ Input and output bindings can use "in" and "out"
■■ Some bindings can use "inout"
Explanation
All triggers and bindings have a direction property in the function.json file:
Review Question 4
Azure Functions uses a component called the scale controller to monitor the rate of events and deter-
mine whether to scale out or scale in.
Is the following statement True or False?
When you're using an Azure Queue storage trigger, it scales based on the queue length and the age of
the newest queue message.
†† True
■■ False
Explanation
When using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest
queue message.
Review Question 5
Which of the below are valid application patterns that can benefit from durable functions?
(Select all that apply.)
■■ Function chaining
■■ Fan-out/fan-in
†† Chained lightning
■■ Async HTTP APIs
Explanation
The following are application patterns that can benefit from Durable Functions:
Module 3 Develop solutions that use blob storage

Azure Blob storage core concepts


Azure Blob storage overview
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for
storing massive amounts of unstructured data. Unstructured data is data that does not adhere to a
particular data model or definition, such as text or binary data.
Blob storage is designed for:
●● Serving images or documents directly to a browser.
●● Storing files for distributed access.
●● Streaming video and audio.
●● Writing to log files.
●● Storing data for backup and restore, disaster recovery, and archiving.
●● Storing data for analysis by an on-premises or Azure-hosted service.
Users or client applications can access objects in Blob storage via HTTP/HTTPS, from anywhere in the
world. Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI,
or an Azure Storage client library.
An Azure Storage account is the top-level container for all of your Azure Blob storage. The storage
account provides a unique namespace for your Azure Storage data that is accessible from anywhere in
the world over HTTP or HTTPS.
Types of storage accounts


Azure Storage offers several types of storage accounts. Each type supports different features and has its
own pricing model. The types of storage accounts are:
●● General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables.
Recommended for most scenarios using Azure Storage.
●● General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use
general-purpose v2 accounts instead when possible.
●● Block blob storage accounts: Blob-only storage accounts with premium performance characteristics.
Recommended for scenarios with high transaction rates, using smaller objects, or requiring
consistently low storage latency.
●● FileStorage storage accounts: Files-only storage accounts with premium performance characteristics.
Recommended for enterprise or high performance scale applications.
●● Blob storage accounts: Blob-only storage accounts. Use general-purpose v2 accounts instead when
possible.
The following table describes the types of storage accounts and their capabilities:

Storage account type | Supported services | Supported performance tiers | Supported access tiers | Replication options | Deployment model (1) | Encryption (2)
General-purpose V2 | Blob, File, Queue, Table, and Disk | Standard, Premium (5) | Hot, Cool, Archive (3) | LRS, GRS, RA-GRS, ZRS, GZRS (preview), RA-GZRS (preview) (4) | Resource Manager | Encrypted
General-purpose V1 | Blob, File, Queue, Table, and Disk | Standard, Premium (5) | N/A | LRS, GRS, RA-GRS | Resource Manager, Classic | Encrypted
Block blob storage | Blob (block blobs and append blobs only) | Premium | N/A | LRS | Resource Manager | Encrypted
FileStorage | Files only | Premium | N/A | LRS | Resource Manager | Encrypted
Blob storage | Blob (block blobs and append blobs only) | Standard | Hot, Cool, Archive (3) | LRS, GRS, RA-GRS | Resource Manager | Encrypted

●● (1) Using the Azure Resource Manager deployment model is recommended. Storage accounts using the
classic deployment model can still be created in some locations, and existing classic accounts continue
to be supported.
●● (2) All storage accounts are encrypted using Storage Service Encryption (SSE) for data at rest.
●● (3) The Archive tier is available at the level of an individual blob only, not at the storage account level.
Only block blobs and append blobs can be archived.
●● (4) Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS) (preview) are available only
for standard general-purpose v2 storage accounts. For more information about ZRS, see Zone-redundant
storage (ZRS): Highly available Azure Storage applications.
●● (5) Premium performance for general-purpose v2 and general-purpose v1 accounts is available for disk
and page blobs only.

General-purpose v2 accounts
General-purpose v2 storage accounts support the latest Azure Storage features and deliver the lowest
per-gigabyte capacity prices for Azure Storage. General-purpose v2 storage accounts offer multiple
access tiers for storing data based on your usage patterns.
Note: Microsoft recommends using a general-purpose v2 storage account for most scenarios. You can
easily upgrade a general-purpose v1 or Blob storage account to a general-purpose v2 account with no
downtime and without the need to copy data.

General-purpose v1 accounts
While general-purpose v2 accounts are recommended in most cases, general-purpose v1 accounts are
best suited to these scenarios:
●● Your applications require the Azure classic deployment model.
●● Your applications are transaction-intensive or use significant geo-replication bandwidth, but do not
require large capacity.
●● You use a version of the Storage Services REST API that is earlier than 2014-02-14 or a client library
with a version lower than 4.x.

Access tiers for block blob data


Azure Storage provides different options for accessing block blob data based on usage patterns. Each
access tier in Azure Storage is optimized for a particular pattern of data usage. By selecting the right
access tier for your needs, you can store your block blob data in the most cost-effective manner.
The available access tiers are:
●● The Hot access tier, which is optimized for frequent access of objects in the storage account. Access-
ing data in the hot tier is most cost-effective, while storage costs are higher. New storage accounts
are created in the hot tier by default.
●● The Cool access tier, which is optimized for storing large amounts of data that is infrequently ac-
cessed and stored for at least 30 days. Storing data in the cool tier is more cost-effective, but access-
ing that data may be more expensive than accessing data in the hot tier.
●● The Archive tier, which is available only for individual block blobs. The archive tier is optimized for
data that can tolerate several hours of retrieval latency and will remain in the Archive tier for at least
180 days. The archive tier is the most cost-effective option for storing data, but accessing that data is
more expensive than accessing data in the hot or cool tiers.
If there is a change in the usage pattern of your data, you can switch between these access tiers at any
time.
Blob storage resources


Blob storage offers three types of resources:
●● The storage account.
●● A container in the storage account
●● A blob in a container
The following diagram shows the relationship between these resources.

Storage accounts
A storage account provides a unique namespace in Azure for your data. Every object that you store in
Azure Storage has an address that includes your unique account name. The combination of the account
name and the Azure Storage blob endpoint forms the base address for the objects in your storage
account.
For example, if your storage account is named mystorageaccount, then the default endpoint for Blob
storage is:
http://mystorageaccount.blob.core.windows.net

Containers
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include
an unlimited number of containers, and a container can store an unlimited number of blobs. The contain-
er name must be lowercase.

Blobs
Azure Storage supports three types of blobs:
●● Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data
that can be managed individually.
●● Append blobs are made up of blocks like block blobs, but are optimized for append operations.
Append blobs are ideal for scenarios such as logging data from virtual machines.
●● Page blobs store random access files up to 8 TB in size. Page blobs store virtual hard drive (VHD) files
and serve as disks for Azure virtual machines. For more information about page blobs, see Overview
of Azure page blobs
Azure Storage security


Azure Storage provides a comprehensive set of security capabilities that together enable developers to
build secure applications:
●● All data (including metadata) written to Azure Storage is automatically encrypted using Storage
Service Encryption (SSE).
●● Azure Active Directory (Azure AD) and Role-Based Access Control (RBAC) are supported for Azure
Storage for both resource management operations and data operations, as follows:

●● You can assign RBAC roles scoped to the storage account to security principals and use Azure AD
to authorize resource management operations such as key management.
●● Azure AD integration is supported for blob and queue data operations. You can assign RBAC roles
scoped to a subscription, resource group, storage account, or an individual container or queue to
a security principal or a managed identity for Azure resources.
●● Data can be secured in transit between an application and Azure by using Client-Side Encryption,
HTTPS, or SMB 3.0.
●● OS and data disks used by Azure virtual machines can be encrypted using Azure Disk Encryption.
●● Delegated access to the data objects in Azure Storage can be granted using a shared access signature, as sketched below.
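As a minimal sketch of that last point, the following .NET example uses the Microsoft.Azure.Storage.Blob v11 client library (the library used later in this module) to generate a read-only SAS for a container. The one-hour expiry and read-only permission are illustrative choices, not requirements.
using System;
using Microsoft.Azure.Storage.Blob;

public static class SasExample
{
    public static string GetContainerSasUri(CloudBlobContainer container)
    {
        // Define read-only access that expires in one hour.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        };

        // The SAS token is appended to the container URI to grant time-limited, delegated access.
        string sasToken = container.GetSharedAccessSignature(policy);
        return container.Uri.ToString() + sasToken;
    }
}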

Azure Storage encryption for data at rest


Azure Storage automatically encrypts your data when persisting it to the cloud. Encryption protects your
data and helps you to meet your organizational security and compliance commitments. Data in Azure
Storage is encrypted and decrypted transparently using 256-bit AES encryption, one of the strongest
block ciphers available, and is FIPS 140-2 compliant. Azure Storage encryption is similar to BitLocker
encryption on Windows.
Azure Storage encryption is enabled for all new and existing storage accounts and cannot be disabled.
Because your data is secured by default, you don't need to modify your code or applications to take
advantage of Azure Storage encryption.
Storage accounts are encrypted regardless of their performance tier (standard or premium) or deploy-
ment model (Azure Resource Manager or classic). All Azure Storage redundancy options support encryp-
tion, and all copies of a storage account are encrypted. All Azure Storage resources are encrypted,
including blobs, disks, files, queues, and tables. All object metadata is also encrypted.
Encryption does not affect Azure Storage performance. There is no additional cost for Azure Storage
encryption.

Encryption key management


You can rely on Microsoft-managed keys for the encryption of your storage account, or you can manage
encryption with your own keys. If you choose to manage encryption with your own keys, you have two
options:
●● You can specify a customer-managed key to use for encrypting and decrypting all data in the storage
account. A customer-managed key is used to encrypt all data in all services in your storage account.
●● You can specify a customer-provided key on Blob storage operations. A client making a read or write
request against Blob storage can include an encryption key on the request for granular control over
how blob data is encrypted and decrypted.
The following table compares key management options for Azure Storage encryption.

  | Microsoft-managed keys | Customer-managed keys | Customer-provided keys
Encryption/decryption operations | Azure | Azure | Azure
Azure Storage services supported | All | Blob storage, Azure Files | Blob storage
Key storage | Microsoft key store | Azure Key Vault | Azure Key Vault or any other key store
Key rotation responsibility | Microsoft | Customer | Customer
Key usage | Microsoft | Azure portal, Storage Resource Provider REST API, Azure Storage management libraries, PowerShell, CLI | Azure Storage REST API (Blob storage), Azure Storage client libraries
Key access | Microsoft only | Microsoft, Customer | Customer only

Azure Storage redundancy


The data in your Microsoft Azure storage account is always replicated to ensure durability and high
availability. Azure Storage copies your data so that it is protected from planned and unplanned events,
including transient hardware failures, network or power outages, and massive natural disasters. You can
choose to replicate your data within the same data center, across zonal data centers within the same
region, or across geographically separated regions.
Choosing a redundancy option
When you create a storage account, you can select one of the following redundancy options:
●● Locally redundant storage (LRS)
●● Zone-redundant storage (ZRS)
●● Geo-redundant storage (GRS)
●● Read-access geo-redundant storage (RA-GRS)
●● Geo-zone-redundant storage (GZRS)
●● Read-access geo-zone-redundant storage (RA-GZRS)
The following table provides a quick overview of the scope of durability and availability that each replica-
tion strategy will provide you for a given type of event (or event of similar impact).

Scenario | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS (preview)
Node unavailability within a data center | Yes | Yes | Yes | Yes
An entire data center (zonal or non-zonal) becomes unavailable | No | Yes | Yes | Yes
A region-wide outage | No | No | Yes | Yes
Read access to your data (in a remote, geo-replicated region) in the event of region-wide unavailability | No | No | Yes (with RA-GRS) | Yes (with RA-GZRS)
Designed to provide __ durability of objects over a given year | at least 99.99…% (11 9's) | at least 99.99…% (12 9's) | at least 99.99…% (16 9's) | at least 99.99…% (16 9's)
Supported storage account types | GPv2, GPv1, Blob | GPv2 | GPv2, GPv1, Blob | GPv2
Availability SLA for read requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) for GRS; At least 99.99% (99.9% for cool access tier) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS; At least 99.99% (99.9% for cool access tier) for RA-GZRS
Availability SLA for write requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier)
All data in your storage account is replicated, including block blobs and append blobs, page blobs,
queues, tables, and files. All types of storage accounts are replicated, although ZRS requires a
general-purpose v2 storage account.

Demo: Create a block blob storage account


The block blob storage account type lets you create block blobs with premium performance
characteristics. This type of storage account is optimized for workloads with high transaction rates or
that require very fast access times.
In this demo we'll cover how to create a block blob storage account by using the Azure portal, and in the
cloud shell using both Azure PowerShell and Azure CLI.
Create account in the Azure portal


To create a block blob storage account in the Azure portal, follow these steps:
1. In the Azure portal, select All services > the Storage category > Storage accounts.
2. Under Storage accounts, select Add.
3. In the Subscription field, select the subscription in which to create the storage account.
4. In the Resource group field, select an existing resource group or select Create new, and enter a
name for the new resource group.
5. In the Storage account name field, enter a name for the account. Note the following guidelines:
●● The name must be unique across Azure.
●● The name must be between three and 24 characters long.
●● The name can include only numbers and lowercase letters.
6. In the Location field, select a location for the storage account, or use the default location.
7. For the rest of the settings, configure the following:

Field | Value
Performance | Select Premium.
Account kind | Select BlockBlobStorage.
Replication | Leave the default setting of Locally-redundant storage (LRS).
8. Select Review + create to review the storage account settings.
9. Select Create.

Create account by using Azure Cloud Shell


1. Login to the Azure Portal1 and open the Cloud Shell.
●● You can also login to the Azure Cloud Shell2 directly.
2. Select PowerShell in the top-left section of the cloud shell.
3. Create a new resource group. Replace the values in quotations, and run the following commands:
$resourcegroup = "new_resource_group_name"
$location = "region_name"
New-AzResourceGroup -Name $resourceGroup -Location $location

4. Create the block blob storage account. See Step 5 in the Create account in the Azure portal instruc-
tions above for the storage account name requirements. Replace the values in quotations, and run the
following commands:
$storageaccount = "new_storage_account_name"

New-AzStorageAccount -ResourceGroupName $resourcegroup -Name $storageaccount `
-Location $location -Kind "BlockBlobStorage" -SkuName "Premium_LRS"

1 https://portal.azure.com
2 https://shell.azure.com
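For the Azure CLI mentioned at the start of this demo, a minimal sketch of the equivalent commands in the Cloud Shell (Bash) follows; the resource group name and region below are placeholders.
# Create a resource group (placeholder name and region).
az group create --name az204-blockblob-rg --location westus

# Create the block blob storage account with premium performance.
az storage account create --name <new_storage_account_name> \
    --resource-group az204-blockblob-rg \
    --location westus \
    --kind BlockBlobStorage \
    --sku Premium_LRS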


Managing the Azure Blob storage lifecycle


Managing the Azure Blob storage lifecycle
Data sets have unique lifecycles. Early in the lifecycle, people access some data often. But the need for
access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once
stored. Some data expires days or months after creation, while other data sets are actively read and
modified throughout their lifetimes.
Azure Blob storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage
accounts. Use the policy to transition your data to the appropriate access tiers or to expire data at the
end of the data's lifecycle.
The lifecycle management policy lets you:
●● Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for
performance and cost
●● Delete blobs at the end of their lifecycles
●● Define rules to be run once per day at the storage account level
●● Apply rules to containers or a subset of blobs (using prefixes as filters)
Consider a scenario where data gets frequent access during the early stages of the lifecycle, but only
occasionally after two weeks. Beyond the first month, the data set is rarely accessed. In this scenario, hot
storage is best during the early stages. Cool storage is most appropriate for occasional access. Archive
storage is the best tier option after the data ages over a month. By adjusting storage tiers with respect to
the age of data, you can design the least expensive storage options for your needs. To achieve this
transition, lifecycle management policy rules are available to move aging data to cooler tiers.
The lifecycle management policy is available with General Purpose v2 (GPv2) accounts, Blob storage
accounts, and Premium Block Blob storage accounts. In the Azure portal, you can upgrade an existing
General Purpose (GPv1) account to a GPv2 account.

Azure Blob storage policies


A lifecycle management policy is a collection of rules in a JSON document:
{
"rules": [
{
"name": "rule1",
"enabled": true,
"type": "Lifecycle",
"definition": {...}
},
{
"name": "rule2",
"type": "Lifecycle",
"definition": {...}
}
]
}
A policy is a collection of rules:

Parameter name | Parameter type | Notes
rules | An array of rule objects | At least one rule is required in a policy. You can define up to 100 rules in a policy.
Each rule within the policy has several parameters:

Parameter name | Parameter type | Notes | Required
name | String | A rule name can include up to 256 alphanumeric characters. Rule name is case-sensitive. It must be unique within a policy. | True
enabled | Boolean | An optional boolean to allow a rule to be temporarily disabled. Default value is true if it's not set. | False
type | An enum value | The current valid type is Lifecycle. | True
definition | An object that defines the lifecycle rule | Each definition is made up of a filter set and an action set. | True

Rules
Each rule definition includes a filter set and an action set. The filter set limits rule actions to a certain set
of objects within a container or to objects with matching names. The action set applies the tier or delete
actions to the filtered set of objects.

Sample rule
The following sample rule filters the account to run the actions on objects that exist inside container1
and start with foo.
●● Tier blob to cool tier 30 days after last modification
●● Tier blob to archive tier 90 days after last modification
●● Delete blob 2,555 days (seven years) after last modification
●● Delete blob snapshots 90 days after snapshot creation
{
"rules": [
{
"name": "ruleFoo",
"enabled": true,
"type": "Lifecycle",
"definition": {
"filters": {
"blobTypes": [ "blockBlob" ],
"prefixMatch": [ "container1/foo" ]
},
"actions": {
"baseBlob": {
"tierToCool": { "daysAfterModificationGreaterThan": 30 },
"tierToArchive": { "daysAfterModificationGreaterThan": 90 },
"delete": { "daysAfterModificationGreaterThan": 2555 }
},
"snapshot": {
"delete": { "daysAfterCreationGreaterThan": 90 }
}
}
}
}
]
}

Rule filters
Filters limit rule actions to a subset of blobs within the storage account. If more than one filter is defined,
a logical AND runs on all filters.
Filters include:

Filter name | Filter type | Notes | Is Required
blobTypes | An array of predefined enum values. | The current release supports blockBlob. | Yes
prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 prefixes. A prefix string must start with a container name. For example, if you want to match all blobs under https://myaccount.blob.core.windows.net/container1/foo/... for a rule, the prefixMatch is container1/foo. | If you don't define prefixMatch, the rule applies to all blobs within the storage account. | No

Rule actions
Actions are applied to the filtered blobs when the run condition is met.
Lifecycle management supports tiering and deletion of blobs and deletion of blob snapshots. Define at
least one action for each rule on blobs or blob snapshots.
Action | Base Blob | Snapshot
tierToCool | Support blobs currently at hot tier | Not supported
tierToArchive | Support blobs currently at hot or cool tier | Not supported
delete | Supported | Supported
✔️ Note: If you define more than one action on the same blob, lifecycle management applies the least
expensive action to the blob. For example, action delete is cheaper than action tierToArchive.
Action tierToArchive is cheaper than action tierToCool.
The run conditions are based on age. Base blobs use the last modified time to track age, and blob
snapshots use the snapshot creation time to track age.

Action run condition | Condition value | Description
daysAfterModificationGreaterThan | Integer value indicating the age in days | The condition for base blob actions
daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for blob snapshot actions

Demo: Add a policy to Azure Blob storage


You can add, edit, or remove a policy by using any of the following methods:
●● Azure portal
●● Azure PowerShell
●● Azure CLI
●● REST APIs
We'll show the steps and some examples for the Portal and PowerShell.

Azure portal
There are two ways to add a policy through the Azure portal: Azure portal List view, and Azure portal
Code view.

Azure portal List view


1. Sign in to the Azure portal3.
2. Select All resources and then select your storage account.
3. Under Blob Service, select Lifecycle management to view or change your rules.
4. Select the List view tab.
5. Select Add rule and then fill out the Action set form fields. In the following example, blobs are
moved to cool storage if they haven't been modified for 30 days.
6. Select Filter set to add an optional filter. Then, select Browse to specify a container and folder by
which to filter.

3 https://portal.azure.com
7. Select Review + add to review the policy settings.


8. Select Add to add the new policy.

Azure portal Code view


1. Follow the first three steps above in the List view section.
2. Select the Code view tab. The following JSON is an example of a policy that can be pasted into the
Code view tab.
{
"rules": [
{
"name": "ruleFoo",
"enabled": true,
"type": "Lifecycle",
"definition": {
"filters": {
"blobTypes": [ "blockBlob" ],
"prefixMatch": [ "container1/foo" ]
},
"actions": {
"baseBlob": {
"tierToCool": { "daysAfterModificationGreaterThan": 30 },
"tierToArchive": { "daysAfterModificationGreaterThan": 90 },
"delete": { "daysAfterModificationGreaterThan": 2555 }
},
"snapshot": {
"delete": { "daysAfterCreationGreaterThan": 90 }
}
}
}
}
]
}

3. Select Save.

PowerShell
The following PowerShell script can be used to add a policy to your storage account. The $rgname
variable must be initialized with your resource group name. The $accountName variable must be
initialized with your storage account name.
#Install the latest module if you are running this in a local PowerShell instance
Install-Module -Name Az -Repository PSGallery

#Initialize the following with your resource group and storage account names
$rgname = ""
$accountName = ""

#Create a new action object


$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -daysAfterModificationGreaterThan 2555
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction TierToArchive -daysAfterModificationGreaterThan 90
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction TierToCool -daysAfterModificationGreaterThan 30
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -SnapshotAction Delete -daysAfterCreationGreaterThan 90

# Create a new filter object


# PowerShell automatically sets BlobType as “blockblob” because it is the
# only available option currently
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch ab,cd

# Create a new rule object


# PowerShell automatically sets Type as “Lifecycle” because it is the
# only available option currently
$rule1 = New-AzStorageAccountManagementPolicyRule -Name Test -Action $action -Filter $filter

#Set the policy


$policy = Set-AzStorageAccountManagementPolicy -ResourceGroupName $rgname -StorageAccountName $accountName -Rule $rule1

REST APIs
You can also create and manage lifecycle policies by using REST APIs. For information on the operations
see the Management Policies4 reference page.

Rehydrate blob data from the archive tier


While a blob is in the archive access tier, it's considered offline and can't be read or modified. The blob
metadata remains online and available, allowing you to list the blob and its properties. Reading and
modifying blob data is only available with online tiers such as hot or cool. There are two options to
retrieve and access data stored in the archive access tier.
1. Rehydrate an archived blob to an online tier.
2. Copy an archived blob to an online tier.

Rehydrate an archived blob to an online tier


To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is
known as rehydration and can take hours to complete. We recommend large blob sizes for optimal
rehydration performance. Rehydrating several small blobs concurrently may add additional time. There
are currently two rehydrate priorities, High (preview) and Standard, which can be set via the optional
x-ms-rehydrate-priority property on a Set Blob Tier or Copy Blob operation.
●● Standard priority: The rehydration request will be processed in the order it was received and may
take up to 15 hours.

4 https://docs.microsoft.com/en-us/rest/api/storagerp/managementpolicies
●● High priority (preview): The rehydration request will be prioritized over Standard requests and may
finish in under 1 hour. High priority may take longer than 1 hour, depending on blob size and current
demand. High priority requests are guaranteed to be prioritized over Standard priority requests.
Standard priority is the default rehydration option for archive. High priority is a faster option that will cost
more than Standard priority rehydration and is usually reserved for use in emergency data restoration
situations.

Set Blob Tier operation


You can change the access tier of a blob by using the Set Blob Tier operation in the Blob Service
REST API. The operation is allowed on a page blob in a premium storage account and on a block blob in
a blob storage or general purpose v2 account. This operation does not update the blob's ETag.
Below is an example of the construction of the operation, HTTPS is recommended. Replace myaccount
with the name of your storage account and myblob with the blob name for which the tier is to be
changed.

Method | Request URI | HTTP Version
PUT | https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=tier | HTTP/1.1
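For illustration, a minimal .NET sketch using the Microsoft.Azure.Storage.Blob v11 client library (the library used elsewhere in this module) performs the same tier change; the connection string, container, and blob names are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

public static class RehydrateExample
{
    public static async Task RehydrateBlobAsync(string connectionString)
    {
        // "mycontainer" and "myblob" are the placeholder names used above.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("mycontainer");
        CloudBlockBlob blob = container.GetBlockBlobReference("myblob");

        // Move the archived block blob to the hot tier. This starts rehydration,
        // which can take hours to complete at Standard priority.
        await blob.SetStandardBlobTierAsync(StandardBlobTier.Hot);
    }
}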

Copy an archived blob to an online tier


If you don't want to rehydrate a blob, you can choose a Copy Blob operation. Your original blob will
remain unmodified in archive while you work on the new blob in the hot or cool tier. You can set the
optional x-ms-rehydrate-priority property to Standard or High (preview) when using the copy
process.
Archive blobs can only be copied to online destination tiers. Copying an archive blob to another archive
blob isn't supported.
Copying a blob from Archive takes time. Behind the scenes, the Copy Blob operation temporarily
rehydrates your archive source blob to create a new online blob in the destination tier. This new blob is
not available until the temporary rehydration from archive is complete and the data is written to the new
blob.

Copy Blob operation


You can copy a blob using the Copy Blob operation in the Blob Service REST API. The Copy Blob
operation copies a blob to a destination within the storage account.
●● In version 2012-02-12 and later, the source for a Copy Blob operation can be a committed blob in
any Azure storage account.
●● Beginning with version 2015-02-21, the source for a Copy Blob operation can be an Azure file in any
Azure storage account.
Note: Only storage accounts created on or after June 7th, 2012 allow the Copy Blob operation to copy
from another storage account.
Below is an example of the construction of the operation, HTTPS is recommended. Replace myaccount
with the name of your storage account, mycontainer with the name of your container, and myblob with
the name of your destination blob.
Method | Request URI | HTTP Version
PUT | https://myaccount.blob.core.windows.net/mycontainer/myblob | HTTP/1.1

Additional resources
For more details on REST API operations covered here, see:
●● Set Blob Tier: https://docs.microsoft.com/en-us/rest/api/storageservices/set-blob-tier
●● Copy Blob: https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob
Working with Azure Blob storage


Azure Blob storage client library for .NET v11
Listed below are some of the main classes and objects you'll use to code against Azure blob storage.
●● CloudStorageAccount: The CloudStorageAccount class represents your Azure storage account.
Use this class to authorize access to Blob storage using your account access keys.
●● CloudBlobClient: The CloudBlobClient class provides a point of access to the Blob service in your
code.
●● CloudBlobContainer: The CloudBlobContainer class represents a blob container in your code.
●● CloudBlockBlob: The CloudBlockBlob object represents a block blob in your code. Block blobs are
made up of blocks of data that can be managed individually.

Demo: Using the Azure Blob storage client library for .NET v11
This demo uses the Azure Blob storage client library to show you how to perform the following actions
on Azure Blob storage in a console app:
●● Authenticate the client
●● Create a container
●● Upload blobs to a container
●● List the blobs in a container
●● Download blobs
●● Delete a container

Setting up
Perform the following actions to prepare Azure, and your local environment, for the rest of the demo.

Create an Azure storage account and get credentials


1. Create a storage account.
We need a storage account created to use in the application. You can either create it through the
portal or use the following PowerShell script in the Cloud Shell. The script will prompt you for a region
for the resources and then create resources needed for the demo.
# Prompts for a region for the resources to be created
# and generates a random name for the resources using
# "az204" as the prefix for easy identification in the portal

$myLocation = Read-Host -Prompt "Enter the region (i.e. westus): "


$myResourceGroup = "az204-blobdemo-rg"
$myStorageAcct = "az204blobdemo" + $(get-random -minimum 10000 -maximum 100000)

# Create the resource group


New-AzResourceGroup -Name $myResourceGroup -Location $myLocation

# Create the storage account


New-AzStorageAccount -ResourceGroupName $myResourceGroup -Name $myStorageAcct `
-Location $myLocation -SkuName "Standard_LRS"

Write-Host "`nNote the following resource group, and storage account names, you will use them in
the code examples below.
Resource group: $myResourceGroup
Storage account: $myStorageAcct"

2. Get credentials for the storage account.


●● Navigate to the Azure portal5.
●● Locate the storage account created in step 1.
●● In the Settings section of the storage account overview, select Access keys. Here, you can view
your account access keys and the complete connection string for each key.
●● Find the Connection string value under key1, and select the Copy button to copy the connection
string. You will add the connection string value to the code in the next section.
●● In the Blob section of the storage account overview, select Containers. Leave the windows open
so you can view changes made to the storage as you progress through the demo.

Prepare the .NET project


In this section we'll create project named az204-blobdemo and install the Azure Blob Storage client
library.
1. In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a
new console app. This command creates a simple “Hello World” C# project with a single source file:
Program.cs.
dotnet new console -n az204-blobdemo

2. Use the following commands to switch to the newly created az204-blobdemo folder and build the app
to verify that all is well.
cd az204-blobdemo
dotnet build

3. While still in the application directory, install the Azure Blob Storage client library for .NET package by
using the dotnet add package command.
dotnet add package Microsoft.Azure.Storage.Blob

Note: Leave the console window open so you can use it to build and run the app later in the demo.

5 https://portal.azure.com/
4. Open the Program.cs file in your editor, and replace the contents with the following code. The code
uses the TryParse method to verify if the connection string can be parsed to create a
CloudStorageAccount object.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

namespace az204_blobdemo
{
class Program
{
public static void Main()
{
Console.WriteLine("Azure Blob Storage Demo\n");

// Run the examples asynchronously, wait for the results before proceeding
ProcessAsync().GetAwaiter().GetResult();

Console.WriteLine("Press enter to exit the sample application.");


Console.ReadLine();

private static async Task ProcessAsync()


{
// Copy the connection string from the portal in the variable below.
string storageConnectionString = "PLACE CONNECTION STRING HERE";

// Check whether the connection string can be parsed.


CloudStorageAccount storageAccount;
if (CloudStorageAccount.TryParse(storageConnectionString, out storageAccount))
{
// Run the example code if the connection string is valid.
Console.WriteLine("Valid connection string.\r\n");

// EXAMPLE CODE STARTS BELOW HERE

//EXAMPLE CODE ENDS ABOVE HERE


}
else
{
// Otherwise, let the user know that they need to set the storageConnectionString variable.
Console.WriteLine(
"A valid connection string has not been defined in the storageConnectionString varia-
ble.");
Console.WriteLine("Press enter to exit the application.");
Console.ReadLine();
}
}
}
}

 
❗️ Important: The ProcessAsync method above contains directions on where to place the
example code for the demo.
5. Set the storageConnectionString variable to the value you copied from the portal.
6. Build and run the application to verify your connection string is valid by using the dotnet build
and dotnet run commands in your console window.

Building the full app


For each of the following sections below you'll find a brief description of the action being taken as well as
the code snippet you'll add to the project. Each new snippet is appended to the one before it, and we'll
build and run the console app at the end.
For each example below:
1. Copy the code and append it to the previous snippet in the example code section of the Program.cs
file.

Create a container
To create the container, first create an instance of the CloudBlobClient object, which points to Blob
storage in your storage account. Next, create an instance of the CloudBlobContainer object, then
create the container. The code below calls the CreateAsync method to create the container. A GUID
value is appended to the container name to ensure that it is unique. In a production environment, it's
often preferable to use the CreateIfNotExistsAsync method to create a container only if it does not
already exist.
// Create the CloudBlobClient that points to Blob storage in your storage account.
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();

// Create a container called 'demoblobs' and
// append a GUID value to it to make the name unique.
CloudBlobContainer cloudBlobContainer =
cloudBlobClient.GetContainerReference("demoblobs" +
Guid.NewGuid().ToString());
await cloudBlobContainer.CreateAsync();

Console.WriteLine("A container has been created, note the " +


"'Public access level' in the portal.");
Console.WriteLine("Press 'Enter' to continue.");
Console.ReadLine();

Upload blobs to a container


The following code snippet gets a reference to a CloudBlockBlob object by calling the GetBlock-
BlobReference method on the container created in the previous section. It then uploads the selected
local file to the blob by calling the UploadFromFileAsync method. This method creates the blob if it
doesn't already exist, and overwrites it if it does.
// Create a file in your local MyDocuments folder to upload to a blob.


string localPath = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
string localFileName = "BlobDemo_" + Guid.NewGuid().ToString() + ".txt";
string sourceFile = Path.Combine(localPath, localFileName);
// Write text to the file.
File.WriteAllText(sourceFile, "Hello, World!");

Console.WriteLine("\r\nTemp file = {0}", sourceFile);


Console.WriteLine("Uploading to Blob storage as blob '{0}'", localFileName);

// Get a reference to the blob address, then upload the file to the blob.
// Use the value of localFileName for the blob name.
CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(localFileName);
await cloudBlockBlob.UploadFromFileAsync(sourceFile);

Console.WriteLine("\r\nVerify the creation of the blob and upload in the portal.");


Console.WriteLine("Press 'Enter' to continue.");
Console.ReadLine();

List the blobs in a container


List the blobs in the container by using the ListBlobsSegmentedAsync method. In this case, only one
blob has been added to the container, so the listing operation returns just that one blob. The code below
shows the best practice of using a continuation token to ensure you've retrieved the full list, because a
single call returns a maximum of 5,000 blobs by default.
// List the blobs in the container.
Console.WriteLine("List blobs in container.");
BlobContinuationToken blobContinuationToken = null;
do
{
var results = await cloudBlobContainer.ListBlobsSegmentedAsync(null, blobContinuationToken);
// Get the value of the continuation token returned by the listing call.
blobContinuationToken = results.ContinuationToken;
foreach (IListBlobItem item in results.Results)
{
Console.WriteLine(item.Uri);
}
} while (blobContinuationToken != null); // Loop while the continuation token is not null.

Console.WriteLine("\r\nCompare the list in the console to the portal.");


Console.WriteLine("Press 'Enter' to continue.");
Console.ReadLine();

Download blobs
Download the blob created previously to your local file system by using the DownloadToFileAsync
method. The example code adds a suffix of “_DOWNLOADED” to the blob name so that you can see both
files in the local file system.
// Download the blob to a local file, using the reference created earlier.
// Append the string "_DOWNLOADED" before the .txt extension so that you
// can see both files in MyDocuments.
string destinationFile = sourceFile.Replace(".txt", "_DOWNLOADED.txt");
Console.WriteLine("Downloading blob to {0}", destinationFile);
await cloudBlockBlob.DownloadToFileAsync(destinationFile, FileMode.Create);

Console.WriteLine("\r\nLocate the local file to verify it was downloaded.");


Console.WriteLine("Press 'Enter' to continue.");
Console.ReadLine();

Delete a container
The following code cleans up the resources the app created by deleting the entire container using
CloudBlobContainer.DeleteIfExistsAsync. You can also delete the local files if you like.
// Clean up the resources created by the app
Console.WriteLine("Press the 'Enter' key to delete the example files " +
"and example container.");
Console.ReadLine();
// Clean up resources. This includes the container and the two temp files.
Console.WriteLine("Deleting the container");
if (cloudBlobContainer != null)
{
await cloudBlobContainer.DeleteIfExistsAsync();
}
Console.WriteLine("Deleting the source, and downloaded files\r\n");
File.Delete(sourceFile);
File.Delete(destinationFile);

Run the code


Now that the app is complete it's time to build and run it. Ensure you are in your application directory
and run the following commands:
dotnet build
dotnet run

There are many prompts in the app to allow you to take the time to see what's happening in the portal
after each step.

Clean up other resources


The app deleted the resources it created. If you no longer need the resource group or storage account
created earlier be sure to delete those through the portal.
Setting and Retrieving Properties and Metadata for Blob Resources by using REST
Containers and blobs support custom metadata, represented as HTTP headers. Metadata headers can be
set on a request that creates a new container or blob resource, or on a request that explicitly creates a
property on an existing resource.

Metadata Header Format


Metadata headers are name/value pairs. The format for the header is:
x-ms-meta-name:string-value

Beginning with version 2009-09-19, metadata names must adhere to the naming rules for C# identifiers.
Names are case-insensitive. Note that metadata names preserve the case with which they were created,
but are case-insensitive when set or read. If two or more metadata headers with the same name are
submitted for a resource, the Blob service returns status code 400 (Bad Request).
The metadata consists of name/value pairs. The total size of all metadata pairs can be up to 8KB in size.
Metadata name/value pairs are valid HTTP headers, and so they adhere to all restrictions governing HTTP
headers.

Operations on Metadata
Metadata on a blob or container resource can be retrieved or set directly, without returning or altering
the content of the resource.
Note that metadata values can only be read or written in full; partial updates are not supported. Setting
metadata on a resource overwrites any existing metadata values for that resource.

Retrieving Properties and Metadata


The GET and HEAD operations both retrieve metadata headers for the specified container or blob. These
operations return headers only; they do not return a response body. The URI syntax for retrieving meta-
data headers on a container is as follows:
GET/HEAD https://myaccount.blob.core.windows.net/mycontainer?restype=container

The URI syntax for retrieving metadata headers on a blob is as follows:


GET/HEAD https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=metadata

Setting Metadata Headers


The PUT operation sets metadata headers on the specified container or blob, overwriting any existing
metadata on the resource. Calling PUT without any headers on the request clears all existing metadata on
the resource.

The URI syntax for setting metadata headers on a container is as follows:


PUT https://myaccount.blob.core.windows.net/mycontainer?comp=metadata&restype=container

The URI syntax for setting metadata headers on a blob is as follows:


PUT https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=metadata
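As an illustration, the following C# sketch issues these requests with HttpClient. It is not part of the original sample: the account and container names are the same placeholders used above, and it assumes you already hold a SAS token (the sasToken variable) that grants read and write permissions on the container.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class BlobMetadataRestSample
{
    static async Task Main()
    {
        // Placeholders: substitute your own container URL and SAS token.
        string containerUrl = "https://myaccount.blob.core.windows.net/mycontainer";
        string sasToken = "sv=...";   // assumed SAS token with read/write rights

        using HttpClient client = new HttpClient();

        // PUT ?comp=metadata&restype=container sets (and overwrites) the container metadata.
        var put = new HttpRequestMessage(
            HttpMethod.Put, $"{containerUrl}?comp=metadata&restype=container&{sasToken}");
        put.Headers.Add("x-ms-meta-docType", "textDocuments");
        HttpResponseMessage putResponse = await client.SendAsync(put);
        Console.WriteLine($"Set metadata: {(int)putResponse.StatusCode}");

        // GET ?restype=container returns headers describing the container,
        // including any x-ms-meta-* metadata headers. The response has no body.
        var get = new HttpRequestMessage(
            HttpMethod.Get, $"{containerUrl}?restype=container&{sasToken}");
        HttpResponseMessage getResponse = await client.SendAsync(get);
        foreach (var header in getResponse.Headers)
        {
            if (header.Key.StartsWith("x-ms-meta-", StringComparison.OrdinalIgnoreCase))
                Console.WriteLine($"{header.Key}: {string.Join(",", header.Value)}");
        }
    }
}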

Standard HTTP Properties for Containers and Blobs


Containers and blobs also support certain standard HTTP properties. Properties and metadata are both
represented as standard HTTP headers; the difference between them is in the naming of the headers.
Metadata headers are named with the header prefix x-ms-meta- and a custom name. Property headers
use standard HTTP header names, as specified in the Header Field Definitions section 14 of the HTTP/1.1
protocol specification.
The standard HTTP headers supported on containers include:
●● ETag
●● Last-Modified
The standard HTTP headers supported on blobs include:
●● ETag
●● Last-Modified
●● Content-Length
●● Content-Type
●● Content-MD5
●● Content-Encoding
●● Content-Language
●● Cache-Control
●● Origin
●● Range

Manipulating blob container properties in .NET


The CloudStorageAccount class contains the CreateCloudBlobClient method that gives you
programmatic access to a client that manages your blob containers:
CloudBlobClient client = storageAccount.CreateCloudBlobClient();

To reference a specific blob container, you can use the GetContainerReference method of the
CloudBlobClient class:
CloudBlobContainer container = client.GetContainerReference("images");

After you have a reference to the container, you can ensure that the container exists. This will create the
container if it does not already exist in the Azure storage account:
container.CreateIfNotExists();
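Taken together, and assuming a connection string is available in a storageConnectionString variable (for example, read from an environment variable), a minimal sketch of this setup might look like the following. Depending on the client library version you installed, the namespaces may be Microsoft.Azure.Storage.* or Microsoft.WindowsAzure.Storage.*:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

class ContainerSetup
{
    static async Task Main()
    {
        // Assumed: a valid storage account connection string from the environment.
        string storageConnectionString =
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");

        // Parse the connection string and create the blob client.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);
        CloudBlobClient client = storageAccount.CreateCloudBlobClient();

        // Reference the "images" container and create it if it doesn't already exist.
        CloudBlobContainer container = client.GetContainerReference("images");
        await container.CreateIfNotExistsAsync();

        Console.WriteLine($"Container ready: {container.Uri}");
    }
}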

Retrieving properties
With a hydrated reference, you can perform actions such as fetching the container's properties and
metadata by using the FetchAttributesAsync method of the CloudBlobContainer class:
await container.FetchAttributesAsync();

After the method is invoked, the local variable is hydrated with the container's current property values.
These values can be accessed by using the Properties property of the CloudBlobContainer class,
which is of type BlobContainerProperties:
container.Properties

This class has properties that can be set to change the container, including (but not limited to) those in
the following table.

Property | Description
ETag | This is a standard HTTP header that gives a value that is unchanged unless a property of the container is changed. This value can be used to implement optimistic concurrency with the blob containers.
LastModified | This property indicates when the container was last modified.
PublicAccess | This property indicates the level of public access that is allowed on the container. Valid values include Blob, Container, Off, and Unknown.
HasImmutabilityPolicy | This property indicates whether the container has an immutability policy. An immutability policy will help ensure that blobs are stored for a minimum amount of retention time.
HasLegalHold | This property indicates whether the container has an active legal hold. A legal hold will help ensure that blobs remain unchanged until the hold is removed.
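For example, once FetchAttributesAsync has completed, you could print a few of the values from the table above as a quick check; this short sketch simply reuses the container variable from the earlier snippets:
// Values described in the table above, available after FetchAttributesAsync.
Console.WriteLine($"ETag: {container.Properties.ETag}");
Console.WriteLine($"Last modified: {container.Properties.LastModified}");
Console.WriteLine($"Public access: {container.Properties.PublicAccess}");
Console.WriteLine($"Has immutability policy: {container.Properties.HasImmutabilityPolicy}");
Console.WriteLine($"Has legal hold: {container.Properties.HasLegalHold}");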

Setting properties
Using the existing CloudBlobContainer variable (named container), you can set and retrieve custom
metadata for the container instance. This metadata is hydrated when you call the FetchAttributes or
FetchAttributesAsync method on your blob or container to populate the Metadata collection.
The following code example sets metadata on a container. In this example, we use the collection's Add
method to set a metadata value:

container.Metadata.Add("docType", "textDocuments");

In the next example, we set the metadata value by using implicit key/value syntax:
container.Metadata["category"] = "guidance";

To persist the newly set metadata, you must call the SetMetadataAsync method of the
CloudBlobContainer class:
await container.SetMetadataAsync();

Lab and review questions


Lab: Retrieving Azure Storage resources and metadata by using the Azure Storage SDK for .NET

Lab scenario
You're preparing to host a web application in Microsoft Azure that uses a combination of raster and
vector graphics. As a development group, your team has decided to store any multimedia content in
Azure Storage and manage it in an automated fashion by using C# code in Microsoft .NET. Before you
begin this significant milestone, you have decided to take some time to learn the newest version of the
.NET SDK that's used to access Storage by creating a simple application to manage and enumerate blobs
and containers.

Objectives
After you complete this lab, you will be able to:
●● Create containers and upload blobs by using the Azure portal.
●● Enumerate blobs and containers by using the Microsoft Azure Storage SDK for .NET.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 3 review questions


Review Question 1
Which of the following types of blobs are designed to store text and binary data?
†† Page blob
†† Block blob
†† Append blob
†† Data blob

Review Question 2
There are three access tiers for block blob data: hot, cool, and archive. New storage accounts are created in
which tier by default?
†† Hot
†† Cool
†† Archive

Review Question 3
Which of the following classes provides a point of access to the blob service in your code?
†† CloudBlockBlob
†† CloudStorageAccount
†† CloudBlobClient
†† CloudBlobContainer

Review Question 4
True or false, all data written to Azure Storage is automatically encrypted using SSL.
†† True
†† False

Review Question 5
Which of the following redundancy options will protect your data in the event of a region-wide outage?
†† Locally redundant storage (LRS)
†† Read-access geo-redundant storage (RA-GRS)
†† Zone-redundant storage (ZRS)
†† Geo-redundant storage (GRS)

Answers
Review Question 1
Which of the following types of blobs are designed to store text and binary data?
†† Page blob
■■ Block blob
†† Append blob
†† Data blob
Explanation
Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data that
can be managed individually.
Review Question 2
There are three access tiers for block blob data: hot, cool, and archive. New storage accounts are created
in which tier by default?
■■ Hot
†† Cool
†† Archive
Explanation
The hot access tier is optimized for frequent access of objects in the storage account. New storage
accounts are created in the hot tier by default.
Review Question 3
Which of the following classes provides a point of access to the blob service in your code?
†† CloudBlockBlob
†† CloudStorageAccount
■■ CloudBlobClient
†† CloudBlobContainer
Explanation
The CloudBlobClient class provides a point of access to the blob service in your code.
Review Question 4
True or false, all data written to Azure Storage is automatically encrypted using SSL.
†† True
■■ False
Explanation
All data (including metadata) written to Azure Storage is automatically encrypted using Storage Service
Encryption (SSE).

Review Question 5
Which of the following redundancy options will protect your data in the event of a region-wide outage?
†† Locally redundant storage (LRS)
■■ Read-access geo-redundant storage (RA-GRS)
†† Zone-redundant storage (ZRS)
■■ Geo-redundant storage (GRS)
Explanation
GRS and RA-GRS will protect your data in case of a region-wide outage. LRS provides protection at the node
level within a data center. ZRS provides protection at the data center level (zonal or non-zonal).
Module 4 Develop solutions that use Cosmos DB storage

Azure Cosmos DB overview


Global data distribution with Azure Cosmos DB
Azure Cosmos DB is a globally distributed database service that's designed to provide low latency, elastic
scalability of throughput, well-defined semantics for data consistency, and high availability.
You can configure your databases to be globally distributed and available in any of the Azure regions. To
lower the latency, place the data close to where your users are. Choosing the required regions depends
on the global reach of your application and where your users are located. Cosmos DB transparently
replicates the data to all the regions associated with your Cosmos account. It provides a single system
image of your globally distributed Azure Cosmos database and containers that your application can read
and write to locally.
With Azure Cosmos DB, you can add or remove the regions associated with your account at any time.
Your application doesn't need to be paused or redeployed to add or remove a region. It continues to be
highly available all the time because of the multi-homing capabilities that the service natively provides.

Key benefits of global distribution


Build global active-active apps. With its novel multi-master replication protocol, every region supports
both writes and reads. The multi-master capability also enables:
●● Unlimited elastic write and read scalability.
●● 99.999% read and write availability all around the world.
●● Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile.

Consistency levels in Azure Cosmos DB


Azure Cosmos DB approaches data consistency as a spectrum of choices instead of two extremes. Strong
consistency and eventual consistency are at the ends of the spectrum, but there are many consistency
choices along the spectrum. Developers can use these options to make precise choices and granular
tradeoffs with respect to high availability and performance.

With Azure Cosmos DB, developers can choose from five well-defined consistency models on the
consistency spectrum. From strongest to most relaxed, the models include strong, bounded staleness, session,
consistent prefix, and eventual consistency. The models are well-defined and intuitive and can be used for
specific real-world scenarios. Each model provides availability and performance tradeoffs and is backed
by the SLAs. The following image shows the different consistency levels as a spectrum.

The consistency levels are region-agnostic and are guaranteed for all operations regardless of the region
from which the reads and writes are served, the number of regions associated with your Azure Cosmos
account, or whether your account is configured with a single or multiple write regions.
Read consistency applies to a single read operation scoped within a partition-key range or a logical
partition. The read operation can be issued by a remote client or a stored procedure.

Guarantees associated with consistency levels


The comprehensive SLAs provided by Azure Cosmos DB guarantee that 100 percent of read requests
meet the consistency guarantee for any consistency level you choose. A read request meets the consist-
ency SLA if all the consistency guarantees associated with the consistency level are satisfied. The precise
definitions of the five consistency levels in Azure Cosmos DB using the TLA+ specification language are
provided in the azure-cosmos-tla1 GitHub repo.
The semantics of the five consistency levels are described here:
●● Strong: Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests
concurrently. The reads are guaranteed to return the most recent committed version of an item. A
client never sees an uncommitted or partial write. Users are always guaranteed to read the latest
committed write.
●● Bounded staleness: The reads are guaranteed to honor the consistent-prefix guarantee. The reads
might lag behind writes by at most “K” versions (i.e., "updates") of an item or by “T” time interval. In
other words, when you choose bounded staleness, the "staleness" can be configured in two ways:
●● The number of versions (K) of the item
●● The time interval (T) by which the reads might lag behind the writes
Bounded staleness offers total global order except within the “staleness window.” The monotonic read
guarantees exist within a region both inside and outside the staleness window. Strong consistency has
the same semantics as the one offered by bounded staleness. The staleness window is equal to zero.
When a client performs read operations within a region that accepts writes, the guarantees provided
by bounded staleness consistency are identical to those guarantees by the strong consistency.
●● Session: Within a single client session reads are guaranteed to honor the consistent-prefix (assuming
a single “writer” session), monotonic reads, monotonic writes, read-your-writes, and write-follows-
reads guarantees. Clients outside of the session performing writes will see eventual consistency.

1 https://github.com/Azure/azure-cosmos-tla

●● Consistent prefix: Updates that are returned contain some prefix of all the updates, with no gaps.
Consistent prefix consistency level guarantees that reads never see out-of-order writes.
●● Eventual: There's no ordering guarantee for reads. In the absence of any further writes, the replicas
eventually converge.

Choose the right consistency level for your application
Azure Cosmos DB allows developers to choose among the five well-defined consistency models: strong,
bounded staleness, session, consistent prefix and eventual. Each of these consistency models is well-de-
fined, intuitive and can be used for specific real-world scenarios. Each of the five consistency models
provide precise availability and performance tradeoffs and are backed by comprehensive SLAs. The
following simple considerations will help you make the right choice in many common scenarios.

SQL API and Table API


Consider the following points if your application is built using SQL API or Table API:
●● For many real-world scenarios, session consistency is optimal and it's the recommended option.
●● If your application requires strong consistency, it is recommended that you use bounded staleness
consistency level.
●● If you need stricter consistency guarantees than the ones provided by session consistency and
single-digit-millisecond latency for writes, it is recommended that you use bounded staleness consist-
ency level.
●● If your application requires eventual consistency, it is recommended that you use consistent prefix
consistency level.
●● If you need less strict consistency guarantees than the ones provided by session consistency, it is
recommended that you use consistent prefix consistency level.
●● If you need the highest availability and the lowest latency, then use eventual consistency level.
●● If you need even higher data durability without sacrificing performance, you can create a custom
consistency level at the application layer.

Cassandra, MongoDB, and Gremlin APIs


●● For details on mapping between “Read Consistency Level” offered in Apache Cassandra and Cosmos
DB consistency levels, see Consistency levels and Cosmos DB APIs2.
●● For details on mapping between “Read Concern” of MongoDB and Azure Cosmos DB consistency
levels, see Consistency levels and Cosmos DB APIs3.

2 https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-across-apis#cassandra-mapping
3 https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-across-apis#mongo-mapping

Consistency guarantees in practice


In practice, you may often get stronger consistency guarantees. Consistency guarantees for a read
operation correspond to the freshness and ordering of the database state that you request. Read
consistency is tied to the ordering and propagation of the write/update operations.
●● When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients
always read the value of a previous write, with a lag bounded by the staleness window.
●● When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the latest committed value of the write operation.
●● For the remaining three consistency levels, the staleness window is largely dependent on your
workload. For example, if there are no write operations on the database, a read operation with
eventual, session, or consistent prefix consistency levels is likely to yield the same results as a read
operation with strong consistency level.
If your Azure Cosmos account is configured with a consistency level other than the strong consistency,
you can find out the probability that your clients may get strong and consistent reads for your workloads
by looking at the Probabilistically Bounded Staleness (PBS) metric.
Probabilistic bounded staleness shows how eventual your eventual consistency is. This metric provides
insight into how often you can get a stronger consistency than the consistency level that you have
currently configured on your Azure Cosmos account. In other words, you can see the probability (measured
in milliseconds) of getting strongly consistent reads for a combination of write and read regions.
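If you use the .NET SDK covered later in this module, you can also ask for a weaker consistency level at the client level without changing the account default; the SDK only accepts a level that is the same as or weaker than the account setting. A minimal sketch, assuming placeholder endpoint and key values:
using Microsoft.Azure.Cosmos;

// Placeholders for your Azure Cosmos account endpoint and key.
string endpoint = "<your endpoint here>";
string key = "<your primary key>";

// Request eventual consistency for all operations issued through this client,
// which may only relax (never strengthen) the account's default level.
CosmosClientOptions options = new CosmosClientOptions
{
    ConsistencyLevel = ConsistencyLevel.Eventual
};

CosmosClient client = new CosmosClient(endpoint, key, options);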

Azure Cosmos DB supported APIs


Today, Azure Cosmos DB can be accessed by using five different APIs. The underlying data structure in
Azure Cosmos DB is a data model based on atom record sequences, which enables Azure Cosmos DB to
support multiple data models. Because of the flexible nature of atom record sequences, Azure Cosmos
DB will be able to support many more models and APIs over time.

MongoDB API
Azure Cosmos DB automatically indexes data without requiring you to deal with schema and index
management. It is multi-model and supports document, key-value, graph, and columnar data models. By
default, you can interact with Cosmos DB using SQL API. Additionally, the Cosmos DB service implements
wire protocols for common NoSQL APIs including Cassandra, MongoDB, Gremlin, and Azure Table
Storage. This allows you to use your familiar NoSQL client drivers and tools to interact with your Cosmos
database.
By default, new accounts created using Azure Cosmos DB's API for MongoDB are compatible with version
3.6 of the MongoDB wire protocol. Any MongoDB client driver that understands this protocol version
should be able to natively connect to Cosmos DB.

Table API
The Table API in Azure Cosmos DB is a key-value database service built to provide premium capabilities
(for example, automatic indexing, guaranteed low latency, and global distribution) to existing Azure Table
storage applications without making any app changes.

Gremlin API
The Azure Cosmos DB Gremlin API is used to store and operate with graph data on a fully managed
database service designed for any scale. Some of the features offered by the Gremlin API are: elastically
scalable throughput and storage, multi-region replication, fast queries and traversals, fully managed
graph database, automatic indexing, compatibility with Apache TinkerPop, and tunable consistency levels.

Apache Cassandra API


Azure Cosmos DB Cassandra API can be used as the data store for apps written for Apache Cassandra.
This means that by using existing Apache drivers compliant with CQLv4, your existing Cassandra applica-
tion can now communicate with the Azure Cosmos DB Cassandra API. In many cases, you can switch from
using Apache Cassandra to using Azure Cosmos DB's Cassandra API by just changing a connection
string.
The Cassandra API enables you to interact with data stored in Azure Cosmos DB using the Cassandra
Query Language (CQL), Cassandra-based tools (like cqlsh), and Cassandra client drivers that you're
already familiar with.

SQL API
The SQL API in Azure Cosmos DB is a JavaScript and JavaScript Object Notation (JSON) native API based
on the Azure Cosmos DB database engine. The SQL API also provides query capabilities rooted in the
familiar SQL query language. Using SQL, you can query for documents based on their identifiers or make
deeper queries based on properties of the document, complex objects, or even the existence of specific
properties. The SQL API supports the execution of JavaScript logic within the database in the form of
stored procedures, triggers, and user-defined functions.

Request Units in Azure Cosmos DB


With Azure Cosmos DB, you pay for the throughput you provision and the storage you consume on an
hourly basis. Throughput must be provisioned to ensure that sufficient system resources are available for
your Azure Cosmos database at all times.
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units
(or RUs, for short). You can think of RUs per second as the currency for throughput. RUs per second is a
rate-based currency. It abstracts the system resources such as CPU, IOPS, and memory that are required
to perform the database operations supported by Azure Cosmos DB.
The cost to read a 1 KB item is 1 Request Unit (or 1 RU). All other database operations are similarly
assigned a cost using RUs. No matter which API you use to interact with your Azure Cosmos container,
costs are always measured by RUs. Whether the database operation is a write, read, or query, costs are
always measured in RUs.
The following image shows the high-level idea of RUs:

You provision the number of RUs for your application on a per-second basis in increments of 100 RUs per
second. To scale the provisioned throughput for your application, you can increase or decrease the
number of RUs at any time.
You can provision throughput at two distinct granularities:
●● Containers
●● Databases
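Every response from the service reports the RUs an operation consumed, so you can observe these charges directly in code. The following sketch is illustrative only: it assumes an existing Microsoft.Azure.Cosmos Container reference named container, plus a SalesOrder item type like the one used in the SDK examples later in this module; the item id and partition key values are placeholders.
// Read an item and inspect how many request units the operation consumed.
ItemResponse<SalesOrder> response =
    await container.ReadItemAsync<SalesOrder>("order-1", new PartitionKey("Account1"));

Console.WriteLine($"Read consumed {response.RequestCharge} RUs");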

Azure Cosmos DB data structure


Azure Cosmos DB resource hierarchy
You can manage data in your account by creating databases, containers, and items. The following image
shows the hierarchy of different entities in an Azure Cosmos DB account:

Azure Cosmos databases


You can create one or multiple Azure Cosmos databases under your account. A database is analogous to
a namespace. A database is the unit of management for a set of Azure Cosmos containers. The following
table shows how an Azure Cosmos database is mapped to various API-specific entities:

Azure Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Azure Cosmos database | Database | Keyspace | Database | Database | NA
✔️ Note: With Table API accounts, when you create your first table, a default database is automatically
created in your Azure Cosmos account.

Operations on an Azure Cosmos database


Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Enumerate all databases | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Read database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Create new database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Update database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA

Azure Cosmos containers


An Azure Cosmos container is the unit of scalability both for provisioned throughput and storage. A
container is horizontally partitioned and then replicated across multiple regions. The items that you add
to the container and the throughput that you provision on it are automatically distributed across a set of
logical partitions based on the partition key. An Azure Cosmos container can scale elastically, whether
you create containers by using dedicated or shared provisioned throughput modes. An Azure Cosmos
container is a schema-agnostic container of items. Items in a container can have arbitrary schemas.
An Azure Cosmos container is specialized into API-specific entities as shown in the following table:

Azure Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Azure Cosmos container | Container | Table | Collection | Graph | Table

Operations on an Azure Cosmos container


Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Enumerate containers in a database | Yes | Yes | Yes | Yes | NA | NA
Read a container | Yes | Yes | Yes | Yes | NA | NA
Create a new container | Yes | Yes | Yes | Yes | NA | NA
Update a container | Yes | Yes | Yes | Yes | NA | NA
Delete a container | Yes | Yes | Yes | Yes | NA | NA

Azure Cosmos items


Depending on which API you use, an Azure Cosmos item can represent either a document in a collection,
a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific
entities to an Azure Cosmos item:

Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Azure Cosmos item | Document | Row | Document | Node or edge | Item

Operations on items
Azure Cosmos items support the following operations. You can use any of the Azure Cosmos APIs to
perform the operations.

Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Insert, Replace, Delete, Upsert, Read | No | Yes | Yes | Yes | Yes | Yes

Partitioning in Azure Cosmos DB


Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance
needs of your application. In partitioning, the items in a container are divided into distinct subsets called
logical partitions. Logical partitions are formed based on the value of a partition key that is associated
with each item in a container. All items in a logical partition have the same partition key value.

Logical partitions
A logical partition consists of a set of items that have the same partition key. For example, in a container
where all items contain a City property, you can use City as the partition key for the container. Groups
of items that have specific values for City, such as London, Paris, and NYC, form distinct logical
partitions. You don't have to worry about deleting a partition when the underlying data is deleted.
In Azure Cosmos DB, a container is the fundamental unit of scalability. Data that's added to the container
and the throughput that you provision on the container are automatically (horizontally) partitioned across
a set of logical partitions. Data and throughput are partitioned based on the partition key you specify for
the Azure Cosmos container.

Physical partitions
An Azure Cosmos container is scaled by distributing data and throughput across a large number of
logical partitions. Internally, one or more logical partitions are mapped to a physical partition that
consists of a set of replicas, also referred to as a replica set. Each replica set hosts an instance of the Azure
Cosmos database engine. A replica set makes the data stored within the physical partition durable, highly
available, and consistent. A physical partition supports a fixed maximum amount of storage and request
units (RUs). Each replica that makes up the physical partition inherits the partition's storage quota. All
replicas of a physical partition collectively support the throughput that's allocated to the physical
partition.
The following image shows how logical partitions are mapped to physical partitions that are distributed
globally:

Throughput provisioned for a container is divided evenly among physical partitions. A partition key
design that doesn't distribute the throughput requests evenly might create “hot” partitions. Hot partitions
might result in rate-limiting and in inefficient use of the provisioned throughput, and higher costs.
Unlike logical partitions, physical partitions are an internal implementation of the system. You can't
control the size, placement, or count of physical partitions, and you can't control the mapping between
logical partitions and physical partitions. However, you can control the number of logical partitions and
the distribution of data, workload and throughput by choosing the right logical partition key.

Choosing a partition key


The following is good guidance for choosing a partition key:
●● A single logical partition has an upper limit of 10 GB of storage.

●● Azure Cosmos containers have a minimum throughput of 400 request units per second (RU/s). When
throughput is provisioned on a database, minimum RUs per container is 100 request units per second
(RU/s). Requests to the same partition key can't exceed the throughput that's allocated to a partition.
If requests exceed the allocated throughput, requests are rate-limited. So, it's important to pick a
partition key that doesn't result in “hot spots” within your application.
●● Choose a partition key that has a wide range of values and access patterns that are evenly spread
across logical partitions. This helps spread the data and the activity in your container across the set of
logical partitions, so that resources for data storage and throughput can be distributed across the
logical partitions.
●● Choose a partition key that spreads the workload evenly across all partitions and evenly over time.
Your choice of partition key should balance the need for efficient partition queries and transactions
against the goal of distributing items across multiple partitions to achieve scalability.
●● Candidates for partition keys might include properties that appear frequently as a filter in your
queries. Queries can be efficiently routed by including the partition key in the filter predicate.

Create a synthetic partition key


It's a best practice to have a partition key with many distinct values, such as hundreds or thousands.
The goal is to distribute your data and workload evenly across the items associated with these partition
key values. If such a property doesn’t exist in your data, you can construct a synthetic partition key. This
document describes several basic techniques for generating a synthetic partition key for your Cosmos
container.

Concatenate multiple properties of an item


You can form a partition key by concatenating multiple property values into a single artificial
partitionKey property. These keys are referred to as synthetic keys. For example, consider the following
example document:
{
"deviceId": "abc-123",
"date": 2018
}

One option is to set /deviceId or /date as the partition key. Another option is to concatenate these
two values into a synthetic partitionKey property that's used as the partition key.
{
"deviceId": "abc-123",
"date": 2018,
"partitionKey": "abc-123-2018"
}

In real-time scenarios, you can have thousands of items in a database. Instead of adding the synthetic key
manually, define client-side logic to concatenate values and insert the synthetic key into the items in your
Cosmos containers.
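A minimal sketch of that client-side logic, using a hypothetical DeviceReading class that you would define in your own application:
public class DeviceReading
{
    public string id { get; set; }
    public string deviceId { get; set; }
    public int date { get; set; }

    // Synthetic partition key built by concatenating two properties,
    // e.g. "abc-123-2018" for the example document above.
    public string partitionKey => $"{deviceId}-{date}";
}

// Example usage inside your application code:
// var reading = new DeviceReading { id = "1", deviceId = "abc-123", date = 2018 };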

Use a partition key with a random suffix


Another possible strategy to distribute the workload more evenly is to append a random number at the
end of the partition key value. When you distribute items in this way, you can perform parallel write
operations across partitions.
An example is if a partition key represents a date. You might choose a random number between 1 and
400 and concatenate it as a suffix to the date. This method results in partition key values like 2018-08-09.1,
2018-08-09.2, and so on, through 2018-08-09.400. Because you randomize the partition key,
the write operations on the container on each day are spread evenly across multiple partitions. This
method results in better parallelism and overall higher throughput.
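A short sketch of the random-suffix idea, assuming the partition key is a date string:
// Append a random suffix between 1 and 400 so that writes for the same date
// are spread across many logical partitions, e.g. "2018-08-09.237".
var random = new Random();
string date = "2018-08-09";
string partitionKey = $"{date}.{random.Next(1, 401)}";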

Use a partition key with pre-calculated suffixes


The random suffix strategy can greatly improve write throughput, but it's difficult to read a specific item.
You don't know the suffix value that was used when you wrote the item. To make it easier to read individ-
ual items, use the pre-calculated suffixes strategy. Instead of using a random number to distribute the
items among the partitions, use a number that is calculated based on something that you want to query.
Consider the previous example, where a container uses a date as the partition key. Now suppose that
each item has a Vehicle-Identification-Number (VIN) attribute that we want to access. Further,
suppose that you often run queries to find items by the VIN, in addition to date. Before your application
writes the item to the container, it can calculate a hash suffix based on the VIN and append it to the
partition key date. The calculation might generate a number between 1 and 400 that is evenly distributed.
This result is similar to the results produced by the random suffix strategy method. The partition key
value is then the date concatenated with the calculated result.
With this strategy, the writes are evenly spread across the partition key values, and across the partitions.
You can easily read a particular item and date, because you can calculate the partition key value for a
specific Vehicle-Identification-Number. The benefit of this method is that you can avoid creating
a single hot partition key, i.e., a partition key that takes all the workload.
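One way to sketch the pre-calculated suffix, assuming each item carries a VIN string (the VIN below is a placeholder). A deterministic hash matters here: the same VIN must always produce the same suffix, so this sketch avoids string.GetHashCode(), which is randomized per process in .NET Core:
// Map a VIN to a stable suffix between 1 and 400.
static int StableSuffix(string vin)
{
    int hash = 0;
    foreach (char c in vin)
        hash = (hash * 31 + c) % 400;
    return hash + 1;
}

static string GetPartitionKey(string date, string vin) =>
    $"{date}.{StableSuffix(vin)}";

// Both the writer and any reader can recompute the same partition key value.
string pk = GetPartitionKey("2018-08-09", "1HGCM82633A004352");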

Working with Azure Cosmos DB resources and data
Working with Azure Cosmos DB overview
The following image shows the hierarchy of different entities in an Azure Cosmos DB account:

Azure Cosmos databases


You can create one or multiple Azure Cosmos databases under your account. A database is analogous to
a namespace. A database is the unit of management for a set of Azure Cosmos containers.

Operations on an Azure Cosmos database


You can interact with an Azure Cosmos database with Azure Cosmos APIs as described in the following
table:

Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Enumerate all databases | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Read database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Create new database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA
Update database | Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA

Azure Cosmos containers


An Azure Cosmos container is the unit of scalability both for provisioned throughput and storage. A
container is horizontally partitioned and then replicated across multiple regions. The items that you add
to the container and the throughput that you provision on it are automatically distributed across a set of
logical partitions based on the partition key.
When you create an Azure Cosmos container, you configure throughput in one of the following modes:
●● Dedicated provisioned throughput mode: The throughput provisioned on a container is exclusively
reserved for that container and it is backed by the SLAs.
●● Shared provisioned throughput mode: These containers share the provisioned throughput with the
other containers in the same database (excluding containers that have been configured with dedicat-
ed provisioned throughput). In other words, the provisioned throughput on the database is shared
among all the “shared throughput” containers.
✔️ Note: You can configure shared and dedicated throughput only when creating the database and
container. To switch from dedicated throughput mode to shared throughput mode (and vice versa) after
the container is created, you have to create a new container and migrate the data to the new container.
You can migrate the data by using the Azure Cosmos DB change feed feature.
An Azure Cosmos container can scale elastically, whether you create containers by using dedicated or
shared provisioned throughput modes.

Azure Cosmos items


Depending on which API you use, an Azure Cosmos item can represent either a document in a collection,
a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific
entities to an Azure Cosmos item:

Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API
Azure Cosmos item | Document | Row | Document | Node or edge | Item

Demo: Create Azure Cosmos DB resources by using the Azure Portal
In this demo you'll learn how to perform the following actions in the Azure Portal:
●● Create an Azure Cosmos DB account
●● Add a database and a container
●● Add data to your database

Prerequisites
This demo is performed in the Azure Portal.

Login to Azure
1. Login to the Azure Portal. https://portal.azure.com

Create an Azure Cosmos DB account


1. In the Azure search bar, search for and select Azure Cosmos DB.

2. Select Add.
3. On the Create Azure Cosmos DB Account page, enter the basic settings for the new Azure Cosmos
account.
●● Subscription: Select the subscription for your pass.
●● Resource Group: Select Create new, then enter az204-cosmos-rg.
●● Account Name: Enter a unique name to identify your Azure Cosmos
account. The name can only contain lowercase letters, numbers, and the hyphen (-) character. It
must be between 3-31 characters in length.
●● API: Select Core (SQL) to create a document database and query by using SQL syntax.
●● Location: Use the location that is closest to your users to give them the fastest access to the data.
4. Select Review + create. You can skip the Network and Tags sections.
5. Review the account settings, and then select Create. It takes a few minutes to create the account. Wait
for the portal page to display Your deployment is complete.
6. Select Go to resource to go to the Azure Cosmos DB account page.

Add a database and a container


You can use the Data Explorer in the Azure portal to create a database and container.
1. Select Data Explorer from the left navigation on your Azure Cosmos DB account page, and then
select New Container.
2. In the Add container pane, enter the settings for the new container.
●● Database ID: Enter ToDoList and check the Provision database throughput option, it allows you
to share the throughput provisioned to the database across all the containers within the database.
●● Throughput: Enter 400
●● Container ID: Enter Items
●● Partition key: Enter /category. The samples in this demo use /category as the partition key.
✔️ Note: Don't add Unique keys for this example. Unique keys let you add a layer of data integrity to
the database by ensuring the uniqueness of one or more values per partition key.
3. Select OK. The Data Explorer displays the new database and the container that you created.

Add data to your database


Add data to your new database using Data Explorer.
1. In Data Explorer, expand the ToDoList database, and expand the Items container. Next, select Items,
and then select New Item.

2. Add the following structure to the document on the right side of the Documents pane:
{
"id": "1",
"category": "personal",
"name": "groceries",
"description": "Pick up apples and strawberries.",
"isComplete": false
}

3. Select Save.
4. Select New Item again, and create and save another document with a unique id, and any other
properties and values you want. Your documents can have any structure, because Azure Cosmos DB
doesn't impose any schema on your data.
❗️ Important: Don't delete the Azure Cosmos DB account or the az204-cosmos-rg resource group just
yet, we'll use it for another demo later in this lesson.

Manage data in Azure Cosmos DB by using the Microsoft .NET SDK
This part of the lesson focuses on version 3.0 of the .NET SDK (the Microsoft.Azure.Cosmos NuGet
package). If you're familiar with the previous version of the .NET SDK, you may be used to the terms
collection and document.
Because Azure Cosmos DB supports multiple API models, version 3.0 of the .NET SDK uses the generic
terms “container” and “item”. A container can be a collection, graph, or table. An item can be a document,
edge/vertex, or row, and is the content inside a container.
Below are examples showing some of the key operations you should be familiar with. For more examples,
please visit the GitHub repository linked in the Additional resources section below. The examples below
all use the async version of the methods.

CosmosClient
Creates a new CosmosClient with a connection string. CosmosClient is thread-safe. It's recommended
to maintain a single instance of CosmosClient per lifetime of the application, which enables efficient
connection management and performance.
CosmosClient client = new CosmosClient(endpoint, key);

Database examples

Create a database
The CosmosClient.CreateDatabaseIfNotExistsAsync checks if a database exists, and if it
doesn't, creates it. Only the database id is used to verify if there is an existing database.
// An object containing relevant information about the response
DatabaseResponse databaseResponse = await client.CreateDatabaseIfNotExistsAsync(databaseId, 10000);

// A client side reference object that allows additional operations like ReadAsync
Database database = databaseResponse;

Read a database by ID
Reads a database from the Azure Cosmos service as an asynchronous operation.

DatabaseResponse readResponse = await database.ReadAsync();

Delete a database
Delete a Database as an asynchronous operation.
await database.DeleteAsync();

Container examples

Create a container
The Database.CreateContainerIfNotExistsAsync method checks if a container exists, and if it
doesn't, it creates it. Only the container id is used to verify if there is an existing container.
// Set throughput to the minimum value of 400 RU/s
ContainerResponse simpleContainer = await database.CreateContainerIfNotExistsAsync(
id: containerId,
partitionKeyPath: partitionKey,
throughput: 400);

Get a container by ID
Container container = database.GetContainer(containerId);
ContainerProperties containerProperties = await container.ReadContainerAsync();

Delete a container
Delete a Container as an asynchronous operation.
await database.GetContainer(containerId).DeleteContainerAsync();

Item examples

Create an item
Use the Container.CreateItemAsync method to create an item. The method requires a JSON
serializable object that must contain an id property, and a partitionKey.
ItemResponse<SalesOrder> response = await container.CreateItemAsync(salesOrder, new PartitionKey(salesOrder.AccountNumber));
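The SalesOrder type in these snippets is simply a JSON-serializable class that you define yourself; a minimal sketch might look like the following (the SDK serializes with Newtonsoft.Json by default, so the id property is mapped explicitly to lowercase):
public class SalesOrder
{
    // Every Azure Cosmos item must have an "id" property when serialized.
    [Newtonsoft.Json.JsonProperty("id")]
    public string Id { get; set; }

    // Used as the partition key value in these examples.
    public string AccountNumber { get; set; }

    public decimal Total { get; set; }
}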

Read an item
Use the Container.ReadItemAsync method to read an item. The method requires a type to serialize
the item to, along with an id property and a partitionKey.
string id = "[id]";
string accountNumber = "[partition-key]";
ItemResponse<SalesOrder> response = await container.ReadItemAsync<SalesOrder>(id, new PartitionKey(accountNumber));

Query an item
The Container.GetItemQueryIterator method creates a query for items under a container in an
Azure Cosmos database using a SQL statement with parameterized values. It returns a FeedIterator.
QueryDefinition query = new QueryDefinition(
"select * from sales s where s.AccountNumber = @AccountInput ")
.WithParameter("@AccountInput", "Account1");

FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(
    query,
    requestOptions: new QueryRequestOptions()
    {
        PartitionKey = new PartitionKey("Account1"),
        MaxItemCount = 1
    });
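Creating the iterator doesn't execute the query; you drain it page by page. A short sketch continuing from the resultSet variable above (and the hypothetical SalesOrder class sketched earlier):
// Each ReadNextAsync call is one round trip that returns a page of results.
while (resultSet.HasMoreResults)
{
    FeedResponse<SalesOrder> page = await resultSet.ReadNextAsync();
    foreach (SalesOrder order in page)
    {
        Console.WriteLine($"Account: {order.AccountNumber}, Total: {order.Total}");
    }
}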

Additional resources
●● The azure-cosmos-dotnet-v34 GitHub repository includes the latest .NET sample solutions to
perform CRUD and other common operations on Azure Cosmos DB resources.
●● Visit this article Azure Cosmos DB.NET V3 SDK (Microsoft.Azure.Cosmos) examples for the SQL
API5 for direct links to specific examples in the GitHub repository.

Demo: Working with Azure Cosmos DB by using code
In this demo you'll create a console app to perform the following operations in Azure Cosmos DB:
●● Connect to an Azure Cosmos DB account
●● Create a database
●● Create a container

Prerequisites
This demo is performed in Visual Studio Code on the virtual machine.

4 https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage
5 https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-dotnet-v3sdk-samples

Retrieve Azure Cosmos DB account keys


1. Login to the Azure portal. https://portal.azure.com
2. Navigate to the Azure Cosmos DB account you created in the Create Azure Cosmos DB resources by
using the Azure Portal demo.
3. Select Keys in the left navigation panel. Leave the browser open so you can copy the needed informa-
tion later in this demo.

Set up the console application


1. Open a PowerShell terminal.
2. Create a folder for the project and change into the folder.
md az204-cosmosdemo

cd az204-cosmosdemo

3. Create the .NET console app.


dotnet new console

4. Open Visual Studio Code and open the az204-cosmosdemo folder.


code .

Build the console app

Add packages and using statements


1. Add the Microsoft.Azure.Cosmos package to the project in a terminal in VS Code.
dotnet add package Microsoft.Azure.Cosmos

2. Add using statements to include Microsoft.Azure.Cosmos and to enable async operations.


using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

3. Change the Main method to enable async.


public static async Task Main(string[] args)

4. Delete the existing code from the Main method.

Add code to connect to an Azure Cosmos DB account


1. Add these constants and variables into your Program class.
public class Program
{

// The Azure Cosmos DB endpoint for running this sample.


private static readonly string EndpointUri = "<your endpoint here>";
// The primary key for the Azure Cosmos account.
private static readonly string PrimaryKey = "<your primary key>";

// The Cosmos client instance


private CosmosClient cosmosClient;

// The database we will create


private Database database;

// The container we will create.


private Container container;

// The name of the database and container we will create


private string databaseId = "az204Database";
private string containerId = "az204Container";
}

2. In Program.cs, replace <your endpoint here> with the value of URI. Replace <your primary
key> with the value of PRIMARY KEY. You get these values from the browser window you left open
above.
3. Below the Main method, add a new asynchronous task called CosmosDemoAsync, which instantiates
our new CosmosClient.
public async Task CosmosDemoAsync()
{
// Create a new instance of the Cosmos Client
this.cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
}

4. Add the following code to the Main method to run the CosmosDemoAsync asynchronous task. The
Main method catches exceptions and writes them to the console.
public static async Task Main(string[] args)
{
try
{
Console.WriteLine("Beginning operations...\n");
Program p = new Program();
await p.CosmosDemoAsync();

}
catch (CosmosException de)
{
Exception baseException = de.GetBaseException();
Console.WriteLine("{0} error occurred: {1}", de.StatusCode, de);
}
catch (Exception e)
{
Console.WriteLine("Error: {0}", e);

}
finally
{
Console.WriteLine("End of demo, press any key to exit.");
Console.ReadKey();
}
}

5. Save your work and, in a terminal in VS Code, run the dotnet run command.
The console displays the message: End of demo, press any key to exit. This message confirms that
your application made a connection to Azure Cosmos DB.

Create a database
1. Copy and paste the CreateDatabaseAsync method below your CosmosDemoAsync method.
CreateDatabaseAsync creates a new database with the ID az204Database (specified by the
databaseId field) if it doesn't already exist.
private async Task CreateDatabaseAsync()
{
// Create a new database
this.database = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
Console.WriteLine("Created Database: {0}\n", this.database.Id);
}

2. Copy and paste the code below where you instantiate the CosmosClient to call the
CreateDatabaseAsync method you just added.
// Runs the CreateDatabaseAsync method
await this.CreateDatabaseAsync();

3. Save your work and, in a terminal in VS Code, run the dotnet run command. The console displays
the message: Created Database: az204Database

Create a container
1. Copy and paste the CreateContainerAsync method below your CreateDatabaseAsync
method.
private async Task CreateContainerAsync()
{
// Create a new container
this.container = await this.database.CreateContainerIfNotExistsAsync(containerId, "/LastName");
Console.WriteLine("Created Container: {0}\n", this.container.Id);
}

2. Copy and paste the code below where you instantiated the CosmosClient to call the
CreateContainerAsync method you just added.
// Run the CreateContainerAsync method
await this.CreateContainerAsync();

3. Save your work and, in a terminal in VS Code, run the dotnet run command. The console displays
the message: Created Container: az204Container
✔️ Note: You can verify the results by returning to your browser and selecting Browse in the Containers
section in the left navigation. You may need to select Refresh.

Wrapping up
You can now safely delete the az204-cosmos-rg resource group from your account.

Lab and review questions


Lab: Constructing a polyglot data solution

Lab scenario
You have been assigned the task of updating your company’s existing retail web application to use more
than one data service in Microsoft Azure. Your company’s goal is to take advantage of the best data
service for each application component. After conducting thorough research, you decide to migrate your
inventory database from Azure SQL Database to Azure Cosmos DB.

Objectives
After you complete this lab, you will be able to:
●● Create instances of various database services by using the Azure portal.
●● Write C# code to connect to SQL Database.
●● Write C# code to connect to Azure Cosmos DB.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 4 review questions


Review Question 1
Which of the below options would contain triggers and stored procedures in the Azure Cosmos DB
hierarchy?
†† Database Accounts
†† Databases
†† Containers
†† Items

Review Question 2
The cost of all database operations is abstracted and normalized by Azure Cosmos DB and is expressed by
which of the options below?
†† Input/Output Operations Per Second (IOPS)
†† CPU usage
†† Request Units (RU)
†† Read requests

Review Question 3
Azure Cosmos DB allows developers to choose among the five well-defined consistency models: strong,
bounded staleness, session, consistent prefix and eventual. Which of the statements below describing these
models are true?
(Select all that apply.)
†† When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a latest committed value.
†† When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a previous write.
†† When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the latest committed value of the write operation.
†† When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the previous committed value of the write operation.

Review Question 4
Partition keys should have many distinct values. If such a property doesn’t exist in your data, you can
construct a synthetic partition key. Which of the following methods create a synthetic key?
†† Partition key with random suffix
†† Partition key with pre-calculated suffixes
†† Concatenate multiple properties of an item
†† None of the above

Answers
Review Question 1
Which of the below options would contain triggers and stored procedures in the Azure Cosmos DB
hierarchy?
†† Database Accounts
†† Databases
■■ Containers
†† Items
Explanation
Stored procedures, user-defined functions, triggers, etc., are stored at the container level.
Review Question 2
The cost of all database operations is abstracted and normalized by Azure Cosmos DB and is expressed
by which of the options below?
†† Input/Output Operations Per Second (IOPS)
†† CPU usage
■■ Request Units (RU)
†† Read requests
Explanation
The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units
(or RUs, for short).
Review Question 3
Azure Cosmos DB allows developers to choose among the five well-defined consistency models: strong,
bounded staleness, session, consistent prefix and eventual. Which of the statements below describing these
models are true?
(Select all that apply.)
†† When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a latest committed value.
■■ When the consistency level is set to bounded staleness, Cosmos DB guarantees that the clients always
read the value of a previous write.
■■ When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the latest committed value of the write operation.
†† When the consistency level is set to strong, the staleness window is equivalent to zero, and the clients
are guaranteed to read the previous committed value of the write operation.
Explanation

Review Question 4
Partition keys should have many distinct values. If such a property doesn’t exist in your data, you can
construct a synthetic partition key. Which of the following methods create a synthetic key?
†† Partition key with random suffix
†† Partition key with pre-calculated suffixes
■■ Concatenate multiple properties of an item
†† None of the above
Explanation
You can form a partition key by concatenating multiple property values into a single artificial partitionKey
property. These keys are referred to as synthetic keys.
Module 5 Implement IaaS solutions

Provisioning VMs in Azure


Introduction to Azure Virtual Machines
Azure Virtual Machines is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you'll choose a virtual machine when you need more control over the computing environment than choices such as App Service or Cloud Services offer. Azure Virtual Machines provide you with an operating system, storage, and networking capabilities and can run a wide range of applications.
Virtual machines are part of the Infrastructure as a Service (IaaS) offering. IaaS is an instant computing infrastructure, provisioned and managed over the Internet. You can quickly scale it up and down with demand and pay only for what you use.

There are lots of business scenarios for IaaS.


●● Test and development. Teams can quickly set up and dismantle test and development environments, bringing new applications to market faster. IaaS makes it quick and economical to scale dev-test environments up and down.
●● Website hosting. Running websites using IaaS can be less expensive than traditional web hosting.
●● Storage, backup, and recovery. Organizations avoid the capital outlay for storage and complexity of
storage management, which typically requires a skilled staff to manage data and meet legal and
compliance requirements. IaaS is useful for handling unpredictable demand and steadily growing
storage needs. It can also simplify planning and management of backup and recovery systems.
●● Web apps. IaaS provides all the infrastructure to support web apps, including storage, web and
application servers, and networking resources. Organizations can quickly deploy web apps on IaaS
and easily scale infrastructure up and down when demand for the apps is unpredictable.
●● High-performance computing. High-performance computing (HPC) on supercomputers, computer
grids, or computer clusters helps solve complex problems involving millions of variables or calcula-
tions. Examples include earthquake and protein folding simulations, climate and weather predictions,
financial modeling, and evaluating product designs.
●● Big data analysis. Big data is a popular term for massive data sets that contain potentially valuable
patterns, trends, and associations. Mining data sets to locate or tease out these hidden patterns
requires a huge amount of processing power, which IaaS economically provides.
●● Extended Datacenter. Add capacity to your datacenter by adding virtual machines in Azure instead
of incurring the costs of physically adding hardware or space to your physical location. Connect your
physical network to the Azure cloud network seamlessly.

Azure Virtual Machine creation checklist


There are always a lot of design considerations when you build out an application infrastructure in Azure.
These aspects of a VM are important to think about before you start:
●● The names of your application resources
●● The location where the resources are stored
●● The size of the VM
●● The maximum number of VMs that can be created
●● The operating system that the VM runs
●● The configuration of the VM after it starts
●● The related resources that the VM needs

Naming
A virtual machine has a name assigned to it and it has a computer name configured as part of the
operating system. The name of a VM can be up to 15 characters.
If you use Azure to create the operating system disk, the computer name and the virtual machine name
are the same. If you upload and use your own image that contains a previously configured operating
system and use it to create a virtual machine, the names can be different. We recommend that when you
upload your own image file, you make the computer name in the operating system and the virtual
machine name the same.

Locations
All resources created in Azure are distributed across multiple geographical regions around the world.
Usually, the region is called location when you create a VM. For a VM, the location specifies where the
virtual hard disks are stored.
This table shows some of the ways you can get a list of available locations.
Method              Description
Azure portal        Select a location from the list when you create a VM.
Azure PowerShell    Use the Get-AzLocation command.
REST API            Use the List locations operation.
Azure CLI           Use the az account list-locations operation.
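For example, either of the following commands returns the list of available locations; this is a quick sketch, and the exact output columns can vary by Azure CLI or Az PowerShell module version:
az account list-locations --output table
Get-AzLocation | Select-Object Location, DisplayName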

VM size
The size of the VM that you use is determined by the workload that you want to run. The size that you
choose then determines factors such as processing power, memory, and storage capacity. Azure offers a
wide variety of sizes to support many types of uses.
Azure charges an hourly price based on the VM’s size and operating system. For partial hours, Azure
charges only for the minutes used. Storage is priced and charged separately.

VM Limits
Your subscription has default quota limits in place that could impact the deployment of many VMs for your project. The current limit on a per-subscription basis is 20 VMs per region. Limits can be raised by filing a support ticket requesting an increase.
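Before a large deployment, you can compare your current usage against the regional quota. A minimal sketch using the Azure CLI (the region name is only an example):
az vm list-usage --location eastus --output table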

Operating system disks and images


Virtual machines use virtual hard disks (VHDs) to store their operating system (OS) and data. VHDs are
also used for the images you can choose from to install an OS.
Azure provides many marketplace images to use with various versions and types of Windows Server
operating systems. Marketplace images are identified by image publisher, offer, sku, and version (typically
version is specified as latest). Only 64-bit operating systems are supported.
For more information on the supported guest operating systems, roles, and features, see Microsoft
server software support for Microsoft Azure virtual machines1.

Extensions
VM extensions give your VM additional capabilities through post deployment configuration and auto-
mated tasks.
These common tasks can be accomplished using extensions:
●● Run custom scripts – The Custom Script Extension helps you configure workloads on the VM by
running your script when the VM is provisioned.
●● Deploy and manage configurations – The PowerShell Desired State Configuration (DSC) Extension
helps you set up DSC on a VM to manage configurations and environments.
●● Collect diagnostics data – The Azure Diagnostics Extension helps you configure the VM to collect
diagnostics data that can be used to monitor the health of your application.
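As an illustration, the Custom Script Extension mentioned above can be added to an existing Windows VM from the Azure CLI. The resource group, VM name, and command below are placeholders, so treat this as a sketch rather than part of the official lab steps:
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --settings '{"commandToExecute": "powershell.exe -Command New-Item -ItemType Directory -Path C:\\ExtensionDemo -Force"}'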

1 https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines
Related resources
The resources in this table are used by the VM and need to exist or be created when the VM is created.

Resource            Required   Description
Resource group      Yes        The VM must be contained in a resource group.
Storage account     Yes        The VM needs the storage account to store its virtual hard disks.
Virtual network     Yes        The VM must be a member of a virtual network.
Public IP address   No         The VM can have a public IP address assigned to it to remotely access it.
Network interface   Yes        The VM needs the network interface to communicate in the network.
Data disks          No         The VM can include data disks to expand storage capabilities.

Azure Virtual Machine sizing


The best way to determine the appropriate VM size is to consider the type of workload your VM needs to
run. Based on the workload, you're able to choose from a subset of available VM sizes. Workload options
are classified as follows on Azure:

VM Type                    Family                               Description
General Purpose            B, Dsv3, Dv3, DSv2, Dv2, Av2, DC     General-purpose VMs are designed to have a balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
Compute Optimized          Fsv2, Fs, F                          Compute optimized VMs are designed to have a high CPU-to-memory ratio. Suitable for medium traffic web servers, network appliances, batch processes, and application servers.
Memory Optimized           Esv3, Ev3, M, GS, G, DSv2, Dv2       Memory optimized VMs are designed to have a high memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
Storage Optimized          Lsv2, Ls                             Storage optimized VMs are designed to have high disk throughput and IO. Ideal for VMs running databases.
GPU                        NV, NVv2, NC, NCv2, NCv3, ND, NDv2   GPU VMs are specialized virtual machines targeted for heavy graphics rendering and video editing. These VMs are ideal options for model training and inferencing with deep learning.
High Performance Compute   H                                    High performance compute VMs offer the fastest and most powerful CPUs, with optional high-throughput network interfaces.
What if my size needs change?
Azure allows you to change the VM size when the existing size no longer meets your needs. You can
resize the VM - as long as your current hardware configuration is allowed in the new size. This provides a
fully agile and elastic approach to VM management.
If you stop and deallocate the VM, you can then select any size available in your region since this re-
moves your VM from the cluster it was running on.
✔️ Be cautious when resizing production VMs - they will be rebooted automatically which can cause a
temporary outage and change some configuration settings such as the IP address.
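For example, a running VM can be resized from the Azure CLI; the names and the target size below are placeholders, and you would normally list the sizes available to the VM first:
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table
az vm resize --resource-group myResourceGroup --name myVM --size Standard_DS3_v2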

For more information:


●● Sizes for Windows virtual machines in Azure
●● https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json#size-tables
●● Sizes for Linux virtual machines in Azure
●● https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

Availability options for virtual machines in Azure


Availability sets
An availability set is a logical grouping of VMs within a datacenter that allows Azure to understand how your application is built to provide for redundancy and availability. We recommend that two or more VMs are created within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA. There is no cost for the availability set itself; you only pay for each VM instance that you create. When a single VM is using Azure premium SSDs, the Azure SLA applies for unplanned maintenance events.

2 https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
An availability set is composed of two additional groupings that protect against hardware failures and
allow updates to safely be applied - fault domains (FDs) and update domains (UDs). You can read more
about how to manage the availability of Linux VMs or Windows VMs.
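As a sketch of how an availability set is typically used (the names and domain counts below are illustrative, not prescribed values), you create the set first and then reference it when creating each VM:
az vm availability-set create \
  --resource-group myResourceGroup \
  --name myAvailabilitySet \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

az vm create \
  --resource-group myResourceGroup \
  --name myVM1 \
  --image Win2016Datacenter \
  --availability-set myAvailabilitySet \
  --admin-username azureuser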

Fault domains
A fault domain is a logical group of underlying hardware that share a common power source and net-
work switch, similar to a rack within an on-premises datacenter. As you create VMs within an availability
set, the Azure platform automatically distributes your VMs across these fault domains. This approach
limits the impact of potential physical hardware failures, network outages, or power interruptions.

Update domains
An update domain is a logical group of underlying hardware that can undergo maintenance or be
rebooted at the same time. As you create VMs within an availability set, the Azure platform automatically
distributes your VMs across these update domains. This approach ensures that at least one instance of
your application always remains running as the Azure platform undergoes periodic maintenance. The
order of update domains being rebooted may not proceed sequentially during planned maintenance, but
only one update domain is rebooted at a time.

Managed Disk fault domains


For VMs using Azure Managed Disks, VMs are aligned with managed disk fault domains when using a
managed availability set. This alignment ensures that all the managed disks attached to a VM are within
the same managed disk fault domain. Only VMs with managed disks can be created in a managed
availability set. The number of managed disk fault domains varies by region - either two or three man-
aged disk fault domains per region. You can read more about these managed disk fault domains for Linux
VMs or Windows VMs.

Availability zones
Availability zones, an alternative to availability sets, expand the level of control you have to maintain the
availability of the applications and data on your VMs. An Availability Zone is a physically separate zone
within an Azure region. There are three Availability Zones per supported Azure region. Each Availability
Zone has a distinct power source, network, and cooling. By architecting your solutions to use replicated
VMs in zones, you can protect your apps and data from the loss of a datacenter. If one zone is compro-
mised, then replicated apps and data are instantly available in another zone.
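For example, a zonal VM can be requested by passing a zone number at creation time. This is a sketch with placeholder names, and it only succeeds in regions that support Availability Zones:
az vm create \
  --resource-group myResourceGroup \
  --name myZonalVM \
  --image UbuntuLTS \
  --zone 1 \
  --generate-ssh-keys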

Demo: Creating an Azure VM by using the Azure portal
In this Demo, you will learn how to create and access a Windows virtual machine in the portal.

Create the virtual machine


1. Choose Create a resource in the upper left-hand corner of the Azure portal.
2. In the New page, under Popular, select Windows Server 2016 Datacenter, then choose Create.
3. In the Basics tab, under Project details, make sure the correct subscription is selected and then choose to Create new resource group. Type myResourceGroup for the name. Creating a new resource group will make it easier to clean up at the end of the demo.
4. Under Instance details, type myVM for the Virtual machine name and choose a Location. Leave the
other defaults.
5. Under Administrator account, provide a username, such as azureuser and a password. The password
must be at least 12 characters long and meet the defined complexity requirements.
6. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP
from the drop-down.
7. Move to the Management tab, and under Monitoring turn Off Boot Diagnostics. This will eliminate
validation errors.
8. Leave the remaining defaults and then select the Review + create button at the bottom of the page.

Connect to virtual machine


Create a remote desktop connection to the virtual machine. These directions tell you how to connect to
your VM from a Windows computer. On a Mac, you need to install an RDP client from the Mac App Store.
1. Select the Connect button on the virtual machine properties page.
2. In the Connect to virtual machine page, keep the default options to connect by DNS name over port
3389 and click Download RDP file.
3. Open the downloaded RDP file and select Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account. Type the username as localhost\username, enter the password you created for the virtual machine, and then select OK.
5. You may receive a certificate warning during the sign-in process. Select Yes or Continue to create the
connection.

Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources. To
do so, select the resource group for the virtual machine, select Delete, then confirm the name of the
resource group to delete.
Demo: Creating an Azure VM by using PowerShell
In this Demo, you will learn how to create a virtual machine by using PowerShell.

Create the virtual machine


1. Launch the Cloud Shell3, or open a local PowerShell window.
●● If running locally use the Connect-AzAccount command to connect to your account.
2. Run this code:
$myResourceGroup = Read-Host -prompt "Enter a resource group name: "
$myLocation = Read-Host -prompt "Enter a location (ie. WestUS): "
$myVM = Read-Host -prompt "Enter a VM name: "
$vmAdmin = Read-Host -prompt "Enter a VM admin user name: "

# The password must meet the length and complexity requirements.
$vmPassword = Read-Host -prompt "Enter the admin password: " -AsSecureString

# Build a credential object from the admin user name and the secure password.
$vmCredential = New-Object System.Management.Automation.PSCredential ($vmAdmin, $vmPassword)

# Create a resource group.
New-AzResourceGroup -Name $myResourceGroup -Location $myLocation

# Create the virtual machine, using the credential gathered above as the logon credentials for the VM.
New-AzVm `
-ResourceGroupName $myResourceGroup `
-Name $myVM `
-Location $myLocation `
-Credential $vmCredential

It will take a few minutes for the VM to be created.

Verify the machine creation in the portal


1. Access the portal and view your virtual machines.
2. Verify the new VM was created.
3. Connect to the new VM:
●● Select the VM you just created.
●● Select Connect.
●● Launch Remote Desktop and connect to the VM using the IP address and port information displayed.

3 https://shell.azure.com
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources.
Replace <myResourceGroup> with the name you used earlier, or you can delete it through the portal.
Remove-AzResourceGroup -Name "<myResourceGroup>"

Linux VMs in Azure


Azure supports many Linux distributions and versions including CentOS by OpenLogic, Core OS, Debian,
Oracle Linux, Red Hat Enterprise Linux, and Ubuntu.

Here are a few things to know about the Linux distributions.


●● There are hundreds of Linux images in the Azure Marketplace.
●● Linux has the same deployment options as for Windows virtual machines: PowerShell (Resource
Manager), Portal, and Command Line Interface. You can manage your Linux virtual machines with a
host of popular open-source DevOps tools such as Puppet, and Chef.
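As an illustration of how similar the Linux workflow is, a minimal Ubuntu VM could be created from the Azure CLI with a single command; the names below are placeholders, and SSH keys are generated if you don't already have them:
az vm create \
  --resource-group myResourceGroup \
  --name myLinuxVM \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys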
For more information, you can visit:
●● Linux virtual machines (Documentation) - https://docs.microsoft.com/en-us/azure/virtual-machines/linux/
Create and deploy ARM templates


Azure Resource Manager overview
Azure Resource Manager is the deployment and management service for Azure. It provides a manage-
ment layer that enables you to create, update, and delete resources in your Azure subscription. You use
management features, like access control, locks, and tags, to secure and organize your resources after
deployment.
When a user sends a request from any of the Azure tools, APIs, or SDKs, Resource Manager receives the
request. It authenticates and authorizes the request. Resource Manager sends the request to the Azure
service, which takes the requested action. Because all requests are handled through the same API, you
see consistent results and capabilities in all the different tools.
The following image shows the role Azure Resource Manager plays in handling Azure requests.

Terminology
If you're new to Azure Resource Manager, there are some terms you might not be familiar with.
●● Resource - A manageable item that is available through Azure. Some common resources are a virtual
machine, storage account, web app, database, and virtual network, but there are many more.
●● Resource group - A container that holds related resources for an Azure solution. The resource group
can include all the resources for the solution, or only those resources that you want to manage as a
group. You decide how you want to allocate resources to resource groups based on what makes the
most sense for your organization.
●● Resource provider - A service that supplies the resources you can deploy and manage through
Resource Manager. Each resource provider offers operations for working with the resources that are
deployed. Some common resource providers are Microsoft.Compute, which supplies the virtual
machine resource, Microsoft.Storage, which supplies the storage account resource, and Microsoft.
Web, which supplies resources related to web apps.
●● Resource Manager template - A JavaScript Object Notation (JSON) file that defines one or more
resources to deploy to a resource group. It also defines the dependencies between the deployed
resources. The template can be used to deploy the resources consistently and repeatedly.
●● Declarative syntax - Syntax that lets you state “Here is what I intend to create” without having to
write the sequence of programming commands to create it. The Resource Manager template is an
example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to
Azure.
MCT USE ONLY. STUDENT USE PROHIBITED
 Create and deploy ARM templates  151

Understand scope
Azure provides four levels of management scope: management groups, subscriptions, resource groups,
and resources. The following image shows an example of these layers.

You apply management settings at any of these levels of scope. The level you select determines how
widely the setting is applied. Lower levels inherit settings from higher levels. For example, when you apply
a policy to the subscription, the policy is applied to all resource groups and resources in your subscrip-
tion. When you apply a policy on the resource group, that policy is applied to the resource group and all its resources. However, another resource group doesn't have that policy assignment.
You can deploy templates to management groups, subscriptions, or resource groups.
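For example, deploying a template at resource group scope uses the same command shown later in this lesson for deployment modes; the group and file names here are placeholders (subscription- and management group-scoped deployments use separate commands):
az group deployment create \
  --resource-group ExampleGroup \
  --template-file azuredeploy.json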

ARM template deployment


With Resource Manager, you can create a template (in JSON format) that defines the infrastructure and
configuration of your Azure solution. By using a template, you can repeatedly deploy your solution
throughout its lifecycle and have confidence your resources are deployed in a consistent state.
✔️ Note: To view the JSON syntax for specific resource types, see Define resources in Azure Resource
Manager templates4.
When you deploy a template, Resource Manager converts the template into REST API operations. For
example, when Resource Manager receives a template with the following resource definition:
"resources": [
{
"apiVersion": "2016-01-01",
"type": "Microsoft.Storage/storageAccounts",
"name": "mystorageaccount",
"location": "westus",
"sku": {
"name": "Standard_LRS"
},

4 https://docs.microsoft.com/en-us/azure/templates/
MCT USE ONLY. STUDENT USE PROHIBITED 152  Module 5 Implement IaaS solutions  

"kind": "Storage",
"properties": {
}
}
]

It converts the definition to the following REST API operation, which is sent to the Microsoft.Storage
resource provider:
PUT
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/mystorageaccount?api-version=2016-01-01

REQUEST BODY
{
  "location": "westus",
  "properties": {
  },
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "Storage"
}

Defining multi-tiered templates


How you define templates and resource groups is entirely up to you and how you want to manage your
solution. For example, you can deploy a three tier application through a single template to a single
resource group.

But, you don't have to define your entire infrastructure in a single template. Often, it makes sense to
divide your deployment requirements into a set of targeted, purpose-specific templates. You can easily
reuse these templates for different solutions. To deploy a particular solution, you create a master template that links all the required templates. The following image shows how to deploy a three tier solution
through a parent template that includes three nested templates.

If you envision your tiers having separate lifecycles, you can deploy your three tiers to separate resource
groups. The resources can still be linked to resources in other resource groups.
Azure Resource Manager analyzes dependencies to ensure resources are created in the correct order. If
one resource relies on a value from another resource (such as a virtual machine needing a storage
account for disks), you set a dependency. For more information, see Defining dependencies in Azure
Resource Manager templates.
You can also use the template for updates to the infrastructure. For example, you can add a resource to
your solution and add configuration rules for the resources that are already deployed. If the template
specifies creating a resource but that resource already exists, Azure Resource Manager performs an
update instead of creating a new asset. Azure Resource Manager updates the existing asset to the same
state as it would be as new.
Resource Manager provides extensions for scenarios when you need additional operations such as
installing particular software that isn't included in the setup. If you're already using a configuration
management service, like DSC, Chef or Puppet, you can continue working with that service by using
extensions.
Finally, the template becomes part of the source code for your app. You can check it in to your source
code repository and update it as your app evolves. You can edit the template through Visual Studio.

Conditional deployment in Resource Manager templates
Sometimes you need to optionally deploy a resource in a template. Use the condition element to spec-
ify whether the resource is deployed. The value for this element resolves to true or false. When the value
is true, the resource is created. When the value is false, the resource isn't created. The value can only be
applied to the whole resource.

New or existing resource


You can use conditional deployment to create a new resource or use an existing one. The following exam-
ple shows how to use condition to deploy a new storage account or use an existing storage account.
{
  "condition": "[equals(parameters('newOrExisting'),'new')]",
  "type": "Microsoft.Storage/storageAccounts",
  "name": "[variables('storageAccountName')]",
  "apiVersion": "2017-06-01",
  "location": "[parameters('location')]",
  "sku": {
    "name": "[variables('storageAccountType')]"
  },
  "kind": "Storage",
  "properties": {}
}

When the parameter newOrExisting is set to new, the condition evaluates to true. The storage account is
deployed. However, when newOrExisting is set to existing, the condition evaluates to false and the
storage account isn't deployed.

Runtime functions
If you use a reference or list function with a resource that is conditionally deployed, the function is
evaluated even if the resource isn't deployed. You get an error if the function refers to a resource that
doesn't exist.
Use the if function to make sure the function is only evaluated for conditions when the resource is
deployed.
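A minimal sketch of that pattern, assuming the newOrExisting parameter and storageAccountName variable from the example above; the hypothetical storageEndpoint output falls back to an empty string when the account isn't deployed, so reference() is never evaluated for a missing resource:
"outputs": {
  "storageEndpoint": {
    "type": "string",
    "value": "[if(equals(parameters('newOrExisting'), 'new'), reference(variables('storageAccountName')).primaryEndpoints.blob, '')]"
  }
}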

Additional resources
●● Azure Resource Manager template functions

●● https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-tem-
plate-functions

Azure Resource Manager deployment modes


When deploying your resources, you specify that the deployment is either an incremental update or a
complete update. The difference between these two modes is how Resource Manager handles existing
resources in the resource group that aren't in the template. The default mode is incremental.
For both modes, Resource Manager tries to create all resources specified in the template. If the resource
already exists in the resource group and its settings are unchanged, no operation is taken for that
resource. If you change the property values for a resource, the resource is updated with those new values.
If you try to update the location or type of an existing resource, the deployment fails with an error.
Instead, deploy a new resource with the location or type that you need.

Complete mode
In complete mode, Resource Manager deletes resources that exist in the resource group but aren't
specified in the template.
If your template includes a resource that isn't deployed because condition evaluates to false, the result
depends on which REST API version you use to deploy the template. If you use a version earlier than
2019-05-10, the resource isn't deleted. With 2019-05-10 or later, the resource is deleted. The latest
versions of Azure PowerShell and Azure CLI delete the resource.
Be careful using complete mode with copy loops. Any resources that aren't specified in the template
after resolving the copy loop are deleted.

Incremental mode
In incremental mode, Resource Manager leaves unchanged resources that exist in the resource group
but aren't specified in the template.
However, when redeploying an existing resource in incremental mode, the outcome is different. Specify all properties for the resource, not just the ones you're updating. A common misunderstanding is to think that properties that aren't specified are left unchanged. If you don't specify certain properties, Resource Manager interprets the update as overwriting those values.

Example result
To illustrate the difference between incremental and complete modes, consider the following table.

Resource group contains   Template contains   Incremental result   Complete result
Resource A                Resource A          Resource A           Resource A
Resource B                Resource B          Resource B           Resource B
Resource C                Resource D          Resource C           Resource D
Resource D                                    Resource D
When deployed in incremental mode, Resource D is added to the existing resource group. When
deployed in complete mode, Resource D is added and Resource C is deleted.

Set deployment mode


To set the deployment mode when deploying with PowerShell, use the Mode parameter.
New-AzResourceGroupDeployment `
-Mode Complete `
-Name ExampleDeployment `
-ResourceGroupName ExampleResourceGroup `
-TemplateFile c:\MyTemplates\storage.json

To set the deployment mode when deploying with Azure CLI, use the mode parameter.
az group deployment create \
--name ExampleDeployment \
--mode Complete \
--resource-group ExampleGroup \
--template-file storage.json \
--parameters storageAccountType=Standard_GRS
Demo: Create ARM templates by using the Azure Portal
In this Demo you will learn how to create, edit, and deploy an Azure Resource Manager template by using
the Azure portal. This demo shows you how to create an Azure Storage account, but you can use the
same process to create other Azure resources.

Generate a template using the portal


Using the Azure portal, you can configure a resource, for example an Azure Storage account. Before you
deploy the resource, you can export your configuration into a Resource Manager template. You can save
the template and reuse it in the future.
1. Sign in to the Azure portal: https://portal.azure.com/.
2. Select Create a resource > Storage > Storage account.
3. Enter the following information.
●● Resource group: Select Create new, and specify a resource group name of your choice.
●● Name: Give your storage account a unique name. The storage account name must be unique
across all of Azure. If you get an error message saying “The storage account name is already
taken”, try using <your name>storage<Today's date in MMDD>, for example mystorage1016.
●● You can use the default values for the rest of the properties. Note: Some of the exported tem-
plates require some edits before you can deploy them.
4. Select Review + create on the bottom of the screen.
❗️ Note: Do not select Create in the next step.
5. Select Download a template for automation on the bottom of the screen. The portal shows the
generated template:
●● The main pane shows the template. It is a JSON file with six top-level elements: schema, contentVersion, parameters, variables, resources, and outputs.
●● There are six parameters defined. One of them is called storageAccountName. In the next section,
you edit the template to use a generated name for the storage account.
●● In the template, one Azure resource is defined. The type is Microsoft.Storage/storageAc-
counts. Note how the resource is defined and the definition structure.
6. Select Download from the top of the screen. Open the downloaded zip file, and then save template.
json to your computer. In the next section, you use a template deployment tool to edit the template.
7. Select the Parameters tab to see the values you provided for the parameters. Write down these values; you need them in the next section when you deploy the template.

Edit and deploy the template


The Azure portal can be used to perform some basic template editing by using a portal tool called
Template Deployment. To edit a more complex template, consider using Visual Studio Code which
provides richer edit functionalities.
Azure requires that each Azure service has a unique name. The deployment fails if you enter a storage account name that already exists. To avoid this issue, you can use the template function uniqueString() to generate a unique storage account name.
1. In the Azure portal, select Create a resource.
2. In Search the Marketplace, type template deployment, and then press ENTER.
3. Select Template deployment (deploy using custom templates).
4. Select Create.
5. Select Build your own template to open the editor.
6. Select Load file, and then select the template.json file you downloaded in the last section.
7. Make the following three changes to the template:
●● Remove the storageAccountName parameter from the parameters element.
●● Add one variable called storageAccountName as shown below to the variables element. The
example below will generate a unique Storage Account name:
"storageAccountName": "[concat(uniqueString(subscription().subscriptionId), 'storage')]"

●● Update the name element of the Microsoft.Storage/storageAccounts resource to use the newly
defined variable instead of the parameter:
"name": "[variables('storageAccountName')]",

8. Select Save.
9. In the BASICS section of the form that appears select the resource group you created in the last
section.
10. In the SETTINGS section of the form enter the values from the parameters you wrote down in Step 7 of the previous section. Here is a screenshot of a sample deployment:
11. Accept the terms and conditions and then select Purchase.
12. Select the bell icon (notifications) from the top of the screen to see the deployment status. Wait until
the deployment is completed.
13. Select Go to resource group from the notification pane. You can see the deployment status was
successful, and there is only one storage account in the resource group. The storage account name is
a unique string generated by the template.

Clean up resources
When the Azure resources are no longer needed, clean up the resources you deployed by deleting the
resource group.

Demo: Create ARM templates by using Visual Studio Code
In this Demo you will learn how to use Visual Studio Code, and the Azure Resource Manager Tools
extension, to create and edit Azure Resource Manager templates. You can create Resource Manager
templates in Visual Studio Code without the extension, but the extension provides autocomplete options
that simplify template development.
It's often easier, and better, to begin building your ARM template based off one of the existing Quickstart
templates available on the Azure Quickstart Templates5 site.
This Demo is based on the Create a standard storage account6 template.

Prerequisites
You will need:
●● Visual Studio Code. You can download a copy here: https://code.visualstudio.com/.
●● Resource Manager Tools extension.
Follow these steps to install the Resource Manager Tools extension:
1. Open Visual Studio Code.
2. Press CTRL+SHIFT+X to open the Extensions pane.
3. Search for Azure Resource Manager Tools, and then select Install.
4. Select Reload to finish the extension installation.

Open the Quickstart template


1. From Visual Studio Code, select File > Open File.
2. In File name, paste the following URL:
https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-storage-account-create/azuredeploy.json

3. Select Open to open the file.


4. Select File > Save As to save the file as azuredeploy.json to your local computer.

Edit the template


Add one more element into the outputs section to show the storage URI.
1. Add one more output to the exported template:
"storageUri": {
"type": "string",
"value": "[reference(variables('storageAccountName')).primaryEndpoints.blob]"
},

When you are done, the outputs section looks like:


"outputs": {
"storageAccountName": {
"type": "string",
"value": "[variables('storageAccountName')]"

5 https://azure.microsoft.com/resources/templates/
6 https://azure.microsoft.com/resources/templates/101-storage-account-create/
MCT USE ONLY. STUDENT USE PROHIBITED 160  Module 5 Implement IaaS solutions  

},
"storageUri": {
"type": "string",
"value": "[reference(variables('storageAccountName')).primaryEndpoints.blob]"
}
}

If you copied and pasted the code inside Visual Studio Code, try to retype the value element to
experience the IntelliSense capability of the Resource Manager Tools extension.

2. Select File>Save to save the file.

Deploy the template


There are many methods for deploying templates, you will be using the Azure Cloud shell.
1. Sign in to the Azure Cloud shell7
2. Choose the PowerShell environment on the upper left corner. Restarting the shell is required when
you switch.
3. Select Upload/download files, and then select Upload.

Select the file you saved in the previous section. The default name is azuredeploy.json. The template
file must be accessible from the shell. You can use the ls command and the cat command to verify the
file was uploaded successfully.
4. From the Cloud shell, run the following commands.
$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"

New-AzResourceGroup -Name $resourceGroupName -Location "$location"


New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile
"$HOME/azuredeploy.json"

7 https://shell.azure.com/
Update the template file name if you save the file to a name other than azuredeploy.json.
The following screenshot shows a sample deployment:

The storage account name and the storage URL in the outputs section are highlighted on the screen-
shot.

Clean up resources
When the Azure resources are no longer needed, clean up the resources you deployed by deleting the
resource group.
Create container images for solutions


Containers and Docker overview
A container is a loosely isolated environment that allows us to build and run software packages. These
software packages include the code and all dependencies to run applications quickly and reliably on any
computing environment. We call these packages container images.
The container image becomes the unit we use to distribute our applications. Software containerization is
an OS virtualization method that is used to deploy and run containers without using a virtual machine
(VM). Containers can run on physical hardware, in the cloud, VMs, and across multiple OSs.

Docker
Docker is a containerization platform used to develop, ship, and run containers. Docker doesn't use a
hypervisor, and you can run Docker on your desktop or laptop if you're developing and testing applica-
tions. The desktop version of Docker supports Linux, Windows, and macOS. For production systems,
Docker is available for server environments, including many variants of Linux and Microsoft Windows
Server 2016 and above.
The Docker platform consists of several components that we use to build, run, and manage our contain-
erized applications.

Docker Engine
The Docker Engine consists of several components configured as a client-server implementation where
the client and server run simultaneously on the same host. The client communicates with the server using
a REST API, which allows the client to also communicate with a remote server instance.

The Docker client


The Docker client is a command-line application named docker that provides us with a command line
interface (CLI) to interact with a Docker server. The docker command uses the Docker REST API to send
instructions to either a local or remote server and functions as the primary interface we use to manage
our containers.
The Docker server
The Docker server is a daemon named dockerd. The dockerd daemon responds to requests from the
client via the Docker REST API and can interact with other daemons. The Docker server is also responsible
for tracking the lifecycle of our containers.
Docker objects
There are several objects that you'll create and configure to support your container deployments. These
include networks, storage volumes, plugins, and other service objects. We won't cover all of these objects
here, but it's good to keep in mind that these objects are items that we can create and deploy as needed.
Docker Hub
Docker Hub is a Software-as-a-Service (SaaS) Docker container registry. Docker registries are repositories
that we use to store and distribute the container images we create. Docker Hub is the default public
registry Docker uses for image management.
Keep in mind that you can create and use a private Docker registry or use one of the many cloud provider
options available. For example, you can use Azure Container Registry to store Docker containers to use in
several Azure container enabled services.

How Docker images work


Here we'll look at the differences between software, packages, and images as used in Docker. Knowing
the differences between these concepts will help us better understand how Docker images work.
We'll also briefly discuss the roles of the OS running on the host and the OS running in the container.

Software packaged into a container


The software packaged into a container isn't limited to the applications our developers build. When we
talk about software, we refer to application code, system packages, binaries, libraries, configuration files,
and the operating system running in the container.

Container images
A container image is a portable package that contains software. It's this image that, when run, becomes
our container. The container is the in-memory instance of an image.
A container image is immutable. Once you've built an image, the image can't be changed. The only way
to change an image is to create a new image. This feature is our guarantee that the image we use in
production is the same image used in development and QA.

Host OS
The host OS is the OS on which the Docker engine runs. Docker containers running on Linux share the
host OS kernel and don't require a container OS as long as the binary can access the OS kernel directly.
However, Windows containers need a container OS. The container depends on the OS kernel to manage
services such as the file system, network management, process scheduling, and memory management.

Container OS
The container OS is the OS that is part of the packaged image. We have the flexibility to include different
versions of Linux or Windows OSs in a container. This flexibility allows us to access specific OS features or
install additional software our applications may use.
The container OS is isolated from the host OS and is the environment in which we deploy and run our
application. Combined with the image's immutability, this isolation means the environment for our
application running in development is the same as in production.
Stackable Unification File System (Unionfs)


Unionfs is used to create Docker images. Unionfs is a filesystem that allows you to stack several
directories, called branches, in such a way that it appears as if the content is merged. However, the
content is physically kept separate. Unionfs allows you to add and remove branches as you build out
your file system.
For example, assume we're building an image for our web application from earlier. We'll layer the Ubuntu
distribution as a base image on top of the boot file system. Next we'll install Nginx and our web app.
We're effectively layering Nginx and the web app on top of the original Ubuntu image.
A final writeable layer is created once the container is run from the image. This layer however, does not
persist when the container is destroyed.

Container File System ← Writable


Image ← Add App
Image ← Add Nginx
Base Image ← Create from Ubuntu
Boot File System ← Kernel

Base and parent images


A base image is an image that uses the Docker scratch image. The scratch image is an empty
container image that doesn't create a filesystem layer. This image assumes that the application you're
going to run can directly use the host OS kernel.
A parent image is a container image from which you create your images. For example, instead of creating
an image from scratch and then installing Ubuntu, we'll rather use an image already based on Ubuntu.
We can even use an image that already has Nginx installed. A parent image usually includes a container
OS.
Both image types allow us to create a reusable image. However, base images allow us more control over
the contents of the final image. Recall from earlier that an image is immutable, you can only add to an
image and not subtract.

Dockerfile overview
A Dockerfile is a text file that contains the instructions we use to build and run a Docker image. The
following aspects of the image are defined:
●● The base or parent image we use to create the new image
●● Commands to update the base OS and install additional software
●● Build artifacts to include, such as a developed application
●● Services to expose, such as storage and network configuration
●● Command to run when the container is launched
Let's map these aspects to an example Dockerfile. Suppose we're creating a Docker image for an ASP.NET
Core website. The Dockerfile may look like the following example.
# Step 1: Specify the parent image for the new image
FROM ubuntu:18.04

# Step 2: Update OS packages and install additional software
RUN apt -y update && apt install -y wget nginx software-properties-common apt-transport-https \
    && wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
    && dpkg -i packages-microsoft-prod.deb \
    && add-apt-repository universe \
    && apt -y update \
    && apt install -y dotnet-sdk-3.0

# Step 3: Configure the Nginx environment
CMD service nginx start

# Step 4: Copy the Nginx site configuration into the image
COPY ./default /etc/nginx/sites-available/default

# Step 5: Configure the work directory
WORKDIR /app

# Step 6: Copy the website code to the container
COPY ./website/. .

# Step 7: Configure network requirements
EXPOSE 80:8080

# Step 8: Define the entry point of the process that runs in the container
ENTRYPOINT ["dotnet", "website.dll"]

We're not going to cover the Dockerfile file specification here or the detail of each command in our
above example. However, notice that there are several commands in this file that allow us to manipulate
the structure of the image.
Recall, we mentioned earlier that Docker images make use of unionfs. Each of these steps creates a
cached container image as we build the final container image. These temporary images are layered on
top of the previous and presented as single image once all steps complete.
Finally, notice the last step, step 8. The ENTRYPOINT in the file indicates which process will execute once
we run a container from an image.

Additional resources
●● Dockerfile reference

●● https://docs.docker.com/engine/reference/builder/

Building and managing Docker images


Docker images are large files that initially get stored on your PC and we need tools to manage these files.
The Docker CLI allows us to manage images by building, listing, removing, and running them. We
manage Docker images by using the docker client. The client doesn't execute the commands directly
and sends all queries to the dockerd daemon.
We aren't going to cover all the client commands and command flags here, but we'll look at some of the
most used commands.
✔️ Note: For a full list of client commands visit https://docs.docker.com/engine/reference/commandline/docker/

Building an image
We use the docker build command to build Docker images. Let's assume we use the Dockerfile defini-
tion from earlier to build an image. Here is an example that shows the build command.
docker build -t temp-ubuntu .

Here is the output generated from the build command:


Sending build context to Docker daemon 4.69MB
Step 1/8 : FROM ubuntu:18.04
---> a2a15febcdf3
Step 2/8 : RUN apt -y update && apt install -y wget nginx software-proper-
ties-common apt-transport-https && wget -q https://packages.microsoft.com/
config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.
deb && dpkg -i packages-microsoft-prod.deb && add-apt-repository universe
&& apt -y update && apt install -y dotnet-sdk-3.0
---> Using cache
---> feb452bac55a
Step 3/8 : CMD service nginx start
---> Using cache
---> ce3fd40bd13c
Step 4/8 : COPY ./default /etc/nginx/sites-available/default
---> 97ff0c042b03
Step 5/8 : WORKDIR /app
---> Running in 883f8dc5dcce
Removing intermediate container 883f8dc5dcce
---> 6e36758d40b1
Step 6/8 : COPY ./website/. .
---> bfe84cc406a4
Step 7/8 : EXPOSE 80:8080
---> Running in b611a87425f2
Removing intermediate container b611a87425f2
---> 209b54a9567f
Step 8/8 : ENTRYPOINT ["dotnet", "website.dll"]
---> Running in ea2efbc6c375
Removing intermediate container ea2efbc6c375
---> f982892ea056
Successfully built f982892ea056
Successfully tagged temp-ubuntu:latest

Notice the steps listed in the output. When each step executes, a new layer gets added to the image
we're building.
Also, notice that we execute a number of commands to install software and manage configuration. For
example, in step 2, we run the apt -y update and apt install -y commands to update the OS.
These commands execute in a running container that is created for that step. Once the command has run,
the intermediate container is removed. The underlying cached image is kept on the build host and not
automatically deleted. This optimization ensures that later builds reuse these images to speed up build
times.

Image tags
An image tag is a text string that is used to version an image.
In the example build from earlier, notice the last build message that reads "Successfully tagged temp-ubuntu:latest". When building an image, we name and optionally tag the image using the -t command flag. In our example, we named the image using -t temp-ubuntu, and the resulting image was tagged temp-ubuntu:latest. An image is labeled with the latest tag if you don't specify a tag.
A single image can have multiple tags assigned to it. By convention, the most recent version of an image
is assigned the latest tag and a tag that describes the image version number. When you release a new
version of an image, you can reassign the latest tag to reference the new image.
Here is another example. Suppose you want to use the .NET Core samples Docker images. Here we have four versions that we can choose from:
●● mcr.microsoft.com/dotnet/core/samples:dotnetapp
●● mcr.microsoft.com/dotnet/core/samples:aspnetapp
●● mcr.microsoft.com/dotnet/core/samples:wcfservice
●● mcr.microsoft.com/dotnet/core/samples:wcfclient
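To make the versioning convention concrete, the same local image can carry both a version tag and the latest tag. A quick sketch using the temp-ubuntu image built earlier (the version number is arbitrary); running docker images afterwards shows both tags pointing at the same image ID:
docker tag temp-ubuntu:latest temp-ubuntu:version-1.0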

Listing and removing images


The Docker software automatically configures a local image registry on your machine. You can view the
images in this registry with the docker images command.
You can remove an image from the local docker registry with the docker rmi command. Specify the
name or ID of the image to remove. This example removes the image for the sample web app using the
image name:
docker rmi temp-ubuntu:version-1.0

You can't remove an image if the image is still in use by a container. The docker rmi command returns
an error message, which lists the container relying on the image.

Docker containers
A Docker image contains the application and environment required by the application to run, and a
container is a running instance of the image.

Docker container lifecycle


A Docker container has a life cycle that we can manage, and we can track the state of the container.

Lifecycle state   Example                  Description
Created           docker run <name>        Once an image is specified to run, Docker finds the image, loads a container from the image, and executes the command specified as the entry point. It's at this point that the container is available for management.
Paused            docker pause <name>      Pausing a container will suspend all processes. This command allows the container to continue processes at a later stage. The docker unpause command unsuspends all processes.
Restart           docker restart <name>    The container receives a stop command, followed by a start command. If the container doesn't respond to the stop command, then a kill signal is sent.
Stopped           docker stop <name>       The stop command sends a termination signal to the container and the process running in the container.
Remove            docker rm <name>         All data in the container is destroyed once you remove the container. It's essential to always consider containers as temporary when thinking about storing data.
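The lifecycle commands above can be chained together against a single container. A quick sketch using the temp-ubuntu image from earlier (the container name is arbitrary):
docker run -d --name tempapp temp-ubuntu:latest
docker pause tempapp
docker unpause tempapp
docker stop tempapp
docker rm tempapp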

Docker container storage configuration


Containers should always be considered temporary when the application in a container needs to store data. Containers can make use of two options to persist data. The first option is to make use of volumes, and the second is bind mounts.
Volumes
A volume is stored within a directory on the host filesystem at a specific folder location. Docker mounts and manages the volumes in the container. Once mounted, these volumes are isolated from the host machine. Multiple containers can simultaneously use the same volumes, and volumes don't get removed automatically when a container stops using them.
You create and manage a new volume by using the docker volume create command, and a Dockerfile can declare a volume with the VOLUME instruction, which means volumes can be created as part of the container creation process. Docker will also create the volume if it doesn't exist when you try to mount the volume into a container for the first time.

Bind mounts
A bind mount is conceptually the same as a volume; however, instead of using a specific folder, you can mount any file or folder on the host, as long as the host can change the contents of these mounts. Just like volumes, a bind mount is created if it doesn't yet exist on the host when you mount it.
Bind mounts have limited functionality compared to volumes, and even though they're more performant, they depend on the host having a specific folder structure in place.
Volumes are considered the preferred data storage strategy to use with containers.
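A quick sketch of both options (the volume name, host folder, and image are placeholders): the named volume survives the container, while the bind mount exposes a host folder directly:
docker volume create webdata
docker run -d --name web1 -v webdata:/app/data temp-ubuntu:latest
docker run -d --name web2 -v "$(pwd)/config:/app/config" temp-ubuntu:latest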

Docker container network configuration


The default Docker network configuration allows for the isolation of containers on the Docker host. This
feature enables us to build and configure applications that can communicate securely with each other.
Docker provides three pre-configured network configurations:
●● Bridge
●● Host
●● none
You choose which of these network configurations to apply to your container depending on its network
requirements.

Bridge network option


The bridge network is the default configuration applied to containers when launched without specifying
any additional network configuration. This network is an internal, private network used by the container
and isolates the container network from the Docker host network.
Each container in the bridge network is assigned an IP address and subnet mask with the hostname
defaulting to the container name. Containers connected to the default bridge network are allowed to
access other bridge connected containers by IP address. The bridge network doesn't allow communica-
tion between containers using hostnames.
By default, Docker doesn't publish any container ports. The Docker --publish flag enables port mapping between the container ports and the Docker host ports. The publish flag effectively configures a firewall rule that maps the ports.
In the following example, the --publish flag maps port 8080 on the Docker host to port 80 in the container, so requests arriving at the host on port 8080 reach the app listening on port 80 inside the container.

--publish 8080:80

Any client browsing to the Docker host IP and port 8080 can access the app.
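As a sketch, the same mapping applied when starting a container (the image and container names are only examples):
docker run -d --name web --publish 8080:80 nginx
A request such as curl http://localhost:8080 on the Docker host is then served by the web server listening on port 80 inside the container.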

Host network option


The host network allows the container to run directly on the host's network. This configuration effectively removes the isolation between the host and the container at a network level.
In our example, let's assume we decide to change the networking configuration to the host network option. Our tracking portal is still accessible using the host IP. We can now use the well-known port 80 instead of a mapped port.
Keep in mind that the container can use only ports not already used by the host.
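A sketch of starting a container on the host network (nginx is only an example image, and host networking requires a Linux Docker host, as noted in the operating system considerations below):
docker run -d --network host nginx
The web server is then reachable directly on port 80 of the Docker host, with no --publish mapping required.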

None network option


Use the none network option to disable networking for containers.

Operating system considerations


Keep in mind that there are differences between desktop operating systems for the Docker network configuration options. For example, the docker0 network interface isn't available on macOS when using the bridge network, and the host network configuration isn't supported on either Windows or macOS desktops.
These differences might affect the way your developers configure their workflow to manage container
development.

Demo: Retrieve and deploy existing Docker image locally
In this demo, you will learn to perform the following actions:
●● Retrieve an image with a sample app from Docker Hub and run it.
●● Examine the local state of Docker to help understand the elements that are deployed.
●● Remove the container and image from your computer.

Prerequisites
●● You'll need a local installation of Docker

●● https://www.docker.com/products/docker-desktop

Retrieve and run sample app from Docker Hub


1. Open a command prompt on your local computer
2. Retrieve the ASP.NET Sample app image from the Docker Hub registry. This image contains a sample
web app developed by Microsoft.

docker pull mcr.microsoft.com/dotnet/core/samples:aspnetapp

3. Verify that the image has been stored locally.


docker image list

You should see a repository named mcr.microsoft.com/dotnet/core/samples with a tag of aspnetapp.


4. Start the sample app. Specify the -d flag to run it as a background, non-interactive app. Use the -p
flag to map port 80 in the container that is created to port 8080 locally, to avoid conflicts with any
web apps already running on your computer. The command will respond with a lengthy hexadecimal
identifier for the instance.
docker run -d -p 8080:80 mcr.microsoft.com/dotnet/core/samples:aspnetapp

5. Open a web browser and go to the page for the sample web app at http://localhost:8080.

Examine the container in the local Docker registry


1. At the command prompt, view the running containers in the local registry.
docker ps

The output should look similar to this:


CONTAINER ID   IMAGE                                             COMMAND                  CREATED          STATUS          PORTS                  NAMES
bffd59ae5c22   mcr.microsoft.com/dotnet/core/samples:aspnetapp   "dotnet aspnetapp.dll"   12 seconds ago   Up 11 seconds   0.0.0.0:8080->80/tcp   competent_hoover

The COMMAND field shows the container started by running the command dotnet aspnetapp.dll. This
command invokes the .NET Core runtime to start the code in the aspnetapp.dll (the code for the
sample web app). The PORTS field indicates port 80 in the image was mapped to port 8080 on your
computer. The STATUS field shows the application is still running. Make a note of the container's
NAME.
2. Stop the Docker container. Specify the container name for the web app in the following command, in
place of <NAME>.
docker container stop <NAME>

3. Verify that the container is no longer running. The following command shows the status of the
container as Exited. The -a flag shows the status of all containers, not just those that are still running.
docker ps -a

4. Return to the web browser and refresh the page for the sample web app. It should fail with a Connec-
tion Refused error.

Remove the container and image


1. Although the container has stopped, it's still loaded and can be restarted. Remove it using the
following command. As before, replace <NAME> with the name of your container.
docker container rm <NAME>

2. Verify that the container has been removed with the following command. The command should no
longer list the container.
docker ps -a

3. List the images currently available on your computer.


docker image list

4. Remove the image from the registry.


docker image rm mcr.microsoft.com/dotnet/core/samples:aspnetapp

5. List the images again to verify that the mcr.microsoft.com/dotnet/core/samples image for the sample web app has disappeared.
docker image list

Demo: Create a container image by using Docker
In this demo, you will learn to perform the following actions:
●● Create a Dockerfile
●● Build and deploy the image
●● Test the web app

Prerequisites
●● A local installation of Docker
●● https://www.docker.com/products/docker-desktop
●● A local installation of Git
●● https://desktop.github.com/
✔️ Note: The sample web app used in this demo implements a web API for a hotel reservations web site.
The web API exposes HTTP POST and GET operations that create and retrieve customers' bookings. The
data is not persisted and queries return sample data.

Create a Dockerfile for the web app


1. In a command prompt on your local computer, create a folder for the project and then run the
following command to download the source code for the web app.

git clone https://github.com/MicrosoftDocs/mslearn-hotel-reservation-system.git

2. Move to the src folder.


cd mslearn-hotel-reservation-system/src

3. In this directory, create a new file named Dockerfile with no file extension and open it in a text editor.
echo "" > Dockerfile
notepad Dockerfile

4. Add the following commands to the Dockerfile. Each section is explained in the table below.
#1
FROM mcr.microsoft.com/dotnet/core/sdk:2.2
WORKDIR /src
COPY ["HotelReservationSystem/HotelReservationSystem.csproj", "HotelReservationSystem/"]
COPY ["HotelReservationSystemTypes/HotelReservationSystemTypes.csproj", "HotelReservationSys-
temTypes/"]
RUN dotnet restore "HotelReservationSystem/HotelReservationSystem.csproj"

#2
COPY . .
WORKDIR "/src/HotelReservationSystem"
RUN dotnet build "HotelReservationSystem.csproj" -c Release -o /app

#3
RUN dotnet publish "HotelReservationSystem.csproj" -c Release -o /app

#4
EXPOSE 80
WORKDIR /app
ENTRYPOINT ["dotnet", "HotelReservationSystem.dll"]

Ref #    Dockerfile command description
#1       These commands fetch an image containing the .NET Core Framework SDK. The project files for the web app (HotelReservationSystem.csproj) and the library project (HotelReservationSystemTypes.csproj) are copied to the /src folder in the container. The dotnet restore command downloads the dependencies required by these projects from NuGet.
#2       These commands copy the source code for the web app to the container and then run the dotnet build command to build the app. The resulting DLLs are written to the /app folder in the container.

#3       The dotnet publish command copies the executables for the web site to a new folder and removes any interim files. The files in this folder can then be deployed to a web site.
#4       The first command opens port 80 in the container. The second command moves to the /app folder containing the published version of the web app. The final command specifies that when the container runs it should execute the command dotnet HotelReservationSystem.dll. This library contains the compiled code for the web app.
5. Save the file and close your text editor.

Build and deploy the image using the Dockerfile


1. At the command prompt, run the following command to build the image for the sample app using
the Dockerfile and store it locally. Don't forget the . at the end of the command.
docker build -t reservationsystem .

✔️ Note: A warning about file and directory permissions will be displayed when the process com-
pletes. You can ignore these warnings for the purposes of this exercise.
2. Run the following command to verify that the image has been created and stored in the local registry.
docker image list

The image will have the name reservationsystem. You'll also see the base image, mcr.microsoft.com/dotnet/core/sdk. This image contains the .NET Core SDK and was downloaded when the reservationsystem image was built using the Dockerfile.

Test the web app


1. Run a container using the reservationsystem image using the following command. Docker will
respond with a lengthy string of hex digits – the container runs in the background without any UI. Port
80 in the container is mapped to port 8080 on the host machine. The container is named reserva-
tions.
docker run -p 8080:80 -d --name reservations reservationsystem

2. Start a web browser and navigate to http://localhost:8080/api/reservations/1. You should see a JSON
document containing the data for reservation number 1 returned by the web app. You can replace the
“1” with any reservation number, and you'll see the corresponding reservation details.
3. Examine the status of the container using the following command.
docker ps -a

Verify that the status of the container is Up.



CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS         PORTS                  NAMES
07b0d1de4db7   reservationsystem   "dotnet HotelReserva…"   5 minutes ago   Up 5 minutes   0.0.0.0:8080->80/tcp   reservations

4. Stop the reservations container with the following command.


docker container stop reservations

5. Delete the reservations container from the local registry.


docker rm reservations

6. Delete the images from the local registry.


docker image rm <image_name>

Publish a container image to Azure Container Registry
Azure Container Registry overview
Azure Container Registry is a managed, private Docker registry service based on the open-source Docker
Registry 2.0. Create and maintain Azure container registries to store and manage your private Docker
container images.
Use Azure container registries with your existing container development and deployment pipelines, or
use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully auto-
mate builds with triggers such as source code commits and base image updates.

Use cases
Pull images from an Azure container registry to various deployment targets:
●● Scalable orchestration systems that manage containerized applications across clusters of hosts,
including Kubernetes, DC/OS, and Docker Swarm.
●● Azure services that support building and running applications at scale, including Azure Kubernetes
Service (AKS), App Service, Batch, Service Fabric, and others.
Developers can also push to a container registry as part of a container development workflow. For
example, target a container registry from a continuous integration and delivery tool such as Azure
Pipelines or Jenkins.

Key features
●● Registry SKUs - Create one or more container registries in your Azure subscription. Registries are
available in three SKUs: Basic, Standard, and Premium, each of which supports webhook integration,
registry authentication with Azure Active Directory, and delete functionality.
You control access to a container registry using an Azure identity, an Azure Active Directory-backed
service principal, or a provided admin account. Log in to the registry using the Azure CLI or the
standard docker login command.
●● Supported images and artifacts - Grouped in a repository, each image is a read-only snapshot of a
Docker-compatible container. Azure container registries can include both Windows and Linux images.
You control image names for all your container deployments. Use standard Docker commands to push
images into a repository, or pull an image from a repository. In addition to Docker container images,
Azure Container Registry stores related content formats such as Helm charts and images built to the
Open Container Initiative (OCI) Image Format Specification (https://github.com/opencontainers/image-spec/blob/master/spec.md).
●● Azure Container Registry Tasks - Use Azure Container Registry Tasks (ACR Tasks) to streamline
building, testing, pushing, and deploying images in Azure.

ACR Tasks
ACR Tasks is a suite of features within Azure Container Registry. It provides cloud-based container image
building for platforms including Linux, Windows, and ARM, and can automate OS and framework patch-
ing for your Docker containers. ACR Tasks not only extends your “inner-loop” development cycle to the


cloud with on-demand container image builds, but also enables automated builds triggered by source
code updates, updates to a container's base image, or timers. For example, with base image update
triggers, you can automate your OS and application framework patching workflow, maintaining secure
environments while adhering to the principles of immutable containers.

Task scenarios
ACR Tasks supports several scenarios to build and maintain container images and other artifacts.
●● Quick task - Build and push a single container image to a container registry on-demand, in Azure,
without needing a local Docker Engine installation. Think docker build, docker push in the
cloud.
●● Automatically triggered tasks - Enable one or more triggers to build an image:
●● Trigger on source code update
●● Trigger on base image update
●● Trigger on a schedule
●● Multi-step task - Extend the single image build-and-push capability of ACR Tasks with multi-step,
multi-container-based workflows.
Each ACR Task has an associated source code context - the location of a set of source files used to build a
container image or other artifact. Example contexts include a Git repository or a local filesystem.
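As a sketch of the quick task and automatically triggered task scenarios using the Azure CLI (the registry name, image name, repository URL, and access token are placeholders):
az acr build --registry <acrName> --image sample/myapp:v1 .

az acr task create \
    --registry <acrName> \
    --name buildmyapp \
    --image sample/myapp:{{.Run.ID}} \
    --context https://github.com/<org>/<repo>.git \
    --file Dockerfile \
    --git-access-token <token>
The first command runs a one-off build and push in Azure; the second registers a task that rebuilds the image whenever the source repository is updated.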

Additional resources
●● For more information on Azure Container Registry SKUs visit:

●● https://docs.microsoft.com/en-us/azure/container-registry/container-registry-skus

Container image storage in Azure Container Registry
Every Basic, Standard, and Premium Azure container registry benefits from advanced Azure storage
features like encryption-at-rest for image data security and geo-redundancy for image data protection.
●● Encryption-at-rest: All container images in your registry are encrypted at rest. Azure automatically
encrypts an image before storing it, and decrypts it on-the-fly when you or your applications and
services pull the image.
●● Geo-redundant storage: Azure uses a geo-redundant storage scheme to guard against loss of your
container images. Azure Container Registry automatically replicates your container images to multiple
geographically distant data centers, preventing their loss in the event of a regional storage failure.
●● Geo-replication: For scenarios requiring even more high-availability assurance, consider using the
geo-replication feature of Premium registries. Geo-replication helps guard against losing access to
your registry in the event of a total regional failure, not just a storage failure. Geo-replication provides
other benefits, too, like network-close image storage for faster pushes and pulls in distributed
development or deployment scenarios.
●● Image limits: The following table describes the container image and storage limits in place for Azure
container registries.

Resource Limit
Repositories No limit
Images No limit
Layers No limit
Tags No limit
Storage 5 TB
Very high numbers of repositories and tags can impact the performance of your registry. Periodically
delete unused repositories, tags, and images as part of your registry maintenance routine. Deleted
registry resources like repositories, images, and tags cannot be recovered after deletion.
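A sketch of this kind of maintenance with the Azure CLI (registry, repository, and tag names are placeholders):
az acr repository show-tags --name <acrName> --repository <repositoryName> --output table
az acr repository untag --name <acrName> --image <repositoryName>:<tag>
az acr repository delete --name <acrName> --repository <repositoryName> --yes
Untagging removes only the tag and leaves the underlying manifest in place, while deleting the repository removes all of its images and tags permanently.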

Demo: Deploy an image to ACR by using Azure CLI
In this demo you'll learn how to perform the following actions:
●● Create an Azure Container Registry
●● Push an image to the ACR
●● Verify the image was uploaded
●● Run an image from the ACR

Prerequisites
●● A local installation of Docker

●● https://www.docker.com/products/docker-desktop
●● A local installation of Azure CLI

●● https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
✔️ Note: Because the Azure Cloud Shell doesn't include all required Docker components (the dockerd
daemon), you can't use the Cloud Shell for this demo.

Create an Azure Container Registry


1. Connect to Azure.
az login

2. Create a resource group for the registry, replace <myResourceGroup> and <myLocation> in the
command below with your own values.
az group create --name <myResourceGroup> --location <myLocation>

3. Create a basic container registry. The registry name must be unique within Azure, and contain 5-50
alphanumeric characters. In the following example, myContainerRegistry007 is used. Update this to a
unique value.

az acr create --resource-group <myResourceGroup> --name <myContainerRegistry007> --sku Basic

When the registry is created, the output is similar to the following:


{
  "adminUserEnabled": false,
  "creationDate": "2017-09-08T22:32:13.175925+00:00",
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry007",
  "location": "eastus",
  "loginServer": "myContainerRegistry007.azurecr.io",
  "name": "myContainerRegistry007",
  "provisioningState": "Succeeded",
  "resourceGroup": "myResourceGroup",
  "sku": {
    "name": "Basic",
    "tier": "Basic"
  },
  "status": null,
  "storageAccount": null,
  "tags": {},
  "type": "Microsoft.ContainerRegistry/registries"
}

❗️ Important: Throughout the rest of this demo <acrName> is a placeholder for the container registry
name you created. You'll need to use that name throughout the rest of the demo.

Push an image to the registry


1. Before pushing and pulling container images, you must log in to the ACR instance. To do so, use the
az acr login command.
az acr login --name <acrName>

2. Download an image; we'll use the hello-world image for the rest of the demo.
docker pull hello-world

3. Tag the image using the docker tag command. Before you can push an image to your registry, you
must tag it with the fully qualified name of your ACR login server. The login server name is in the
format <acrname>.azurecr.io (all lowercase).
docker tag hello-world <acrname>.azurecr.io/hello-world:v1

4. Finally, use docker push to push the image to the ACR instance. This example creates the hello-world repository, containing the hello-world:v1 image.
docker push <acrname>.azurecr.io/hello-world:v1

5. After pushing the image to your container registry, remove the hello-world:v1 image from your
local Docker environment.

docker rmi <acrname>.azurecr.io/hello-world:v1

✔️ Note: This docker rmi command does not remove the image from the hello-world repository
in your Azure container registry, only the local version.

Verify the image was uploaded


1. Use the az acr repository list command to list the repositories in your registry.
az acr repository list --name <acrName> --output table

Output:
Result
----------------
hello-world

2. Use the az acr repository show-tags command to list the tags on the hello-world repository.
az acr repository show-tags --name <acrName> --repository hello-world --output table

Output:
Result
--------
v1

Run image from registry


1. Pull and run the hello-world:v1 container image from your container registry by using the docker
run command.
docker run <acrname>.azurecr.io/hello-world:v1

Example output:
Unable to find image 'mycontainerregistry007.azurecr.io/hello-world:v1' locally
v1: Pulling from hello-world
Digest: sha256:662dd8e65ef7ccf13f417962c2f77567d3b132f12c95909de6c85ac3c326a345
Status: Downloaded newer image for mycontainerregistry007.azurecr.io/hello-world:v1

Hello from Docker!


This message shows that your installation appears to be working correctly.

[...]

Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group,
the container registry, and the container images stored there.
az group delete --name <myResourceGroup>

Create and run container images in Azure Container Instances
Azure Container Instances overview
Azure Container Instances is useful for scenarios that can operate in isolated containers, including simple
applications, task automation, and build jobs. Here are some of the benefits:
●● Fast startup: Launch containers in seconds.
●● Per second billing: Incur costs only while the container is running.
●● Hypervisor-level security: Isolate your application as completely as it would be in a VM.
●● Custom sizes: Specify exact values for CPU cores and memory.
●● Persistent storage: Mount Azure Files shares directly to a container to retrieve and persist state.
●● Linux and Windows: Schedule both Windows and Linux containers using the same API.
For scenarios where you need full container orchestration, including service discovery across multiple
containers, automatic scaling, and coordinated application upgrades, we recommend Azure Kubernetes
Service (AKS).

Container groups
The top-level resource in Azure Container Instances is the container group. A container group is a collec-
tion of containers that get scheduled on the same host machine. The containers in a container group
share a lifecycle, resources, local network, and storage volumes. It's similar in concept to a pod in Kuber-
netes.
The following diagram shows an example of a container group that includes multiple containers:

This example container group:


●● Is scheduled on a single host machine.
●● Is assigned a DNS name label.
●● Exposes a single public IP address, with one exposed port.
●● Consists of two containers. One container listens on port 80, while the other listens on port 5000.
●● Includes two Azure file shares as volume mounts, and each container mounts one of the shares locally.
✔️ Note: Multi-container groups currently support only Linux containers. For Windows containers, Azure
Container Instances only supports deployment of a single instance.

Deployment
There are two common ways to deploy a multi-container group: use a Resource Manager template or a
YAML file. A Resource Manager template is recommended when you need to deploy additional Azure
service resources (for example, an Azure Files share) when you deploy the container instances. Due to the
YAML format's more concise nature, a YAML file is recommended when your deployment includes only
container instances.

Resource allocation
Azure Container Instances allocates resources such as CPUs, memory, and optionally GPUs (preview) to a
container group by adding the resource requests of the instances in the group. Taking CPU resources as

an example, if you create a container group with two instances, each requesting 1 CPU, then the contain-
er group is allocated 2 CPUs.

Networking
Container groups share an IP address and a port namespace on that IP address. To enable external clients
to reach a container within the group, you must expose the port on the IP address and from the contain-
er. Because containers within the group share a port namespace, port mapping isn't supported. Contain-
ers within a group can reach each other via localhost on the ports that they have exposed, even if those
ports aren't exposed externally on the group's IP address.

Storage
You can specify external volumes to mount within a container group. You can map those volumes into
specific paths within the individual containers in a group.

Common scenarios
Multi-container groups are useful in cases where you want to divide a single functional task into a small
number of container images. These images can then be delivered by different teams and have separate
resource requirements.
Example usage could include:
●● A container serving a web application and a container pulling the latest content from source control.
●● An application container and a logging container. The logging container collects the logs and metrics
output by the main application and writes them to long-term storage.
●● An application container and a monitoring container. The monitoring container periodically makes a
request to the application to ensure that it's running and responding correctly, and raises an alert if
it's not.
●● A front-end container and a back-end container. The front end might serve a web application, with
the back end running a service to retrieve data.

Demo: Run Azure Container Instances by using the Cloud Shell
In this demo you'll learn how to perform the following actions:
●● Create a resource group for the container
●● Create a container
●● Verify the container is running

Prerequisites
●● An Azure subscription, if you don't have an Azure subscription, create a free account before you
begin.

●● https://azure.microsoft.com/free/

Create the resource group


1. Sign in to the Azure portal (https://portal.azure.com) with your Azure subscription.
2. Open the Azure Cloud Shell from the Azure portal using the Cloud Shell icon, and select Bash as the
shell option.

3. Create a new resource group with the name az204-aci-rg so that it will be easier to clean up these
resources when you are finished with the module. Replace <myLocation> with a region near you.
az group create --name az204-aci-rg --location <myLocation>

Create a container
You create a container by providing a name, a Docker image, and an Azure resource group to the az
container create command. You will expose the container to the Internet by specifying a DNS name
label.
1. Create a DNS name to expose your container to the Internet. Your DNS name must be unique, run this
command from Cloud Shell to create a Bash variable that holds a unique name.
DNS_NAME_LABEL=aci-demo-$RANDOM

2. Run the following az container create command to start a container instance. Be sure to
replace the <myLocation> with the region you specified earlier. It will take a few minutes for the
operation to complete.
az container create \
--resource-group az204-aci-rg \
--name mycontainer \
--image microsoft/aci-helloworld \
--ports 80 \
--dns-name-label $DNS_NAME_LABEL \
--location <myLocation>

In the command above, $DNS_NAME_LABEL specifies your DNS name. The image name, microsoft/
aci-helloworld, refers to a Docker image hosted on Docker Hub that runs a basic Node.js web
application.

Verify the container is running


1. When the az container create command completes, run az container show to check its
status.
az container show \
--resource-group az204-aci-rg \
--name mycontainer \
--query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" \


--out table

You see your container's fully qualified domain name (FQDN) and its provisioning state. Here's an
example.
FQDN ProvisioningState
-------------------------------------- -------------------
aci-demo.eastus.azurecontainer.io Succeeded

✔️ Note: If your container is in the Creating state, wait a few moments and run the command again
until you see the Succeeded state.
2. From a browser, navigate to your container's FQDN to see it running. You may get a warning that the
site isn't safe.

Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group and the container instance created in this demo.
az group delete --name az204-aci-rg --no-wait --yes

Run containerized tasks with restart policies


The ease and speed of deploying containers in Azure Container Instances provides a compelling platform
for executing run-once tasks like build, test, and image rendering in a container instance.
With a configurable restart policy, you can specify that your containers are stopped when their processes
have completed. Because container instances are billed by the second, you're charged only for the
compute resources used while the container executing your task is running.
The examples below use the Azure CLI.

Container restart policy


When you create a container group in Azure Container Instances, you can specify one of three restart
policy settings.

Restart policy    Description
Always            Containers in the container group are always restarted. This is the default setting applied when no restart policy is specified at container creation.
Never             Containers in the container group are never restarted. The containers run at most once.
OnFailure         Containers in the container group are restarted only when the process executed in the container fails (when it terminates with a nonzero exit code). The containers are run at least once.

Specify a restart policy


Specify the --restart-policy parameter when you call az container create.
az container create \
--resource-group myResourceGroup \
--name mycontainer \
--image mycontainerimage \
--restart-policy OnFailure

Run to completion
Azure Container Instances starts the container, and then stops it when its application, or script, exits.
When Azure Container Instances stops a container whose restart policy is Never or OnFailure, the con-
tainer's status is set to Terminated.
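A sketch of checking the final state of a run-once task with the Azure CLI (the resource group and container group names are only examples):
az container show \
    --resource-group myResourceGroup \
    --name mycontainer \
    --query "containers[0].instanceView.currentState.state" \
    --output tsv
For a container group created with --restart-policy Never or OnFailure whose process has exited, the command returns Terminated.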

Set environment variables in container instances


Setting environment variables in your container instances allows you to provide dynamic configuration of
the application or script run by the container. This is similar to the --env command-line argument to
docker run.
If you need to pass secrets as environment variables, Azure Container Instances supports secure values
for both Windows and Linux containers.
In the example below, two variables are passed to the container when it is created. The example assumes you are running the CLI in a Bash shell or Cloud Shell; if you use the Windows Command Prompt, specify the variables with double quotes, such as --environment-variables "NumWords"="5" "MinLength"="8".
az container create \
--resource-group myResourceGroup \
--name mycontainer2 \
--image mcr.microsoft.com/azuredocs/aci-wordcount:latest \
--restart-policy OnFailure \
--environment-variables 'NumWords'='5' 'MinLength'='8'

Secure values
Objects with secure values are intended to hold sensitive information like passwords or keys for your
application. Using secure values for environment variables is both safer and more flexible than including
it in your container's image.
Environment variables with secure values aren't visible in your container's properties. Their values can be
accessed only from within the container. For example, container properties viewed in the Azure portal or
Azure CLI display only a secure variable's name, not its value.
Set a secure environment variable by specifying the secureValue property instead of the regular
value for the variable's type. The two variables defined in the following YAML demonstrate the two
variable types.

YAML deployment
Create a secure-env.yaml file with the following snippet.
apiVersion: 2018-10-01
location: eastus
name: securetest
properties:
  containers:
  - name: mycontainer
    properties:
      environmentVariables:
      - name: 'NOTSECRET'
        value: 'my-exposed-value'
      - name: 'SECRET'
        secureValue: 'my-secret-value'
      image: nginx
      ports: []
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  osType: Linux
  restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups

Run the following command to deploy the container group with YAML:
az container create --resource-group myResourceGroup --file secure-env.yaml
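The same pair of variables can also be supplied directly from the Azure CLI instead of a YAML file; this is a sketch using the --secure-environment-variables parameter (resource group, container name, and image are examples):
az container create \
    --resource-group myResourceGroup \
    --name securetest \
    --image nginx \
    --environment-variables 'NOTSECRET'='my-exposed-value' \
    --secure-environment-variables 'SECRET'='my-secret-value'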

Mount an Azure file share in Azure Container Instances
By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost.
To persist state beyond the lifetime of the container, you must mount a volume from an external store. As
shown in this article, Azure Container Instances can mount an Azure file share created with Azure Files.
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard
Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides
file-sharing features similar to using an Azure file share with Azure virtual machines.
✔️ Note: Mounting an Azure Files share is currently restricted to Linux containers. Mounting an Azure
Files share to a container instance is similar to a Docker bind mount. Be aware that if you mount a share
into a container directory in which files or directories exist, these files or directories are obscured by the
mount and are not accessible while the container runs.

Azure file share


To mount an Azure file share as a volume in Azure Container Instances, you need three values: the
storage account name, the share name, and the storage access key. In the example below these are
stored as variables and passed to the az container create command.

az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name hellofiles \
--image mcr.microsoft.com/azuredocs/aci-hellofiles \
--dns-name-label aci-demo \
--ports 80 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /aci/logs/
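The example above assumes the variables are already populated. A minimal sketch of how they might be set before running the command (the names and location are placeholders):
ACI_PERS_RESOURCE_GROUP=myResourceGroup
ACI_PERS_STORAGE_ACCOUNT_NAME=mystorageaccount$RANDOM
ACI_PERS_SHARE_NAME=acishare

az storage account create --resource-group $ACI_PERS_RESOURCE_GROUP --name $ACI_PERS_STORAGE_ACCOUNT_NAME --location eastus --sku Standard_LRS
az storage share create --name $ACI_PERS_SHARE_NAME --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME
STORAGE_KEY=$(az storage account keys list --resource-group $ACI_PERS_RESOURCE_GROUP --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" --output tsv)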

Deploy container and mount volume - YAML


You can also deploy a container group and mount a volume in a container with the Azure CLI and a YAML
template. Deploying by YAML template is the preferred method when deploying container groups
consisting of multiple containers.
The following YAML template defines a container group with one container created with the aci-hel-
lofiles image. The container mounts the Azure file share acishare created previously as a volume. An
example YAML file is shown below.
apiVersion: '2018-10-01'
location: eastus
name: file-share-demo
properties:
  containers:
  - name: hellofiles
    properties:
      environmentVariables: []
      image: mcr.microsoft.com/azuredocs/aci-hellofiles
      ports:
      - port: 80
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - mountPath: /aci/logs/
        name: filesharevolume
  osType: Linux
  restartPolicy: Always
  ipAddress:
    type: Public
    ports:
    - port: 80
    dnsNameLabel: aci-demo
  volumes:
  - name: filesharevolume
    azureFile:
      sharename: acishare
      storageAccountName: <Storage account name>
      storageAccountKey: <Storage account key>
tags: {}
type: Microsoft.ContainerInstance/containerGroups

Mount multiple volumes


To mount multiple volumes in a container instance, you must deploy using an Azure Resource Manager
template or a YAML file. To use a template or YAML file, provide the share details and define the volumes
by populating the volumes array in the properties section of the template.
For example, if you created two Azure Files shares named share1 and share2 in storage account myStorageAccount, the volumes array in a Resource Manager template would appear similar to the following:
"volumes": [{
"name": "myvolume1",
"azureFile": {
"shareName": "share1",
"storageAccountName": "myStorageAccount",
"storageAccountKey": "<storage-account-key>"
}
},
{
"name": "myvolume2",
"azureFile": {
"shareName": "share2",
"storageAccountName": "myStorageAccount",
"storageAccountKey": "<storage-account-key>"
}
}]

Lab and review questions


Lab: Deploying compute workloads by using images and containers

Lab scenario
Your organization is seeking a way to automatically create virtual machines (VMs) to run tasks and
immediately terminate. You're tasked with evaluating multiple compute services in Microsoft Azure and
determining which service can help you automatically create VMs and install custom software on those
machines. As a proof of concept, you have decided to try creating VMs from built-in images and contain-
er images so that you can compare the two solutions. To keep your proof of concept simple, you'll create
a special “IP check” application written in .NET that you'll automatically deploy to your machines. Your
proof of concept will evaluate the Azure Container Instances and Azure Virtual Machines services.

Objectives
After you complete this lab, you will be able to:
●● Create a VM by using the Azure Command-Line Interface (CLI).
●● Deploy a Docker container image to Azure Container Registry.
●● Deploy a container from a container image in Container Registry by using Container Instances.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 5 review questions


Review Question 1
An availability set is a logical grouping of VMs within a datacenter that allows Azure to understand how
your application is built to provide for redundancy and availability. Is the following statement about
availability sets True or False?
It is recommended that a minimum of three or more VMs are created within an availability set to provide
for a highly available application and to meet the 99.95% Azure SLA.
†† True
†† False

Review Question 2
Azure Resource Manager is the deployment and management service for Azure. It provides a management
layer that enables you to create, update, and delete resources in your Azure subscription. Azure provides
four levels of management scope.
Which one of the below is not a valid management scope?
†† Access control
†† Resource group
†† Management group
†† Subscription

Review Question 3
Docker is a containerization platform used to develop, ship, and run containers. Which of the following uses
the Docker REST API to send instructions to either a local or remote server?
†† Docker Hub
†† Docker objects
†† Docker client
†† Docker server

Review Question 4
What does the following Docker command do?
docker rmi temp-ubuntu:version-1.0
†† Removes the container from the registry
†† Tags the image with a version
†† Removes the image from the registry
†† Lists containers using the image

Review Question 5
The top-level resource in Azure Container Instances is the container group. A container group is a collection
of containers that get scheduled on the same host machine. Your solution requires using a multi-container
group, which host OS should you use?
†† Windows
†† Linux
†† MacOS

Answers
Review Question 1
An availability set is a logical grouping of VMs within a datacenter that allows Azure to understand how
your application is built to provide for redundancy and availability. Is the following statement about
availability sets True or False?
It is recommended that a minimum of three or more VMs are created within an availability set to provide
for a highly available application and to meet the 99.95% Azure SLA.
†† True
■■ False
Explanation
Two or more VMs are necessary to meet the 99.95% Azure SLA.
Review Question 2
Azure Resource Manager is the deployment and management service for Azure. It provides a manage-
ment layer that enables you to create, update, and delete resources in your Azure subscription. Azure
provides four levels of management scope.
Which one of the below is not a valid management scope?
■■ Access control
†† Resource group
†† Management group
†† Subscription
Explanation
The four levels of management scope are: Management group, subscriptions, resource groups, and resources.
Review Question 3
Docker is a containerization platform used to develop, ship, and run containers. Which of the following
uses the Docker REST API to send instructions to either a local or remote server?
†† Docker Hub
†† Docker objects
■■ Docker client
†† Docker server
Explanation
The Docker client is a command-line application named docker that provides us with a command line
interface (CLI) to interact with a Docker server. The docker command uses the Docker REST API to send
instructions to either a local or remote server and functions as the primary interface we use to manage our
containers.

Review Question 4
What does the following Docker command do?
docker rmi temp-ubuntu:version-1.0
†† Removes the container from the registry
†† Tags the image with a version
■■ Removes the image from the registry
†† Lists containers using the image
Explanation
You can remove an image from the local docker registry with the "docker rmi" command. Specify the name
or ID of the image to remove.
Review Question 5
The top-level resource in Azure Container Instances is the container group. A container group is a
collection of containers that get scheduled on the same host machine. Your solution requires using a
multi-container group, which host OS should you use?
†† Windows
■■ Linux
†† MacOS
Explanation
Multi-container groups currently support only Linux containers. For Windows containers, Azure Container
Instances only supports deployment of a single instance.
Module 6 Implement user authentication and authorization

Microsoft Identity Platform v2.0


Microsoft identity platform v2.0 overview
Microsoft identity platform is an evolution of the Azure Active Directory (Azure AD) developer platform. It
allows developers to build applications that sign in all Microsoft identities and get tokens to call Micro-
soft APIs, such as Microsoft Graph, or APIs that developers have built. The Microsoft identity platform
consists of:
●● OAuth 2.0 and OpenID Connect standard-compliant authentication service that enables develop-
ers to authenticate any Microsoft identity, including:
●● Work or school accounts (provisioned through Azure AD)
●● Personal Microsoft accounts (such as Skype, Xbox, and Outlook.com)
●● Social or local accounts (via Azure AD B2C)
●● Open-source libraries: Microsoft Authentication Libraries (MSAL) and support for other stand-
ards-compliant libraries
●● Application management portal: A registration and configuration experience built in the Azure
portal, along with all your other Azure management capabilities.
●● Application configuration API and PowerShell: which allows programmatic configuration of your
applications through REST API (Microsoft Graph and Azure Active Directory Graph 1.6) and Power-
Shell, so you can automate your DevOps tasks.
For developers, Microsoft identity platform offers seamless integration into innovations in the identity
and security space, such as passwordless authentication, step-up authentication, and Conditional Access.
You don’t need to implement such functionality yourself: applications integrated with the Microsoft
identity platform natively take advantage of such innovations.
With Microsoft identity platform, you can write code once and reach any user. You can build an app once
and have it work across many platforms, or build an app that functions as a client as well as a resource
application (API).

Application and service principal objects in Azure Active Directory
Sometimes, the meaning of the term “application” can be misunderstood when used in the context of
Azure Active Directory (Azure AD). An application that has been integrated with Azure AD has implica-
tions that go beyond the software aspect. "Application" is frequently used as a conceptual term, referring
to not only the application software, but also its Azure AD registration and role in authentication/
authorization “conversations” at runtime.
By definition, an application can function in these roles:
●● Client role (consuming a resource)
●● Resource server role (exposing APIs to clients)
●● Both client role and resource server role
An OAuth 2.0 Authorization Grant flow defines the conversation protocol, which allows the client/
resource to access/protect a resource's data, respectively.

Application registration
When you register an Azure AD application in the Azure portal, two objects are created in your Azure AD
tenant:
●● An application object, and
●● A service principal object

Application object
An Azure AD application is defined by its one and only application object, which resides in the Azure AD
tenant where the application was registered, known as the application's “home” tenant. The Microsoft
Graph Application entity defines the schema for an application object's properties.

Service principal object


To access resources that are secured by an Azure AD tenant, the entity that requires access must be
represented by a security principal. This is true for both users (user principal) and applications (service
principal).
The security principal defines the access policy and permissions for the user/application in the Azure AD
tenant. This enables core features such as authentication of the user/application during sign-in, and
authorization during resource access.
When an application is given permission to access resources in a tenant (upon registration or consent), a
service principal object is created. The Microsoft Graph servicePrincipal entity defines the schema
for a service principal object's properties.

Application and service principal relationship


Consider the application object as the global representation of your application for use across all tenants,
and the service principal as the local representation for use in a specific tenant.

The application object serves as the template from which common and default properties are derived for
use in creating corresponding service principal objects. An application object therefore has a 1:1 relationship with the software application, and a 1:many relationship with its corresponding service principal object(s).
A service principal must be created in each tenant where the application is used, enabling it to establish
an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant applica-
tion has only one service principal (in its home tenant), created and consented for use during application
registration. A multi-tenant Web application/API also has a service principal created in each tenant where
a user from that tenant has consented to its use.

Demo: Register an app with the Microsoft Identity platform
In this demo you'll learn how to perform the following actions:
●● Register an application with the Microsoft identity platform

Prerequisites
This demo is performed in the Azure Portal.

Login to the Azure Portal


1. Login to the portal: https://portal.azure.com

Register a new application


1. Search for and select Azure Active Directory. On the Active Directory page, select App registra-
tions and then select New registration.
2. When the Register an application page appears, enter your application's registration information:

Field                      Value
Name                       az204appregdemo
Supported account types    Select Accounts in this organizational directory
Redirect URI (optional)    Select Public client/native (mobile & desktop) and enter http://localhost in the box to the right.
Below are more details on the Supported account types.

Account type                                                                 Scope
Accounts in this organizational directory only                               This option maps to Azure AD only single-tenant (only people in your Azure AD directory).
Accounts in any organizational directory                                     This option maps to an Azure AD only multi-tenant. Select this option if you would like to target anyone from any Azure AD directory.
Accounts in any organizational directory and personal Microsoft accounts     This option maps to Azure AD multi-tenant and personal Microsoft accounts.
3. Select Register.

Azure AD assigns a unique application (client) ID to your app, and you're taken to your application's
Overview page.
✔️ Note: Leave the app registration in place, we'll be using it in future demos in this module.

Authentication using the Microsoft Authentication Library
Microsoft Authentication Library (MSAL) overview
Microsoft Authentication Library (MSAL) enables developers to acquire tokens from the Microsoft identity
platform endpoint in order to access secured Web APIs. These Web APIs can be the Microsoft Graph,
other Microsoft APIs, third-party Web APIs, or your own Web API. MSAL is available for .NET, JavaScript,
Android, and iOS, which support many different application architectures and platforms.
MSAL gives you many ways to get tokens, with a consistent API for a number of platforms. Using MSAL
provides the following benefits:
●● No need to directly use the OAuth libraries or code against the protocol in your application.
●● Acquires tokens on behalf of a user or on behalf of an application (when applicable to the platform).
●● Maintains a token cache and refreshes tokens for you when they are close to expiring. You don't need to
handle token expiration on your own.
●● Helps you specify which audience you want your application to sign in (your org, several orgs, work and school accounts, Microsoft personal accounts, social identities with Azure AD B2C, and users in sovereign and national clouds).
●● Helps you set up your application from configuration files.
●● Helps you troubleshoot your app by exposing actionable exceptions, logging, and telemetry.

Application types and scenarios


Using MSAL, a token can be acquired from a number of application types: web applications, web APIs,
single-page apps (JavaScript), mobile and native applications, and daemons and server-side applications.
MSAL currently supports the platforms and frameworks listed in the table below.

Library                   Supported platforms and frameworks
MSAL.NET                  .NET Framework, .NET Core, Xamarin Android, Xamarin iOS, Universal Windows Platform. The NuGet package is Microsoft.Identity.Client.
MSAL.js                   JavaScript/TypeScript frameworks such as AngularJS, Ember.js, or Durandal.js
MSAL for Android          Android
MSAL for iOS and macOS    iOS and macOS
MSAL Java (preview)       Java
MSAL Python (preview)     Python

Authentication flows
Below are some of the different authentication flows provided by Microsoft Authentication Library
(MSAL). These flows can be used in a variety of different application scenarios.

Flow                  Description
Authorization code    Native and web apps securely obtain tokens in the name of the user
Client credentials    Service applications run without user interaction
On-behalf-of          The application calls a service/web API, which in turn calls Microsoft Graph
Implicit              Used in browser-based applications
Device code           Enables sign-in to a device by using another device that has a browser
Integrated Windows    Windows computers silently acquire an access token when they are domain joined
Interactive           Mobile and desktop applications call Microsoft Graph in the name of a user
Username/password     The application signs in a user by using their username and password

Public client and confidential client applications


Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential
clients.
1. Confidential client applications are apps that run on servers (web apps, Web API apps, or even
service/daemon apps). A web app is the most common confidential client. The client ID is exposed
through the web browser, but the secret is passed only in the back channel and never directly ex-
posed. Uses the MSAL ConfidentialClientApplication class.
2. Public client applications are apps that run on devices or desktop computers or in a web browser.
They're not trusted to safely keep application secrets, so they only access Web APIs on behalf of the
user. (They support only public client flows.) Public clients can't hold configuration-time secrets, so
they don't have client secrets. Uses the MSAL PublicClientApplication class.

Differences between ADAL and MSAL


Active Directory Authentication Library (ADAL) integrates with the Azure AD for developers (v1.0) end-
point, where MSAL integrates with the Microsoft identity platform (v2.0) endpoint. The v1.0 endpoint
supports work accounts, but not personal accounts. The v2.0 endpoint is the unification of Microsoft
personal accounts and work accounts into a single authentication system. Additionally, with MSAL you
can also get authentications for Azure AD B2C.

Initialize client applications by using MSAL.NET


With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application
builders: PublicClientApplicationBuilder and ConfidentialClientApplicationBuilder.
They offer a powerful mechanism to configure the application either from the code, or from a configura-
tion file, or even by mixing both approaches.
Before initializing an application, you first need to register it so that your app can be integrated with the
Microsoft identity platform. After registration, you may need the following information (which can be
found in the Azure portal):
●● The client ID (a string representing a GUID)

●● The identity provider URL (named the instance) and the sign-in audience for your application. These
two parameters are collectively known as the authority.
●● The tenant ID if you are writing a line of business application solely for your organization (also named
single-tenant application).
●● The application secret (client secret string) or certificate (of type X509Certificate2) if it's a confidential
client app.
●● For web apps, and sometimes for public client apps (in particular when your app needs to use a broker), you'll also have set the redirectUri where the identity provider will contact your application back with the security tokens.

Initializing a public client application from code


The following code instantiates a public client application, signing-in users in the Microsoft Azure public
cloud, with their work and school accounts, or their personal Microsoft accounts.
var clientApp = PublicClientApplicationBuilder.Create(client_id)
.Build();

In the same way, the following code instantiates a confidential application (a Web app located at
https://myapp.azurewebsites.net) handling tokens from users in the Microsoft Azure public
cloud, with their work and school accounts, or their personal Microsoft accounts. The application is
identified with the identity provider by sharing a client secret:
string redirectUri = "https://myapp.azurewebsites.net";
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create(clientId)
.WithClientSecret(clientSecret)
.WithRedirectUri(redirectUri )
.Build();

Builder modifiers
In the code snippets using application builders, a number of .With methods can be applied as modifiers
(for example, .WithAuthority and .WithRedirectUri).

.WithAuthority modifier
The .WithAuthority modifier sets the application default authority to an Azure AD authority, with the possibility of choosing the Azure cloud, the audience, the tenant (tenant ID or domain name), or providing the authority URI directly.
var clientApp = PublicClientApplicationBuilder.Create(client_id)
.WithAuthority(AzureCloudInstance.AzurePublic, tenant_id)
.Build();

.WithRedirectUri modifier
The .WithRedirectUri modifier overrides the default redirect URI. In the case of public client applica-
tions, this will be useful for scenarios involving the broker.

var clientApp = PublicClientApplicationBuilder.Create(client_id)
    .WithAuthority(AzureCloudInstance.AzurePublic, tenant_id)
    .WithRedirectUri("http://localhost")
    .Build();

Modifiers common to public and confidential client applications

The table below lists some of the modifiers you can set on a public or confidential client application builder.

| Modifier | Description |
|----------|-------------|
| .WithAuthority() (7 overrides) | Sets the application default authority to an Azure AD authority, with the possibility of choosing the Azure cloud, the audience, the tenant (tenant ID or domain name), or providing the authority URI directly. |
| .WithTenantId(string tenantId) | Overrides the tenant ID, or the tenant description. |
| .WithClientId(string) | Overrides the client ID. |
| .WithRedirectUri(string redirectUri) | Overrides the default redirect URI. In the case of public client applications, this is useful for scenarios involving the broker. |
| .WithComponent(string) | Sets the name of the library using MSAL.NET (for telemetry reasons). |
| .WithDebugLoggingCallback() | If called, the application will call Debug.Write, simply enabling debugging traces. |
| .WithLogging() | If called, the application will call a callback with debugging traces. |
| .WithTelemetry(TelemetryCallback telemetryCallback) | Sets the delegate used to send telemetry. |

Modifiers specific to confidential client applications


The modifiers you can set on a confidential client application builder are:

| Modifier | Description |
|----------|-------------|
| .WithCertificate(X509Certificate2 certificate) | Sets the certificate identifying the application with Azure AD. |
| .WithClientSecret(string clientSecret) | Sets the client secret (app password) identifying the application with Azure AD. |
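For illustration only (this snippet isn't part of the course demos), the following hedged sketch combines several of the modifiers above on a confidential client builder; the client ID, tenant ID, certificate file name, password, and redirect URI are all placeholder values.
using System.Security.Cryptography.X509Certificates;
using Microsoft.Identity.Client;

// Placeholder values - in a real app these come from your app registration and configuration.
string clientId = "CLIENT_ID";
string tenantId = "TENANT_ID";

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create(clientId)
    .WithAuthority(AzureCloudInstance.AzurePublic, tenantId)
    // Identify the app with a certificate instead of a client secret.
    .WithCertificate(new X509Certificate2("app-cert.pfx", "cert-password"))
    .WithRedirectUri("https://myapp.azurewebsites.net")
    .Build();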

Demo: Interactive authentication by using MSAL.NET
In this demo you'll learn how to perform the following actions:
●● Use the PublicClientApplicationBuilder class in MSAL.NET
●● Acquire a token interactively in a console application

Prerequisites
This demo is performed in Visual Studio Code. We'll be using information from the app we registered in
the Register an app with the Microsoft Identity platform demo.

Login to the Azure Portal


1. Login to the portal: https://portal.azure.com
2. Navigate to the app registered in the previous demo.
●● Search for App Registrations in the top search bar of the portal.
●● Select the az204appregdemo app you registered earlier in this module.
●● Make sure you select Overview in the left navigation.

Set up the console application


1. Open a PowerShell terminal.
2. Create a folder for the project and change into the folder.
md az204-authdemo
cd az204-authdemo

3. Create the .NET console app.


dotnet new console

4. Open Visual Studio Code and open the az204-authdemo folder.


code .

Build the console app

Add packages and using statements


1. Add the Microsoft.Identity.Client package to the project in a terminal in VS Code.
dotnet add package Microsoft.Identity.Client

2. Add using statements to include Microsoft.Identity.Client and to enable async operations.


using System.Threading.Tasks;
using Microsoft.Identity.Client;

3. Change the Main method to enable async.


public static async Task Main(string[] args)

Add code for the interactive authentication


1. We'll need two variables to hold the Application (client) ID and the Directory (tenant) ID. You can copy those values from the portal. Add the code below and replace the string values with the appropriate values from the portal.
private const string _clientId = "APPLICATION_CLIENT_ID";
private const string _tenantId = "DIRECTORY_TENANT_ID";

2. Use the PublicClientApplicationBuilder class to build out the authorization context.


var app = PublicClientApplicationBuilder
.Create(_clientId)
.WithAuthority(AzureCloudInstance.AzurePublic, _tenantId)
.WithRedirectUri("http://localhost")
.Build();

| Code | Description |
|------|-------------|
| .Create | Creates a PublicClientApplicationBuilder from a clientID. |
| .WithAuthority | Adds a known authority to the application. In the code we're specifying the public cloud, and using the tenant for the app we registered. |

Acquire a token
When you registered the az204appregdemo app, it automatically generated an API permission, user.read, for Microsoft Graph. We'll use that permission to acquire a token.
1. Set the permission scope for the token request. Add the following code below the PublicClientApplicationBuilder.
string[] scopes = { "user.read" };

2. Request the token and write the result out to the console.
AuthenticationResult result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();

Console.WriteLine($"Token:\t{result.AccessToken}");

Run the application


1. In the VS Code terminal run dotnet build to check for errors, then dotnet run to run the app.
2. The app will open the default browser prompting you to select the account you want to authenticate
with. If there are multiple accounts listed select the one associated with the tenant used in the app.
3. If this is the first time you've authenticated to the registered app you will receive a Permissions
requested notification asking you to approve the app to read data associated with your account.
Select Accept.

4. You should see the following results in the console.


Token: eyJ0eXAiOiJKV1QiLCJub25jZSI6IlVhU.....

✔️ Note: We'll be reusing this project in the Retrieving profile information by using the Microsoft Graph
SDK demo in the next lesson.

Using Microsoft Graph


Microsoft Graph overview

Microsoft Graph is fully replacing Azure Active Directory (Azure AD) Graph. For most production apps,
Microsoft Graph can already fully support Azure AD scenarios. In addition, Microsoft Graph supports
many new Azure AD datasets and features that are not available in Azure AD Graph.
●● The Microsoft Graph API offers a single endpoint, https://graph.microsoft.com, to provide
access to rich, people-centric data and insights exposed as resources of Microsoft 365 services. You
can use REST APIs or SDKs to access the endpoint and build apps that support scenarios spanning
across productivity, collaboration, education, security, identity, access, device management, and much
more.
●● Microsoft Graph connectors (preview) work in the incoming direction, delivering data external to the
Microsoft cloud into Microsoft Graph services and applications, to enhance Microsoft 365 experiences
such as Microsoft Search.
●● Microsoft Graph data connect provides a set of tools to streamline secure and scalable delivery of
Microsoft Graph data to popular Azure data stores. This cached data serves as data sources for Azure
development tools that you can use to build intelligent applications.

Microsoft Graph code examples


Microsoft Graph is a RESTful web API that enables you to access Microsoft Cloud service resources. After
you register your app and get authentication tokens for a user or service, you can make requests to the
Microsoft Graph API.

Microsoft Graph REST API


●● One base URL for all queries:

●● Structure

●● https://graph.microsoft.com/{version}/{resource}?{query-parameters}
●● Basic API

●● https://graph.microsoft.com/v1.0/
●● Beta API

●● https://graph.microsoft.com/beta/
●● Relative resource URLs (not all inclusive):

●● /me

●● /me/messages
●● /me/drive
●● /users
●● /groups

HTTP methods for calling the REST API


Microsoft Graph uses the HTTP method on your request to determine what your request is doing. The
API supports the following methods.

| Method | Description |
|--------|-------------|
| GET | Read data from a resource. |
| POST | Create a new resource, or perform an action. |
| PATCH | Update a resource with new values. |
| PUT | Replace a resource with a new one. |
| DELETE | Remove a resource. |
●● For the GET and DELETE methods, no request body is required.
●● The POST, PATCH, and PUT methods require a request body, usually specified in JSON format, that
contains additional information, such as the values for properties of the resource.
var httpClient = new HttpClient();

httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

string url = "https://graph.microsoft.com/v1.0/me";

string response = await httpClient.GetStringAsync(url);
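The snippet above shows a GET request, which needs no body. As a hedged sketch of the request-body pattern used by POST, PATCH, and PUT, the following example updates a user property through a PATCH request; the property value is illustrative, the token is a placeholder, and an appropriate Graph permission (such as User.ReadWrite) is assumed.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

string token = "<access_token>"; // placeholder; acquire this through MSAL as shown earlier

var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

// PATCH updates a resource with new values; the JSON body carries the properties to change.
string url = "https://graph.microsoft.com/v1.0/me";
var body = new StringContent("{ \"jobTitle\": \"Developer\" }", Encoding.UTF8, "application/json");

var request = new HttpRequestMessage(new HttpMethod("PATCH"), url) { Content = body };
HttpResponseMessage response = await httpClient.SendAsync(request);
response.EnsureSuccessStatusCode(); // a successful user update returns 204 No Content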



Microsoft Graph SDK


The Microsoft Graph client is designed to make it simple to make calls to Microsoft Graph.
●● SDK to interact with the Microsoft Graph using easy-to-parse classes and properties
●● Available on NuGet

●● Microsoft.Graph
●● Microsoft.Graph.Auth
The following code example shows how to create an instance of a Microsoft Graph client with an authen-
tication provider.
// Build a client application.
IPublicClientApplication publicClientApplication = PublicClientApplicationBuilder
.Create("INSERT-CLIENT-APP-ID")
.Build();
// Create an authentication provider by passing in a client application and graph scopes.
DeviceCodeProvider authProvider = new DeviceCodeProvider(publicClientApplication, graphScopes);
// Create a new instance of GraphServiceClient with the authentication provider.
GraphServiceClient graphClient = new GraphServiceClient(authProvider);

Microsoft Graph interactive authentication


InteractiveAuthenticationProvider provider = new InteractiveAuthenticationProvider(
clientApp,
scopes
);

GraphServiceClient client = new GraphServiceClient(provider);

Example of a Graph SDK query


User me = await client.Me
.Request()
.GetAsync();

More complex example


IUserMessagesCollectionPage messages = await client.Me.Messages
    .Request()
    .Select(m => new
    {
        m.Subject,
        m.Sender
    })
    .OrderBy("receivedDateTime")
    .GetAsync();

Demo: Retrieving profile information by using the Microsoft Graph SDK
In this demo you'll learn how to perform the following actions:
●● Use the following two libraries:

●● Microsoft.Graph: used to query Microsoft Graph


●● Microsoft.Graph.Auth: used to plug in to MSAL.NET for authentication
●● Retrieve user display name

Prerequisites
This demo is performed in Visual Studio Code. We'll be re-using the az204-authdemo app we created in the Interactive authentication by using MSAL.NET demo.

Open project in Visual Studio Code


1. Open the az204-authdemo project from the previous lesson.
2. Open the terminal in Visual Studio Code

Build the console app

Add packages and using statements


1. Add the Microsoft.Graph and Microsoft.Graph.Auth packages to the project. The Microsoft.Graph.Auth package is in preview at the time this was written, so we'll add a compatible preview version to the project.
dotnet add package Microsoft.Graph
dotnet add package Microsoft.Graph.Auth --version 1.0.0-preview.2

2. Add using statements to include the libraries.


using Microsoft.Graph;
using Microsoft.Graph.Auth;

Clean up code no longer needed in the project


1. Delete the following code from Program.cs.
AuthenticationResult result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
Console.WriteLine($"Token:\t{result.AccessToken}");

Add code to authenticate and retrieve profile information


1. Microsoft Graph requires an authentication provider. We'll use InteractiveAuthenticationProvider and pass it the public client application app and the scopes collection.
var provider = new InteractiveAuthenticationProvider(app, scopes);

2. Add code for the Microsoft Graph client. The code below creates the Graph services client and passes
it the authentication provider.
var client = new GraphServiceClient(provider);

3. Add code to request the information from Microsoft Graph and write the DisplayName to the
console.
User me = await client.Me.Request().GetAsync();
Console.WriteLine($"Display Name:\t{me.DisplayName}");

Run the application


1. In the VS Code terminal run dotnet build to check for errors, then dotnet run to run the app.
2. The app will open the default browser prompting you to select the account you want to authenticate
with. If there are multiple accounts listed select the one associated with the tenant used in the app.
3. You should see the following results in the console.
Display Name: <display name associated with the account>

Authorizing data operations in Azure Storage


Authorizing access to data in Azure Storage overview
Each time you access data in your storage account, your client makes a request over HTTP/HTTPS to
Azure Storage. Every request to a secure resource must be authorized, so that the service ensures that the
client has the permissions required to access the data.
The following list describes the options that Azure Storage offers for authorizing access to resources:
1. Azure Active Directory (Azure AD) integration for blobs and queues. Azure AD provides role-
based access control (RBAC) for fine-grained control over a client's access to resources in a storage
account.
2. Shared Key authorization for blobs, files, queues, and tables. A client using Shared Key passes a
header with every request that is signed using the storage account access key.
3. Shared access signatures for blobs, files, queues, and tables. Shared access signatures (SAS) provide
limited delegated access to resources in a storage account. Adding constraints on the time interval for
which the signature is valid or on permissions it grants provides flexibility in managing access.
4. Anonymous public read access for containers and blobs. Authorization is not required.

How a shared access signature works


A shared access signature is a signed URI that points to one or more storage resources and includes a
token that contains a special set of query parameters. The token indicates how the resources may be
accessed by the client. One of the query parameters, the signature, is constructed from the SAS parame-
ters and signed with the key that was used to create the SAS. This signature is used by Azure Storage to
authorize access to the storage resource.

SAS signature
You can sign a SAS in one of two ways:
●● With a user delegation key that was created using Azure Active Directory (Azure AD) credentials. A
user delegation SAS is signed with the user delegation key.
To get the user delegation key and create the SAS, an Azure AD security principal must be assigned a
role-based access control (RBAC) role that includes the Microsoft.Storage/storageAccounts/
blobServices/generateUserDelegationKey action.
●● With the storage account key. Both a service SAS and an account SAS are signed with the storage
account key. To create a SAS that is signed with the account key, an application must have access to
the account key.

SAS token
The SAS token is a string that you generate on the client side, for example by using one of the Azure
Storage client libraries. The SAS token is not tracked by Azure Storage in any way. You can create an
unlimited number of SAS tokens on the client side. After you create a SAS, you can distribute it to client
applications that require access to resources in your storage account.

When a client application provides a SAS URI to Azure Storage as part of a request, the service checks the
SAS parameters and signature to verify that it is valid for authorizing the request. If the service verifies
that the signature is valid, then the request is authorized. Otherwise, the request is declined with error
code 403 (Forbidden).
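To make the moving parts concrete, here is a hedged sketch of creating a service SAS for a single blob with the Azure.Storage.Blobs SDK; the account name, account key, container, and blob names are placeholders, and the labs may use different package versions.
using System;
using Azure.Storage;
using Azure.Storage.Sas;

// Placeholder credentials - in production keep the account key out of source code (for example, in Key Vault).
var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");

var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "uploads",
    BlobName = "report.csv",
    Resource = "b",                                  // "b" = the SAS applies to a blob
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)    // time interval for which the signature is valid
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);  // delegate read-only access

// The signature query parameters are produced by signing the SAS fields with the account key.
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
Console.WriteLine($"https://<account-name>.blob.core.windows.net/uploads/report.csv?{sasToken}");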

When to use a shared access signature


Use a SAS when you want to provide secure access to resources in your storage account to any client
who does not otherwise have permissions to those resources.
A common scenario where a SAS is useful is a service where users read and write their own data to your
storage account. In a scenario where a storage account stores user data, there are two typical design
patterns:
1. Clients upload and download data via a front-end proxy service, which performs authentication. This
front-end proxy service has the advantage of allowing validation of business rules, but for large
amounts of data or high-volume transactions, creating a service that can scale to match demand may
be expensive or difficult.

2. A lightweight service authenticates the client as needed and then generates a SAS. Once the client
application receives the SAS, they can access storage account resources directly with the permissions
defined by the SAS and for the interval allowed by the SAS. The SAS mitigates the need for routing all
data through the front-end proxy service.

Many real-world services may use a hybrid of these two approaches. For example, some data might be
processed and validated via the front-end proxy, while other data is saved and/or read directly using SAS.
Additionally, a SAS is required to authorize access to the source object in a copy operation in certain
scenarios:
●● When you copy a blob to another blob that resides in a different storage account, you must use a SAS
to authorize access to the source blob. You can optionally use a SAS to authorize access to the
destination blob as well.
●● When you copy a file to another file that resides in a different storage account, you must use a SAS to
authorize access to the source file. You can optionally use a SAS to authorize access to the destination
file as well.
●● When you copy a blob to a file, or a file to a blob, you must use a SAS to authorize access to the
source object, even if the source and destination objects reside within the same storage account.

Lab and review questions


Lab scenario

As a new employee at your company, you signed in to your Microsoft 365 applications for the first time
and discovered that your profile information isn't accurate. You also noticed that the name and profile
picture when you sign in aren't correct. Rather than change these values manually, you have decided that
this is a good opportunity to learn the Microsoft identity platform and how you can use different libraries
such as the Microsoft Authentication Library (MSAL) and the Microsoft Graph SDK to change these values
in a programmatic manner.

Objectives
After you complete this lab, you will be able to:
●● Create a new application registration in Azure Active Directory (Azure AD).
●● Use the MSAL.NET library to implement the interactive authentication flow.
●● Obtain a token from the Microsoft identity platform by using the MSAL.NET library.
●● Query Microsoft Graph by using the Microsoft Graph SDK and the device code flow.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 6 review questions


Review Question 1
In the context of Azure Active Directory (Azure AD) "application" is frequently used as a conceptual term,
referring to not only the application software, but also its Azure AD registration and role in authentication/
authorization “conversations” at runtime.
In that context, in what role(s) can an app function?
†† Client role
†† Resource server role
†† Both client and resource server role
†† All of the above

Review Question 2
What resources/objects are created when you register an Azure AD application in the portal?
Select all that apply.
†† Application object
†† App service plan
†† Service principal object
†† Hosting plan

Review Question 3
A shared access signature (SAS) is a signed URI that points to one or more storage resources and includes a
token that contains a special set of query parameters.
What are the two ways you can sign an SAS?
†† With a self-signed certificate
†† With the storage account key
†† With a user delegation key
†† With an HTTPS URI

Review Question 4
The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential
clients.
Which of the two definitions below describe a public client application?
†† The app isn't trusted to safely keep application secrets, so it only accesses Web APIs on behalf of the
user. It can't hold configuration-time secrets, so it doesn't have client secrets.
†† The client ID is exposed through the web browser, but the secret is passed only in the back channel
and never directly exposed.

Review Question 5
With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application builders:
"PublicClientApplicationBuilder" and "ConfidentialClientApplicationBuilder".
True or false, all of the ".With" modifiers can be used for both the Public and Confidential builders.
†† True
†† False

Answers
Review Question 1
In the context of Azure Active Directory (Azure AD) "application" is frequently used as a conceptual term,
referring to not only the application software, but also its Azure AD registration and role in authentica-
tion/authorization “conversations” at runtime.
In that context, in what role(s) can an app function?
†† Client role
†† Resource server role
†† Both client and resource server role
■■ All of the above
Explanation
By definition, an application can function in these roles: client role (consuming a resource), resource server role (exposing APIs to clients), or both at the same time.
Review Question 2
What resources/objects are created when you register an Azure AD application in the portal?
Select all that apply.
■■ Application object
†† App service plan
■■ Service principal object
†† Hosting plan
Explanation
When you register an Azure AD application in the Azure portal, two objects are created in your Azure AD
tenant: an application object and a service principal object.
Review Question 3
A shared access signature (SAS) is a signed URI that points to one or more storage resources and in-
cludes a token that contains a special set of query parameters.
What are the two ways you can sign an SAS?
†† With a self-signed certificate
■■ With the storage account key
■■ With a user delegation key
†† With an HTTPS URI
Explanation
You can sign a SAS in one of two ways: with a user delegation key created using Azure AD credentials, or with the storage account key.

Review Question 4
The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential
clients.
Which of the two definitions below describe a public client application?
■■ The app isn't trusted to safely keep application secrets, so it only accesses Web APIs on behalf of the
user. It can't hold configuration-time secrets, so it doesn't have client secrets.
†† The client ID is exposed through the web browser, but the secret is passed only in the back channel
and never directly exposed.
Explanation
Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential clients.
Review Question 5
With MSAL.NET 3.x, the recommended way to instantiate an application is by using the application
builders: "PublicClientApplicationBuilder" and "ConfidentialClientApplicationBuilder".
True or false, all of the ".With" modifiers can be used for both the Public and Confidential builders.
†† True
■■ False
Explanation
Most of the .With modifiers can be used for both Public and Confidential app builders. There are a few that
are specific to the Confidential client: ".WithCertificate()", and ".WithClientSecret()".
Module 7 Implement secure cloud solutions

Manage keys, secrets, and certificates by using the KeyVault API

Azure Key Vault overview
Azure Key Vault helps solve the following problems:
●● Secrets Management - Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
●● Key Management - Azure Key Vault can also be used as a Key Management solution. Azure Key Vault
makes it easy to create and control the encryption keys used to encrypt your data.
●● Certificate Management - Azure Key Vault is also a service that lets you easily provision, manage,
and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for
use with Azure and your internal connected resources.
●● Store secrets backed by Hardware Security Modules - The secrets and keys can be protected either by software or by FIPS 140-2 Level 2 validated HSMs.

Key benefits of using Azure Key Vault


●● Centralized application secrets: Centralizing storage of application secrets in Azure Key Vault allows
you to control their distribution. For example, instead of storing the connection string in the app's
code you can store it securely in Key Vault. Your applications can securely access the information they
need by using URIs. These URIs allow the applications to retrieve specific versions of a secret.
●● Securely store secrets and keys: Secrets and keys are safeguarded by Azure, using industry-standard
algorithms, key lengths, and hardware security modules (HSMs). The HSMs used are Federal Informa-
tion Processing Standards (FIPS) 140-2 Level 2 validated. Authentication is done via Azure Active
Directory. Authorization may be done via role-based access control (RBAC) or Key Vault access policy.
RBAC is used when dealing with the management of the vaults and key vault access policy is used
when attempting to access data stored in a vault.

●● Monitor access and use: You can monitor activity by enabling logging for your vaults. You can
configure Azure Key Vault to:
●● Archive to a storage account.
●● Stream to an event hub.
●● Send the logs to Azure Monitor logs.
●● Simplified administration of application secrets: Security information must be secured, it must
follow a life cycle, and it must be highly available. Azure Key Vault simplifies the process of meeting
these requirements by:
●● Removing the need for in-house knowledge of Hardware Security Modules
●● Scaling up on short notice to meet your organization’s usage spikes.
●● Replicating the contents of your Key Vault within a region and to a secondary region. Data replica-
tion ensures high availability and takes away the need of any action from the administrator to
trigger the failover.
●● Providing standard Azure administration options via the portal, Azure CLI and PowerShell.
●● Automating certain tasks on certificates that you purchase from Public CAs, such as enrollment
and renewal.

Azure Key Vault basic concepts


Azure Key Vault is a tool for securely storing and accessing secrets. A secret is anything that you want to
tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.
Here are some important terms:
●● Vault owner: A vault owner can create a key vault and gain full access and control over it. The vault
owner can also set up auditing to log who accesses secrets and keys. Administrators can control the
key lifecycle.
●● Vault consumer: A vault consumer can perform actions on the assets inside the key vault when the
vault owner grants the consumer access. The available actions depend on the permissions granted.
●● Service principal: An Azure service principal is a security identity that user-created apps, services, and
automation tools use to access specific Azure resources.
●● Azure Active Directory (Azure AD): Azure AD is the Active Directory service for a tenant. Each
directory has one or more domains. A directory can have many subscriptions associated with it, but
only one tenant.
●● Azure tenant ID: A tenant ID is a unique way to identify an Azure AD instance within an Azure
subscription.
●● Managed identities: Azure Key Vault provides a way to securely store credentials and other keys and
secrets, but your code needs to authenticate to Key Vault to retrieve them. Using a managed identity
makes solving this problem simpler by giving Azure services an automatically managed identity in
Azure AD. You can use this identity to authenticate to Key Vault or any service that supports Azure AD
authentication, without having any credentials in your code.

Authentication
To do any operations with Key Vault, you first need to authenticate to it. There are three ways to authenti-
cate to Key Vault:
●● Managed identities for Azure resources: When you deploy an app on a virtual machine in Azure,
you can assign an identity to your virtual machine that has access to Key Vault. You can also assign
identities to other Azure resources. The benefit of this approach is that the app or service isn't
managing the rotation of the first secret. Azure automatically rotates the identity. We recommend this
approach as a best practice.
●● Service principal and certificate: You can use a service principal and an associated certificate that
has access to Key Vault. We don't recommend this approach because the application owner or
developer must rotate the certificate.
●● Service principal and secret: Although you can use a service principal and a secret to authenticate to
Key Vault, we don't recommend it. It's hard to automatically rotate the bootstrap secret that's used to
authenticate to Key Vault.
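As a hedged illustration of the recommended managed-identity approach (the demos below use the REST API and Azure CLI instead), the Azure SDK's Azure.Identity and Azure.Security.KeyVault.Secrets packages can authenticate without any credentials in code when running on an Azure resource that has an identity; the vault URL and secret name are placeholders.
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential picks up the managed identity when the code runs on an Azure resource
// that has one assigned (and falls back to other credential sources during local development).
var client = new SecretClient(new Uri("https://myvault.vault.azure.net/"), new DefaultAzureCredential());

KeyVaultSecret secret = client.GetSecret("ExamplePassword"); // retrieves the latest version of the secret
Console.WriteLine(secret.Value);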

Authentication, requests and responses


Azure Key Vault supports JSON formatted requests and responses. Requests to the Azure Key Vault are
directed to a valid Azure Key Vault URL using HTTPS with some URL parameters and JSON encoded
request and response bodies.

Request URL
Key management operations use HTTP DELETE, GET, PATCH, PUT and HTTP POST and cryptographic
operations against existing key objects use HTTP POST. Clients that cannot support specific HTTP verbs
may also use HTTP POST using the X-HTTP-REQUEST header to specify the intended verb; requests that
do not normally require a body should include an empty body when using HTTP POST, for example when
using POST instead of DELETE.
To work with objects in the Azure Key Vault, the following are example URLs:
●● To CREATE a key called TESTKEY in a Key Vault use - PUT /keys/TESTKEY?api-version=<api_version> HTTP/1.1
●● To IMPORT a key called IMPORTEDKEY into a Key Vault use - POST /keys/IMPORTEDKEY/import?api-version=<api_version> HTTP/1.1
●● To GET a secret called MYSECRET in a Key Vault use - GET /secrets/MYSECRET?api-version=<api_version> HTTP/1.1
●● To SIGN a digest using a key called TESTKEY in a Key Vault use - POST /keys/TESTKEY/sign?api-version=<api_version> HTTP/1.1
The authority for a request to a Key Vault is always as follows: https://{keyvault-name}.vault.azure.net/
Keys are always stored under the /keys path, Secrets are always stored under the /secrets path.

API Version
The Azure Key Vault Service supports protocol versioning to provide compatibility with down-level clients,
although not all capabilities will be available to those clients. Clients must use the api-version query
string parameter to specify the version of the protocol that they support as there is no default.

Azure Key Vault protocol versions follow a date numbering scheme using a {YYYY}.{MM}.{DD} format.

Authentication
All requests to Azure Key Vault MUST be authenticated. Azure Key Vault supports Azure Active Directory
access tokens that may be obtained using OAuth2.
Access tokens must be sent to the service using the HTTP Authorization header:
PUT /keys/MYKEY?api-version=<api_version> HTTP/1.1
Authorization: Bearer <access_token>

When an access token is not supplied, or when a token is not accepted by the service, an HTTP 401 error
will be returned to the client and will include the WWW-Authenticate header, for example:
401 Not Authorized
WWW-Authenticate: Bearer authorization="…", resource="…"

The parameters on the WWW-Authenticate header are:


●● authorization: The address of the OAuth2 authorization service that may be used to obtain an access
token for the request.
●● resource: The name of the resource to use in the authorization request.
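As a hedged sketch of the request pattern described above (the vault name, secret name, and api-version value are placeholders), a raw REST call to read a secret could look like the following, assuming a bearer access token for Key Vault has already been acquired, for example via MSAL or a managed identity:
using System;
using System.Net.Http;
using System.Net.Http.Headers;

string accessToken = "<access_token>"; // token acquired from Azure AD for the Key Vault resource

var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

// Secrets live under the /secrets path; api-version is required because there is no default.
string url = "https://myvault.vault.azure.net/secrets/MYSECRET?api-version=7.0";
string json = await httpClient.GetStringAsync(url); // the JSON response contains the secret value and attributes

Console.WriteLine(json);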

Demo: Set and retrieve a secret from Azure Key Vault by using Azure CLI
In this demo you'll learn how to perform the following actions:
●● Create a Key Vault
●● Add and retrieve a secret

Prerequisites
This demo is performed in either the Cloud Shell or a local Azure CLI installation.
●● Cloud Shell: be sure to select PowerShell as the shell.
●● If you need to install Azure CLI locally visit:

●● https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

Login to Azure
1. Choose to either:
●● Launch the Cloud Shell: https://shell.azure.com
●● Or, open a terminal and login to your Azure account using the az login command.

Create a Key Vault


1. Set variables. Let's set some variables for the CLI commands to use to reduce the amount of retyping.
Replace the <myLocation> variable string below with a region that makes sense for you. The Key
Vault name needs to be a globally unique name, and the script below generates a random string.
$myResourceGroup="az204vaultrg"
$myKeyVault="az204vault" + $(get-random -minimum 10000 -maximum 100000)
$myLocation="<myLocation>"

2. Create a resource group.


az group create --name $myResourceGroup --location $myLocation

3. Create a Key Vault.


az keyvault create --name $myKeyVault --resource-group $myResourceGroup --location $myLocation

✔️ Note: This can take a few minutes to run.

Add and retrieve a secret


To add a secret to the vault, you just need to take a couple of additional steps.
1. Create a secret. Let's add a password that could be used by an app. The password will be called
ExamplePassword and will store the value of hVFkk965BuUv in it.
az keyvault secret set --vault-name $myKeyVault --name "ExamplePassword" --value "hVFkk965BuUv"

2. Retrieve the secret.


az keyvault secret show --name "ExamplePassword" --vault-name $myKeyVault

This command will return some JSON. The last line will contain the password in plain text.
"value": "hVFkk965BuUv"

You have created a Key Vault, stored a secret, and retrieved it.

Clean up resources
When you no longer need the resources in this demo use the following command to delete the resource
group and associated Key Vault.
az group delete --name $myResourceGroup --no-wait --yes

Implement Managed Identities for Azure resources

Managed identities for Azure resources overview
A common challenge when building cloud applications is how to manage the credentials in your code for
authenticating to cloud services. Keeping the credentials secure is an important task. Ideally, the creden-
tials never appear on developer workstations and aren't checked into source control. Azure Key Vault pro-
vides a way to securely store credentials, secrets, and other keys, but your code has to authenticate to
Key Vault to retrieve them.
The managed identities for Azure resources feature in Azure Active Directory (Azure AD) solves this
problem. The feature provides Azure services with an automatically managed identity in Azure AD. You
can use the identity to authenticate to any service that supports Azure AD authentication, including Key
Vault, without any credentials in your code.

Terminology
The following terms are used throughout this section of the course:
●● Client ID - a unique identifier generated by Azure AD that is tied to an application and service
principal during its initial provisioning.
●● Principal ID - the object ID of the service principal object for your managed identity that is used to
grant role-based access to an Azure resource.
●● Azure Instance Metadata Service (IMDS) - a REST endpoint accessible to all IaaS VMs created via
the Azure Resource Manager. The endpoint is available at a well-known non-routable IP address
(169.254.169.254) that can be accessed only from within the VM.

Types of managed identities


There are two types of managed identities:
●● A system-assigned managed identity is enabled directly on an Azure service instance. When the
identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by
the subscription of the instance. After the identity is created, the credentials are provisioned onto the
instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that
it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the
identity in Azure AD.
●● A user-assigned managed identity is created as a standalone Azure resource. Through a create
process, Azure creates an identity in the Azure AD tenant that's trusted by the subscription in use.
After the identity is created, the identity can be assigned to one or more Azure service instances. The
lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service
instances to which it's assigned.
Internally, managed identities are service principals of a special type, which are locked to only be used
with Azure resources. When the managed identity is deleted, the corresponding service principal is
automatically removed.

Characteristics of managed identities


The table below highlights some of the key differences between the two types of managed identities.

| | System-assigned managed identity | User-assigned managed identity |
|---|----------------------------------|--------------------------------|
| Creation | Created as part of an Azure resource (for example, an Azure virtual machine or Azure App Service). | Created as a stand-alone Azure resource. |
| Lifecycle | Shared lifecycle with the Azure resource that the managed identity is created with. When the parent resource is deleted, the managed identity is deleted as well. | Independent life-cycle. Must be explicitly deleted. |
| Sharing across Azure resources | Cannot be shared. It can only be associated with a single Azure resource. | Can be shared. The same user-assigned managed identity can be associated with more than one Azure resource. |
| Common use cases | Workloads that are contained within a single Azure resource. Workloads for which you need independent identities. | Workloads that run on multiple resources and which can share a single identity. Workloads that need pre-authorization to a secure resource as part of a provisioning flow. Workloads where resources are recycled frequently, but permissions should stay consistent. |

What Azure services support managed identities?


Managed identities for Azure resources can be used to authenticate to services that support Azure AD
authentication. For a list of Azure services that support the managed identities for Azure resources
feature, visit Services that support managed identities for Azure resources1.

Managed identities and Azure VMs


Below are the flows detailing how the two types of managed identities work with an Azure VM.

How a system-assigned managed identity works with an Azure VM
1. Azure Resource Manager receives a request to enable the system-assigned managed identity on a
VM.
2. Azure Resource Manager creates a service principal in Azure AD for the identity of the VM. The service
principal is created in the Azure AD tenant that's trusted by the subscription.

1 https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-msi

3. Azure Resource Manager configures the identity on the VM by updating the Azure Instance Metadata
Service identity endpoint with the service principal client ID and certificate.
4. After the VM has an identity, use the service principal information to grant the VM access to Azure
resources. To call Azure Resource Manager, use role-based access control (RBAC) in Azure AD to
assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the
specific secret or key in Key Vault.
5. Your code that's running on the VM can request a token from the Azure Instance Metadata Service endpoint, accessible only from within the VM: http://169.254.169.254/metadata/identity/oauth2/token (a sketch of this request follows this list).
●● The resource parameter specifies the service to which the token is sent. To authenticate to Azure
Resource Manager, use resource=https://management.azure.com/.
●● API version parameter specifies the IMDS version, use api-version=2018-02-01 or greater.
6. A call is made to Azure AD to request an access token (as specified in step 5) by using the client ID
and certificate configured in step 3. Azure AD returns a JSON Web Token (JWT) access token.
7. Your code sends the access token on a call to a service that supports Azure AD authentication.
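Steps 5 and 6 amount to a plain HTTP call from code running on the VM. The following is a hedged sketch only (error handling omitted); the returned JSON includes fields such as access_token and expires_on.
using System;
using System.Net.Http;

var client = new HttpClient();
client.DefaultRequestHeaders.Add("Metadata", "true"); // IMDS rejects requests without this header

// Request a token for Azure Resource Manager from the Instance Metadata Service identity endpoint.
// For a user-assigned identity, a client_id parameter identifying that identity would be added as well.
string url = "http://169.254.169.254/metadata/identity/oauth2/token" +
             "?api-version=2018-02-01" +
             "&resource=https://management.azure.com/";

string json = await client.GetStringAsync(url); // JSON containing access_token, expires_on, resource, token_type
Console.WriteLine(json);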

How a user-assigned managed identity works with an Azure VM
1. Azure Resource Manager receives a request to create a user-assigned managed identity.
2. Azure Resource Manager creates a service principal in Azure AD for the user-assigned managed
identity. The service principal is created in the Azure AD tenant that's trusted by the subscription.
3. Azure Resource Manager receives a request to configure the user-assigned managed identity on a VM
and updates the Azure Instance Metadata Service identity endpoint with the user-assigned managed
identity service principal client ID and certificate.
4. After the user-assigned managed identity is created, use the service principal information to grant the
identity access to Azure resources. To call Azure Resource Manager, use RBAC in Azure AD to assign
the appropriate role to the service principal of the user-assigned identity. To call Key Vault, grant your
code access to the specific secret or key in Key Vault.
✔️ Note: You can also do this step before step 3.
5. Your code that's running on the VM can request a token from the Azure Instance Metadata Service identity endpoint, accessible only from within the VM: http://169.254.169.254/metadata/identity/oauth2/token
●● The resource parameter specifies the service to which the token is sent. To authenticate to Azure
Resource Manager, use resource=https://management.azure.com/.
●● The client ID parameter specifies the identity for which the token is requested. This value is
required for disambiguation when more than one user-assigned identity is on a single VM.
●● The API version parameter specifies the Azure Instance Metadata Service version. Use api-ver-
sion=2018-02-01 or higher.
6. A call is made to Azure AD to request an access token (as specified in step 5) by using the client ID
and certificate configured in step 3. Azure AD returns a JSON Web Token (JWT) access token.
7. Your code sends the access token on a call to a service that supports Azure AD authentication.

Configuring managed identities on an Azure VM by using Azure CLI
You can configure an Azure VM with a managed identity during, or after, the creation of the VM. Below
are some CLI examples showing the commands for both system- and user-assigned identities.

System-assigned managed identity


To create, or enable, an Azure VM with the system-assigned managed identity your account needs the
Virtual Machine Contributor role assignment. No additional Azure AD directory role assignments are
required.

Enable system-assigned managed identity during creation of an Azure VM
The following example creates a VM named myVM with a system-assigned managed identity, as request-
ed by the --assign-identity parameter. The --admin-username and --admin-password
parameters specify the administrative user name and password account for virtual machine sign-in.
Update these values as appropriate for your environment:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image win2016datacenter \
--assign-identity \
--admin-username azureuser \
--admin-password myPassword12

Enable system-assigned managed identity on an existing Azure VM
Use the az vm identity assign command to enable the system-assigned identity on an existing VM:
az vm identity assign -g myResourceGroup -n myVm

User-assigned managed identity


To assign a user-assigned identity to a VM during its creation, your account needs the Virtual Machine
Contributor and Managed Identity Operator role assignments. No additional Azure AD directory role
assignments are required.
Enabling user-assigned managed identities is a two-step process:
1. Create the user-assigned identity
2. Assign the identity to a VM

Create a user-assigned identity


Create a user-assigned managed identity using az identity create. The -g parameter specifies the
resource group where the user-assigned managed identity is created, and the -n parameter specifies its
name.
az identity create -g myResourceGroup -n myUserAssignedIdentity

Assign a user-assigned managed identity during the creation of an Azure VM
The following example creates a VM associated with the new user-assigned identity, as specified by the
--assign-identity parameter.
az vm create \
--resource-group <RESOURCE GROUP> \
--name <VM NAME> \
--image UbuntuLTS \
--admin-username <USER NAME> \
--admin-password <PASSWORD> \
--assign-identity <USER ASSIGNED IDENTITY NAME>

Assign a user-assigned managed identity to an existing Azure VM
Assign the user-assigned identity to your VM using az vm identity assign.
az vm identity assign \
-g <RESOURCE GROUP> \
-n <VM NAME> \
--identities <USER ASSIGNED IDENTITY>

Secure app configuration data by using Azure App Configuration

Azure App Configuration overview
Azure App Configuration provides a service to centrally manage application settings and feature flags.
Modern programs, especially programs running in a cloud, generally have many components that are
distributed in nature. Spreading configuration settings across these components can lead to
hard-to-troubleshoot errors during an application deployment. Use App Configuration to store all the
settings for your application and secure their accesses in one place.
App Configuration offers the following benefits:
●● A fully managed service that can be set up in minutes
●● Flexible key representations and mappings
●● Tagging with labels
●● Point-in-time replay of settings
●● Dedicated UI for feature flag management
●● Comparison of two sets of configurations on custom-defined dimensions
●● Enhanced security through Azure-managed identities
●● Complete data encryptions, at rest or in transit
●● Native integration with popular frameworks
App Configuration complements Azure Key Vault, which is used to store application secrets. App Config-
uration makes it easier to implement the following scenarios:
●● Centralize management and distribution of hierarchical configuration data for different environments
and geographies
●● Dynamically change application settings without the need to redeploy or restart an application
●● Control feature availability in real-time

Use App Configuration


The easiest way to add an App Configuration store to your application is through a client library that
Microsoft provides. Based on the programming language and framework, the following best methods are
available to you.

| Programming language and framework | How to connect |
|------------------------------------|----------------|
| .NET Core and ASP.NET Core | App Configuration provider for .NET Core |
| .NET Framework and ASP.NET | App Configuration builder for .NET |
| Java Spring | App Configuration client for Spring Cloud |
| Others | App Configuration REST API |

Keys and values


Azure App Configuration stores configuration data as key-value pairs.

Keys
Keys serve as the name for key-value pairs and are used to store and retrieve corresponding values. It's a
common practice to organize keys into a hierarchical namespace by using a character delimiter, such as /
or :. Use a convention that's best suited for your application. App Configuration treats keys as a whole. It
doesn't parse keys to figure out how their names are structured or enforce any rule on them.
Keys stored in App Configuration are case-sensitive, unicode-based strings. The keys app1 and App1 are
distinct in an App Configuration store. Keep this in mind when you use configuration settings within an
application because some frameworks handle configuration keys case-insensitively.
You can use any unicode character in key names entered into App Configuration except for *, ,, and \.
These characters are reserved. If you need to include a reserved character, you must escape it by using \
{Reserved Character}. There's a combined size limit of 10,000 characters on a key-value pair. This
limit includes all characters in the key, its value, and all associated optional attributes. Within this limit,
you can have many hierarchical levels for keys.

Design key namespaces


There are two general approaches to naming keys used for configuration data: flat or hierarchical. These
methods are similar from an application usage standpoint, but hierarchical naming offers a number of
advantages:
●● Easier to read. Instead of one long sequence of characters, delimiters in a hierarchical key name
function as spaces in a sentence.
●● Easier to manage. A key name hierarchy represents logical groups of configuration data.
●● Easier to use. It's simpler to write a query that pattern-matches keys in a hierarchical structure and
retrieves only a portion of configuration data.
Below are some examples of how you can structure your key names into a hierarchy:
●● Based on component services
AppName:Service1:ApiEndpoint
AppName:Service2:ApiEndpoint

●● Based on deployment regions


AppName:Region1:DbEndpoint
AppName:Region2:DbEndpoint

Label keys
Key values in App Configuration can optionally have a label attribute. Labels are used to differentiate key
values with the same key. A key app1 with labels A and B forms two separate keys in an App Configura-
tion store. By default, the label for a key value is empty, or null.
Label provides a convenient way to create variants of a key. A common use of labels is to specify multiple
environments for the same key:
Key = AppName:DbEndpoint & Label = Test
Key = AppName:DbEndpoint & Label = Staging
Key = AppName:DbEndpoint & Label = Production

Version key values


App Configuration doesn't version key values automatically as they're modified. Use labels as a way to
create multiple versions of a key value. For example, you can input an application version number or a Git
commit ID in labels to identify key values associated with a particular software build.

Query key values


Each key value is uniquely identified by its key plus a label that can be null. You query an App Configu-
ration store for key values by specifying a pattern. The App Configuration store returns all key values that
match the pattern and their corresponding values and attributes.

Values
Values assigned to keys are also unicode strings. You can use all unicode characters for values. There's an
optional user-defined content type associated with each value. Use this attribute to store information, for
example an encoding scheme, about a value that helps your application to process it properly.
Configuration data stored in an App Configuration store, which includes all keys and values, is encrypted
at rest and in transit. App Configuration isn't a replacement solution for Azure Key Vault. Don't store
application secrets in it.
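As a hedged sketch of reading key-values from code (the course doesn't prescribe this exact client), the Azure.Data.AppConfiguration package can retrieve a key with an optional label; the connection string, key, and label below are placeholders.
using System;
using Azure.Data.AppConfiguration;

// Placeholder connection string - available from the App Configuration store's access keys in the portal.
string connectionString = "<app-configuration-connection-string>";
var client = new ConfigurationClient(connectionString);

// Retrieve the "Test" variant of the key; omit the label argument to read the unlabeled (null-label) value.
ConfigurationSetting setting = client.GetConfigurationSetting("AppName:DbEndpoint", "Test");
Console.WriteLine($"{setting.Key} ({setting.Label}) = {setting.Value}");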

Managing app features by using Azure Configuration Manager
Feature management is a modern software-development practice that decouples feature release from
code deployment and enables quick changes to feature availability on demand. It uses a technique called
feature flags (also known as feature toggles, feature switches, and so on) to dynamically administer a
feature's lifecycle.

Basic concepts
Here are several new terms related to feature management:
●● Feature flag: A feature flag is a variable with a binary state of on or off. The feature flag also has an
associated code block. The state of the feature flag triggers whether the code block runs or not.
●● Feature manager: A feature manager is an application package that handles the lifecycle of all the
feature flags in an application. The feature manager typically provides additional functionality, such as
caching feature flags and updating their states.
●● Filter: A filter is a rule for evaluating the state of a feature flag. A user group, a device or browser type,
a geographic location, and a time window are all examples of what a filter can represent.
An effective implementation of feature management consists of at least two components working in
concert:
●● An application that makes use of feature flags.
●● A separate repository that stores the feature flags and their current states.
How these components interact is illustrated in the following examples.

Feature flag usage in code


The basic pattern for implementing feature flags in an application is simple. You can think of a feature
flag as a Boolean state variable used with an if conditional statement in your code:
if (featureFlag) {
// Run the following code
}

In this case, if featureFlag is set to True, the enclosed code block is executed; otherwise, it's skipped.
You can set the value of featureFlag statically, as in the following code example:
bool featureFlag = true;

You can also evaluate the flag's state based on certain rules:
bool featureFlag = isBetaUser();

A slightly more complicated feature flag pattern includes an else statement as well:
if (featureFlag) {
// This following code will run if the featureFlag value is true
} else {
// This following code will run if the featureFlag value is false
}

Feature flag declaration


Each feature flag has two parts: a name and a list of one or more filters that are used to evaluate if a
feature's state is on (that is, when its value is True). A filter defines a use case for when a feature should
be turned on.
When a feature flag has multiple filters, the filter list is traversed in order until one of the filters deter-
mines the feature should be enabled. At that point, the feature flag is on, and any remaining filter results
are skipped. If no filter indicates the feature should be enabled, the feature flag is off.
The feature manager supports appsettings.json as a configuration source for feature flags. The following
example shows how to set up feature flags in a JSON file:
"FeatureManagement": {
"FeatureA": true, // Feature flag set to on
"FeatureB": false, // Feature flag set to off
"FeatureC": {
"EnabledFor": [
{
"Name": "Percentage",
"Parameters": {
"Value": 50
}
}
]
}

Feature flag repository


To use feature flags effectively, you need to externalize all the feature flags used in an application. This
approach allows you to change feature flag states without modifying and redeploying the application
itself.
Azure App Configuration is designed to be a centralized repository for feature flags. You can use it to
define different kinds of feature flags and manipulate their states quickly and confidently. You can then
use the App Configuration libraries for various programming language frameworks to easily access these
feature flags from your application.
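For example, in a .NET application the Microsoft.FeatureManagement library can evaluate a flag at runtime; this is a hedged sketch, and the flag name FeatureA matches the appsettings.json example above.
using System;
using System.Threading.Tasks;
using Microsoft.FeatureManagement;

public class GreetingService
{
    private readonly IFeatureManager _featureManager;

    // IFeatureManager is registered by calling services.AddFeatureManagement() at startup.
    public GreetingService(IFeatureManager featureManager) => _featureManager = featureManager;

    public async Task GreetAsync()
    {
        // The feature manager evaluates the flag's filters to decide whether the code block runs.
        if (await _featureManager.IsEnabledAsync("FeatureA"))
        {
            Console.WriteLine("FeatureA is on");
        }
    }
}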

Lab and review questions


Lab: Access resource secrets more securely
across services

Lab scenario
Your company has a data-sharing business-to-business (B2B) agreement with another local business in
which you're expected to parse a file that's dropped off nightly. To keep things simple, the second
company has decided to drop the file as a Microsoft Azure Storage blob every night. You're now tasked
with devising a way to access the file and generate a secure URL that any internal system can use to
access the blob without exposing the file to the internet. You have decided to use Azure Key Vault to
store the credentials for the storage account and Azure Functions to write the code necessary to access
the file without storing credentials in plaintext or exposing the file to the internet.

Objectives
After you complete this lab, you will be able to:
●● Create an Azure key vault and store secrets in the key vault.
●● Create a system-assigned managed identity for an Azure App Service instance.
●● Create a Key Vault access policy for an Azure Active Directory identity or application.
●● Use the Storage .NET SDK to download a blob (a minimal sketch combining these steps follows this list).
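To illustrate how these objectives fit together, here is a minimal sketch, assuming the Azure.Identity, Azure.Security.KeyVault.Secrets, and Azure.Storage.Blobs packages; the vault URI, secret name, container, and blob names are placeholders, not values from the lab:
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Azure.Storage.Blobs;

public static class NightlyDropReader
{
    public static async Task DownloadDropAsync()
    {
        // DefaultAzureCredential picks up the App Service managed identity at
        // run time, so no credentials appear in code or configuration.
        var credential = new DefaultAzureCredential();

        // Read the storage connection string that was stored as a Key Vault secret.
        var secretClient = new SecretClient(
            new Uri("https://<your-key-vault-name>.vault.azure.net/"), credential);
        KeyVaultSecret secret = await secretClient.GetSecretAsync("storage-connection-string");

        // Use the Storage .NET SDK to download the nightly blob.
        var blobClient = new BlobClient(secret.Value, "drops", "nightly-drop.csv");
        await blobClient.DownloadToAsync("nightly-drop.csv");
    }
}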

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 7 review questions


Review Question 1
Azure Key Vault is a tool for securely storing and accessing secrets. A secret is anything that you want to
tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.
True or false, Azure Key Vault requires that you always pass credentials through your code to access stored
secrets.
†† True
†† False

Review Question 2
Below are some characteristics of a managed identity:
●● Created as a stand-alone Azure resource
●● Independent life-cycle, must be explicitly deleted
●● Can be shared

Are the listed characteristics associated with a user- or system-assigned identity?


†† User-assigned identity
†† System-assigned identity

Review Question 3
The Azure Key Vault Service supports protocol versioning to provide compatibility with down-level clients,
although not all capabilities will be available to those clients.
True or False: If you don't specify an API version using the "api-version" parameter, Key Vault will default to
the current version.
†† True
†† False

Review Question 4
Azure App Configuration provides a service to centrally manage application settings and feature flags.
Azure App Configuration stores configuration data as key-value pairs.
Which of the below are valid uses of the label attribute in key-value pairs?
(Select all that apply.)
†† Create multiple versions of a key value
†† Specify multiple environments for the same key
†† Assign a value to a key
†† Store application secrets

Review Question 5
Which of the options below matches the following definition?
A security identity that user-created apps, services, and automation tools use to access specific Azure
resources.
†† Vault owner
†† Vault consumer
†† Azure tenant ID
†† Service principal

Answers
Review Question 1
Azure Key Vault is a tool for securely storing and accessing secrets. A secret is anything that you want to
tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.
True or false, Azure Key Vault requires that you always pass credentials through your code to access
stored secrets.
†† True
■■ False
Explanation
You can use a managed identity to authenticate to Key Vault, or any service that supports Azure AD
authentication, without having any credentials in your code.
Review Question 2
Below are some characteristics of a managed identity:

Are the listed characteristics associated with a user- or system-assigned identity?


■■ User-assigned identity
†† System-assigned identity
Explanation
These are characteristics of a user-assigned identity. System-assigned identities are created as part of a
resource, are deleted when the resource is deleted, and can't be shared.
Review Question 3
The Azure Key Vault Service supports protocol versioning to provide compatibility with down-level clients,
although not all capabilities will be available to those clients.
True or False: If you don't specify an API version using the "api-version" parameter, Key Vault will default
to the current version.
†† True
■■ False
Explanation
Clients must use the "api-version" query string parameter to specify the version of the protocol that they
support as there is no default.

Review Question 4
Azure App Configuration provides a service to centrally manage application settings and feature flags.
Azure App Configuration stores configuration data as key-value pairs.
Which of the below are valid uses of the label attribute in key-value pairs?
(Select all that apply.)
■■ Create multiple versions of a key value
■■ Specify multiple environments for the same key
†† Assign a value to a key
†† Store application secrets
Explanation
Labels add a layer of metadata to the key-value pairs. You can use them to differentiate multiple versions of
the same key, including environments. Labels are not values. You should never store application secrets in
App Configuration.
Review Question 5
Which of the options below matches the following definition?
A security identity that user-created apps, services, and automation tools use to access specific Azure
resources.
†† Vault owner
†† Vault consumer
†† Azure tenant ID
■■ Service principal
Explanation
An Azure service principal is a security identity that user-created apps, services, and automation tools use to
access specific Azure resources.
Module 8 Implement API Management

API Management overview


API Management overview
API Management helps organizations publish APIs to external, partner, and internal developers to unlock
the potential of their data and services. Businesses everywhere are looking to extend their operations as a
digital platform, creating new channels, finding new customers and driving deeper engagement with
existing ones. API Management provides the core competencies to ensure a successful API program
through developer engagement, business insights, analytics, security, and protection. You can use Azure
API Management to take any backend and launch a full-fledged API program based on it.
The system is made up of the following components:
●● The API gateway is the endpoint that:
●● Accepts API calls and routes them to your backends.
●● Verifies API keys, JWT tokens, certificates, and other credentials.
●● Enforces usage quotas and rate limits.
●● Transforms your API on the fly without code modifications.
●● Caches backend responses where set up.
●● Logs call metadata for analytics purposes.
●● The Azure portal is the administrative interface where you set up your API program. Use it to:
●● Define or import API schema.
●● Package APIs into products.
●● Set up policies like quotas or transformations on the APIs.
●● Get insights from analytics.
●● Manage users.

●● The Developer portal serves as the main web presence for developers, where they can:
●● Read API documentation.
●● Try out an API via the interactive console.
●● Create an account and subscribe to get API keys.
●● Access analytics on their own usage.

Products
Products are how APIs are surfaced to developers. Products in API Management have one or more APIs,
and are configured with a title, description, and terms of use. Products can be Open or Protected.
Protected products must be subscribed to before they can be used, while open products can be used
without a subscription. Subscription approval is configured at the product level and can either require
administrator approval, or be auto-approved.

Groups
Groups are used to manage the visibility of products to developers. API Management has the following
immutable system groups:
●● Administrators - Azure subscription administrators are members of this group. Administrators
manage API Management service instances, creating the APIs, operations, and products that are used
by developers.
●● Developers - Authenticated developer portal users fall into this group. Developers are the customers
that build applications using your APIs. Developers are granted access to the developer portal and
build applications that call the operations of an API.
●● Guests - Unauthenticated developer portal users, such as prospective customers visiting the developer portal of an API Management instance, fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them.
In addition to these system groups, administrators can create custom groups or leverage external groups
in associated Azure Active Directory tenants.

Developers
Developers represent the user accounts in an API Management service instance. Developers can be
created or invited to join by administrators, or they can sign up from the Developer portal. Each developer is a member of one or more groups, and can subscribe to the products that grant visibility to those groups.

Policies
Policies are a powerful capability of API Management that allow the publisher to change the behavior
of the API through configuration. Policies are a collection of statements that are executed sequentially on
the request or response of an API. Popular statements include format conversion from XML to JSON and
call rate limiting to restrict the number of incoming calls from a developer, and many other policies are
available.

Demo: Create an APIM instance by using Azure CLI

In this demo you'll learn how to perform the following actions:
●● Create an APIM instance
✔️ Note: In this demo we're creating an APIM instance in the Consumption plan. Currently this plan
doesn't support integration with AAD.

Prerequisites
This demo is performed in the Cloud Shell.
●● Cloud Shell: be sure to select PowerShell as the shell.

Login to Azure
1. Choose to either:
●● Launch the Cloud Shell: https://shell.azure.com
●● Or, login to the Azure portal (https://portal.azure.com) and open the Cloud Shell there.

Create an APIM instance


1. Create a resource group. The commands below will create a resource group named az204-apim-rg.
Enter a region that suits your needs.
$myLocation = Read-Host -Prompt "Enter the region (i.e. westus): "
az group create -n az204-apim-rg -l $myLocation

2. Create an APIM instance. The name of your APIM instance needs to be unique. The first line in the
example below generates a unique name. You also need to supply an email address.
$myEmail = Read-Host -Prompt "Enter an email address: "
$myAPIMname="az204-apim-" + $(get-random -minimum 10000 -maximum 100000)
az apim create -n $myAPIMname -l $myLocation `
--publisher-email $myEmail `
-g az204-apim-rg `
--publisher-name AZ204-APIM-Demo `
--sku-name Consumption

✔️ Note: Azure will send a notification to the email address supplied above when the resource has
been provisioned.

Demo: Import an API by using the Azure Portal


In this demo you'll learn how to perform the following actions:
●● Import an API


●● Test the new API


✔️ Note: This demo relies on the successful deployment of the APIM instance created in the “Create an
APIM instance by using Azure CLI” demo.

Prerequisites
This demo is performed in the Azure Portal.

Login to the Azure Portal


1. Login to the portal: https://portal.azure.com

Go to your API Management instance


1. In the Azure portal, search for and select API Management services.

2. On the API Management screen, select the API Management instance you created in the Create an
APIM instance by using Azure CLI demo.

Import and publish a backend API


This section shows how to import and publish an OpenAPI specification backend API.
1. Select APIs from under API MANAGEMENT.
2. Select OpenAPI specification from the list and click Full in the pop-up.

You can set the API values during creation or later by going to the Settings tab. The red star next to a
field indicates that the field is required. Use the values from the table below to fill out the form.

Setting | Value | Description
OpenAPI Specification | https://conferenceapi.azurewebsites.net?format=json | References the service implementing the API. API Management forwards requests to this address.
Display name | Demo Conference API | If you press tab after entering the service URL, APIM will fill out this field based on what is in the JSON. This name is displayed in the Developer portal.
Name | demo-conference-api | Provides a unique name for the API. If you press tab after entering the service URL, APIM will fill out this field based on what is in the JSON.
Description | Provide an optional description of the API. | If you press tab after entering the service URL, APIM will fill out this field based on what is in the JSON.
URL scheme | HTTPS | Determines which protocols can be used to access the API.
API URL suffix | conference | The suffix is appended to the base URL for the API Management service. API Management distinguishes APIs by their suffix and therefore the suffix must be unique for every API for a given publisher.
Tags | Leave blank | Tags for organizing APIs. Tags can be used for searching, grouping, or filtering.
Products | Leave blank | Products are associations of one or more APIs. You can include a number of APIs into a Product and offer them to developers through the developer portal.
Version this API? | Leave unchecked | For more information about versioning, see Publish multiple versions of your API (https://docs.microsoft.com/en-us/azure/api-management/api-management-get-started-publish-versions).
3. Select Create.

Test the API


Operations can be called directly from the Azure portal, which provides a convenient way to view and test
the operations of an API.
1. Select the API you created in the previous step (from the APIs tab).
2. Press the Test tab.
3. Click on GetSpeakers. The page displays fields for query parameters, in this case none, and headers.
One of the headers is Ocp-Apim-Subscription-Key, for the subscription key of the product that
is associated with this API. The key is filled in automatically.
4. Press Send.
Backend responds with 200 OK and some data.
✔️ Note: We will continue to leverage this API in demos throughout the remainder of this module.

Demo: Create and publish a product


In this demo you'll learn how to perform the following actions:
●● Create and publish a product
✔️ Note: This demo relies on the successful completion of the Create an APIM instance by using Azure
CLI and Import an API by using the Azure Portal demos.

Prerequisites
This demo is performed in the Azure Portal.

Login to the Azure Portal


1. Login to the portal: https://portal.azure.com

Go to your API Management instance


1. In the Azure portal, search for and select API Management services.

2. On the API Management screen, select the API Management instance you created in the Create an
APIM instance by using Azure CLI demo.

Create and publish a product


1. Click on Products in the menu on the left to display the Products page.
2. Click + Add. When you add a product, you need to supply the following information:

Setting | Value | Description
Display name | AZ204 API Demo | The name as you want it to be shown in the Developer portal.
Id | az204-api-demo | This is auto-populated when you tab out of the Display name field.
Description | AZ204 Class Demo | The Description field allows you to provide detailed information about the product such as its purpose, the APIs it provides access to, and other useful information.
State | Select Published | Before the APIs in a product can be called, the product must be published. By default new products are unpublished, and are visible only to the Administrators group.
Requires subscription | Leave unchecked | Check Require subscription if a user is required to subscribe to use the product.
Requires approval | Leave unchecked | Check Require approval if you want an administrator to review and accept or reject subscription attempts to this product. If the box is unchecked, subscription attempts are auto-approved.
Subscription count limit | Leave blank | To limit the count of multiple simultaneous subscriptions, enter the subscription limit.
Legal terms | Leave blank | You can include the terms of use for the product which subscribers must accept in order to use the product.
APIs | Select Select API and add the Demo Conference API | Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal.
3. Select Create to create the new product.
✔️ Note: The developer portal will not be available since the APIM instance was created using the
Consumption plan.

Defining policies for APIs


Policies in Azure API Management overview
In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher
to change the behavior of the API through configuration. Policies are a collection of Statements that are
executed sequentially on the request or response of an API.
Policies are applied inside the gateway which sits between the API consumer and the managed API. The
gateway receives all requests and usually forwards them unaltered to the underlying API. However, a
policy can apply changes to both the inbound request and outbound response. Policy expressions can be
used as attribute values or text values in any of the API Management policies, unless the policy specifies
otherwise.

Understanding policy configuration


The policy definition is a simple XML document that describes a sequence of inbound and outbound
statements. The XML can be edited directly in the definition window.
The configuration is divided into inbound, backend, outbound, and on-error sections. The series of specified policy statements is executed in order for a request and a response.
<policies>
<inbound>
<!-- statements to be applied to the request go here -->
</inbound>
<backend>
<!-- statements to be applied before the request is forwarded to
the backend service go here -->
</backend>
<outbound>
<!-- statements to be applied to the response go here -->
</outbound>
<on-error>
<!-- statements to be applied if there is an error condition go here -->
</on-error>
</policies>

If there is an error during the processing of a request, any remaining steps in the inbound, backend, or
outbound sections are skipped and execution jumps to the statements in the on-error section. By
placing policy statements in the on-error section you can review the error by using the context.LastError property, inspect and customize the error response using the set-body policy, and configure what happens if an error occurs.

Examples

Apply policies specified at different scopes


If you have a policy at the global level and a policy configured for an API, then whenever that particular
API is used both policies will be applied. API Management allows for deterministic ordering of combined
policy statements via the base element.

<policies>
<inbound>
<cross-domain />
<base />
<find-and-replace from="xyz" to="abc" />
</inbound>
</policies>

In the example policy definition above, the cross-domain statement executes first, followed by any policies defined at a higher scope (brought in by the base element), and then by the find-and-replace policy.

Filter response content


The policy defined in example below demonstrates how to filter data elements from the response
payload based on the product associated with the request.
The snippet assumes that response content is formatted as JSON and contains root-level properties
named "minutely", "hourly", "daily", and "flags".
<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <choose>
            <when condition="@(context.Response.StatusCode == 200 && context.Product.Name.Equals("Starter"))">
                <!-- NOTE that we are not using preserveContent=true when deserializing response body stream into a JSON object since we don't intend to access it again. See details on https://docs.microsoft.com/en-us/azure/api-management/api-management-transformation-policies#SetBody -->
                <set-body>
                    @{
                        var response = context.Response.Body.As<JObject>();
                        foreach (var key in new [] {"minutely", "hourly", "daily", "flags"}) {
                            response.Property(key).Remove();
                        }
                        return response.ToString();
                    }
                </set-body>
            </when>
        </choose>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Additional resources
●● Policy reference - for a full list of policy statements and their settings visit:
●● https://docs.microsoft.com/azure/api-management/api-management-policies
●● Policy samples - for more code examples visit:
●● https://docs.microsoft.com/azure/api-management/policy-samples

Demo: Transform your API by using policies


In this demo you'll learn how to perform the following actions:
●● Add a policy to an API to strip response headers
By stripping response headers you can hide the information about the technology stack that is running
on the backend.
✔️ Note: This demo relies on the successful completion of the Create an APIM instance by using Azure
CLI and Import an API by using the Azure Portal demos.

Prerequisites
This demo is performed in the Azure Portal.

Login to the Azure Portal


1. Login to the portal: https://portal.azure.com

Go to your API Management instance


1. In the Azure portal, search for and select API Management services.

2. On the API Management screen, select the API Management instance you created in the Create an
APIM instance by using Azure CLI demo.
3. Select APIs in the navigation pane.
4. Select the Demo Conference API.

Test the original response


1. Select the Test tab, on the top of the screen.
2. Select the GetSpeakers operation.

3. Press the Send button, at the bottom of the screen. The response includes the response headers
below:
x-aspnet-version: 4.0.30319
x-powered-by: ASP.NET

Define the policy


1. On the top of the screen, select Design tab.
2. Select All operations.
3. In the Outbound processing section, click the </> icon.
4. Modify your <outbound> code to look like this:
<outbound>
<set-header name="X-Powered-By" exists-action="delete" />
<set-header name="X-AspNet-Version" exists-action="delete" />
<base />
</outbound>

5. Select Save.
6. Inspect the Outbound process section and note there are two set-header policies listed.

Test the modified response


1. Select the Test tab, on the top of the screen.
2. Select the GetSpeakers operation.
3. Press the Send button, at the bottom of the screen.
The response no longer includes platform information related to your API.

API Management advanced policies


This part of the lesson provides a reference for the following API Management policies:
●● Control flow - Conditionally applies policy statements based on the results of the evaluation of
Boolean expressions.
●● Forward request - Forwards the request to the backend service.
●● Limit concurrency - Prevents enclosed policies from executing by more than the specified number of
requests at a time.
●● Log to Event Hub - Sends messages in the specified format to an Event Hub defined by a Logger
entity.
●● Mock response - Aborts pipeline execution and returns a mocked response directly to the caller.
●● Retry - Retries execution of the enclosed policy statements, if and until the condition is met. Execution
will repeat at the specified time intervals and up to the specified retry count.

Control flow
The choose policy applies enclosed policy statements based on the outcome of evaluation of Boolean
expressions, similar to an if-then-else or a switch construct in a programming language.
<choose>
    <when condition="Boolean expression | Boolean constant">
        <!-- one or more policy statements to be applied if the above condition is true -->
    </when>
    <when condition="Boolean expression | Boolean constant">
        <!-- one or more policy statements to be applied if the above condition is true -->
    </when>
    <otherwise>
        <!-- one or more policy statements to be applied if none of the above conditions are true -->
    </otherwise>
</choose>

The control flow policy must contain at least one <when/> element. The <otherwise/> element is
optional. Conditions in <when/> elements are evaluated in order of their appearance within the policy.
Policy statements enclosed within the first <when/> element whose condition attribute evaluates to true will be applied. Policies enclosed within the <otherwise/> element, if present, will be applied if all of the
<when/> element condition attributes are false.

Forward request
The forward-request policy forwards the incoming request to the backend service specified in the
request context. The backend service URL is specified in the API settings and can be changed using the
set backend service policy.
Removing this policy results in the request not being forwarded to the backend service; in that case, the policies in the outbound section are evaluated immediately upon the successful completion of the policies in the inbound section.
<forward-request timeout="time in seconds" follow-redirects="true | false"/>

Limit concurrency
The limit-concurrency policy prevents enclosed policies from executing by more than the specified
number of requests at any time. Upon exceeding that number, new requests will fail immediately with a
429 Too Many Requests status code.
<limit-concurrency key="expression" max-count="number">
    <!-- nested policy statements -->
</limit-concurrency>

Log to Event Hub


The log-to-eventhub policy sends messages in the specified format to an Event Hub defined by a
Logger entity. As its name implies, the policy is used for saving selected request or response context
information for online or offline analysis.

<log-to-eventhub logger-id="id of the logger entity" partition-id="index of the partition where messages are sent" partition-key="value used for partition assignment">
    Expression returning a string to be logged
</log-to-eventhub>

Mock response
The mock-response policy, as the name implies, is used to mock APIs and operations. It aborts normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of the highest fidelity. It prefers response content examples, whenever available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples nor schemas are found, responses with no content are returned.
<mock-response status-code="code" content-type="media type"/>

Retry
The retry policy executes its child policies once and then retries their execution until the retry condition becomes false or the retry count is exhausted.
<retry
    condition="boolean expression or literal"
    count="number of retry attempts"
    interval="retry interval in seconds"
    max-interval="maximum retry interval in seconds"
    delta="retry interval delta in seconds"
    first-fast-retry="boolean expression or literal">
    <!-- One or more child policies. No restrictions -->
</retry>

Return response
The return-response policy aborts pipeline execution and returns either a default or custom response
to the caller. Default response is 200 OK with no body. Custom response can be specified via a context
variable or policy statements. When both are provided, the response contained within the context
variable is modified by the policy statements before being returned to the caller.
<return-response response-variable-name="existing context variable">
<set-header/>
<set-body/>
<set-status/>
</return-response>

Securing your APIs


Subscriptions in Azure API Management
When you publish APIs through API Management, it's easy and common to secure access to those APIs
by using subscription keys. Developers who need to consume the published APIs must include a valid
subscription key in HTTP requests when they make calls to those APIs. Otherwise, the calls are rejected
immediately by the API Management gateway. They aren't forwarded to the back-end services.
To get a subscription key for accessing APIs, a subscription is required. A subscription is essentially a
named container for a pair of subscription keys. Developers who need to consume the published APIs
can get subscriptions. And they don't need approval from API publishers. API publishers can also create
subscriptions directly for API consumers.
✔️ Note: API Management also supports other mechanisms for securing access to APIs, including:
OAuth2.0, Client certificates, and IP whitelisting.

Subscriptions and Keys


A subscription key is a unique auto-generated key that can be passed through in the headers of the client
request or as a query string parameter. The key is directly related to a subscription, which can be scoped
to different areas. Subscriptions give you granular control over permissions and policies.
The three main subscription scopes are:

Scope | Details
All APIs | Applies to every API accessible from the gateway.
Single API | Applies to a single imported API and all of its endpoints.
Product | A product is a collection of one or more APIs that you configure in API Management. You can assign APIs to more than one product. Products can have different access rules, usage quotas, and terms of use.
Applications that call a protected API must include the key in every request.
You can regenerate these subscription keys at any time, for example, if you suspect that a key has been
shared with unauthorized users.

Every subscription has two keys, a primary and a secondary. Having two keys makes it easier when you do
need to regenerate a key. For example, if you want to change the primary key and avoid downtime, use
the secondary key in your apps.

For products where subscriptions are enabled, clients must supply a key when making calls to APIs in that
product. Developers can obtain a key by submitting a subscription request. If you approve the request,
you must send them the subscription key securely, for example, in an encrypted message. This step is a
core part of the API Management workflow.

Call an API with the subscription key


Applications must include a valid key in all HTTP requests when they make calls to API endpoints that are
protected by a subscription. Keys can be passed in the request header, or as a query string in the URL.
The default header name is Ocp-Apim-Subscription-Key, and the default query string is subscription-key.
To test out your API calls, you can use the developer portal, or command-line tools, such as curl. Here's
an example of a GET request using the developer portal, which shows the subscription key header:

Here's how you can pass a key in the request header using curl:
curl --header "Ocp-Apim-Subscription-Key: <key string>" https://<apim gateway>.azure-api.net/api/path

Here's an example curl command that passes a key in the URL as a query string:
curl https://<apim gateway>.azure-api.net/api/path?subscription-key=<key string>

If the key is not passed in the header, or as a query string in the URL, you'll get a 401 Access Denied
response from the API gateway.
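In application code, the same header can be set on an HTTP client. Here is a minimal sketch in C#; the gateway URL, API path, and key value are placeholders (the path assumes the Demo Conference API imported earlier with the conference suffix):
using System.Net.Http;
using System.Threading.Tasks;

public static class ApimClient
{
    public static async Task<string> GetSpeakersAsync()
    {
        using var client = new HttpClient();

        // Pass the subscription key in the default header; alternatively, append
        // ?subscription-key=<key string> to the URL as a query string parameter.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<key string>");

        HttpResponseMessage response = await client.GetAsync(
            "https://<apim gateway>.azure-api.net/conference/speakers");

        // A missing or invalid key results in a 401 Access Denied response from the gateway.
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}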

Using client certificates to secure access to an API

Certificates can be used to provide TLS mutual authentication between the client and the API gateway.
You can configure the API Management gateway to allow only requests with certificates containing a
specific thumbprint. The authorization at the gateway level is handled through inbound policies.

TLS client authentication


With TLS client authentication, the API Management gateway can inspect the certificate contained within
the client request and check for properties like:

Property | Reason
Certificate Authority (CA) | Only allow certificates signed by a particular CA.
Thumbprint | Allow certificates containing a specified thumbprint.
Subject | Only allow certificates with a specified subject.
Expiration Date | Only allow certificates that have not expired.
These properties are not mutually exclusive and they can be mixed together to form your own policy
requirements. For instance, you can specify that the certificate passed in the request is signed by a certain
certificate authority and hasn't expired.
Client certificates are signed to ensure that they are not tampered with. When a partner sends you a
certificate, verify that it comes from them and not an imposter. There are two common ways to verify a
certificate:
●● Check who issued the certificate. If the issuer was a certificate authority that you trust, you can use the
certificate. You can configure the trusted certificate authorities in the Azure portal to automate this
process.
●● If the certificate is issued by the partner, verify that it came from them. For example, if they deliver the
certificate in person, you can be sure of its authenticity. These are known as self-signed certificates.

Accepting client certificates in the Consumption tier


The Consumption tier in API Management is designed to conform with serverless design principles. If you
build your APIs from serverless technologies, such as Azure Functions, this tier is a good fit. In the
Consumption tier, you must explicitly enable the use of client certificates, which you can do on the
Custom domains page. This step is not necessary in other tiers.

Certificate Authorization Policies


Create these policies in the inbound processing policy file within the API Management gateway:

Check the thumbprint of a client certificate


Every client certificate includes a thumbprint, which is a hash, calculated from other certificate properties.
The thumbprint ensures that the values in the certificate have not been altered since the certificate was
issued by the certificate authority. You can check the thumbprint in your policy. The following example
checks the thumbprint of the certificate passed in the request:
<choose>
    <when condition="@(context.Request.Certificate == null || context.Request.Certificate.Thumbprint != "desired-thumbprint")">
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>

Check the thumbprint against certificates uploaded to API Management

In the previous example, only one thumbprint would work so only one certificate would be validated.
Usually, each customer or partner company would pass a different certificate with a different thumbprint.
To support this scenario, obtain the certificates from your partners and use the Client certificates page in
the Azure portal to upload them to the API Management resource. Then add this code to your policy:
<choose>
    <when condition="@(context.Request.Certificate == null || !context.Request.Certificate.Verify() || !context.Deployment.Certificates.Any(c => c.Value.Thumbprint == context.Request.Certificate.Thumbprint))">
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>

Check the issuer and subject of a client certificate


This example checks the issuer and subject of the certificate passed in the request:
<choose>
    <when condition="@(context.Request.Certificate == null || context.Request.Certificate.Issuer != "trusted-issuer" || context.Request.Certificate.SubjectName.Name != "expected-subject-name")">
        <return-response>
            <set-status code="403" reason="Invalid client certificate" />
        </return-response>
    </when>
</choose>

Lab and review questions


Lab: Creating a multi-tier solution by using services in Azure

Lab scenario
The developers in your company have successfully adopted and used the https://httpbin.org/ website
to test various clients that issue HTTP requests. Your company would like to use one of the publicly
available containers on Docker Hub to host the httpbin web application in an enterprise-managed
environment with a few caveats. First, developers who are issuing Representational State Transfer (REST)
queries should receive standard headers that are used throughout the company's applications. Second,
developers should be able to get responses by using JavaScript Object Notation (JSON) even if the API
that's used behind the scenes doesn't support the data format. You're tasked with using Microsoft Azure
API Management to create a proxy tier in front of the httpbin web application to implement your company's policies.

Objectives
After you complete this lab, you will be able to:
●● Create a web application from a Docker Hub container image.
●● Create an API Management account.
●● Configure an API as a proxy for another Azure service with header and payload manipulation.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 8 review questions


Review Question 1
API Management helps organizations publish APIs to external, partner, and internal developers to unlock
the potential of their data and services.
Which of the below options is used to set up policies like quotas?
†† API gateway
†† Developer portal
†† Azure portal
†† Product definition

Review Question 2
In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher
to change the behavior of the API through configuration.
Which of the options below accurately reflects how policies are applied?
†† On inbound requests
†† On outbound responses
†† On the backend
†† All of the above

Review Question 3
A control flow policy applies policy statements based on the results of the evaluation of Boolean expressions.
True or False: A control flow policy must contain at least one "otherwise" element.
†† True
†† False

Review Question 4
The "return-response" policy aborts pipeline execution and returns either a default or custom response to
the caller.
What is the default response code?
†† 100
†† 400 OK
†† 200 OK
†† None of the above

Answers
Review Question 1
API Management helps organizations publish APIs to external, partner, and internal developers to unlock
the potential of their data and services.
Which of the below options is used to set up policies like quotas?
†† API gateway
†† Developer portal
■■ Azure portal
†† Product definition
Explanation
The Azure portal is the administrative interface where you set up your API program. Use it to define or import API schemas, package APIs into products, set up policies like quotas or transformations, get insights from analytics, and manage users.
Review Question 2
In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher
to change the behavior of the API through configuration.
Which of the options below accurately reflects how policies are applied?
†† On inbound requests
†† On outbound responses
†† On the backend
■■ All of the above
Explanation
A configuration can be divided into inbound, backend, outbound, and on-error sections. The series of specified policy statements is executed in order for a request and a response.
Review Question 3
A control flow policy applies policy statements based on the results of the evaluation of Boolean expres-
sions.
True or False: A control flow policy must contain at least one "otherwise" element.
†† True
■■ False
Explanation
The control flow policy must contain at least one "when" element. The "otherwise" element is optional.

Review Question 4
The "return-response" policy aborts pipeline execution and returns either a default or custom response to
the caller.
What is the default response code?
†† 100
†† 400 OK
■■ 200 OK
†† None of the above
Explanation
The default response is "200 OK" with no body.
Module 9 Develop App Service Logic Apps

Azure Logic Apps overview


Azure Logic Apps overview
Azure Logic Apps is a cloud service that helps you schedule, automate, and orchestrate tasks, business
processes, and workflows when you need to integrate apps, data, systems, and services across enterprises
or organizations. Logic Apps simplifies how you design and build scalable solutions for app integration,
data integration, system integration, enterprise application integration (EAI), and business-to-business
(B2B) communication, whether in the cloud, on premises, or both.
For example, here are just a few workloads that you can automate with logic apps:
●● Process and route orders across on-premises systems and cloud services.
●● Move uploaded files from an SFTP or FTP server to Azure Storage.
●● Send email notifications with Office 365 when events happen in various systems, apps, and services.
To build enterprise integration solutions with Azure Logic Apps, you can choose from a growing gallery
with hundreds of ready-to-use connectors, which include services such as Azure Service Bus, Azure
Functions, Azure Storage, SQL Server, Office 365, Dynamics, Salesforce, BizTalk, SAP, Oracle DB, file shares,
and more. Connectors provide triggers, actions, or both for creating logic apps that securely access and
process data in real time.

How does Logic Apps work?


Every logic app workflow starts with a trigger, which fires when a specific event happens, or when new
available data meets specific criteria. Many triggers provided by the connectors in Logic Apps include
basic scheduling capabilities so that you can set up how regularly your workloads run. For more complex
scheduling or advanced recurrences, you can use a Recurrence trigger as the first step in any workflow.
Each time that the trigger fires, the Logic Apps engine creates a logic app instance that runs the actions
in the workflow. These actions can also include data conversions and flow controls, such as conditional
statements, switch statements, loops, and branching. For example, this logic app starts with a Dynamics
365 trigger with the built-in criteria “When a record is updated”. If the trigger detects an event that
matches this criteria, the trigger fires and runs the workflow's actions. Here, these actions include XML
transformation, data updates, decision branching, and email notifications.

You can build your logic apps visually with the Logic Apps Designer, available in the Azure portal through
your browser and in Visual Studio and Visual Studio Code. For more custom logic apps, you can create or
edit logic app definitions in JavaScript Object Notation (JSON) by working in “code view” mode. You can
also use Azure PowerShell commands and Azure Resource Manager templates for select tasks. Logic apps
deploy and run in the cloud on Azure.

Connectors for Azure Logic Apps


Connectors provide quick access from Azure Logic Apps to events, data, and actions across other apps,
services, systems, protocols, and platforms. By using connectors in your logic apps, you expand the
capabilities for your cloud and on-premises apps to perform tasks with the data that you create and
already have.
Connectors are available either as built-in triggers and actions or as managed connectors:
●● Built-ins: These built-in triggers and actions are “native” to Azure Logic Apps and help you create
logic apps that run on custom schedules, communicate with other endpoints, receive and respond to
requests, and call Azure functions, Azure API Apps (Web Apps), your own APIs managed and pub-
lished with Azure API Management, and nested logic apps that can receive requests. You can also use
built-in actions that help you organize and control your logic app's workflow, and also work with data.
✔️ Note: Logic apps within an integration service environment (ISE) can directly access resources in an Azure virtual network. When you use an ISE, built-in triggers and actions that display the Core label run in the same ISE as your logic apps. Logic apps, built-in triggers, and built-in actions that run in your ISE use a pricing plan different from the consumption-based pricing plan.
●● Managed connectors: Deployed and managed by Microsoft, these connectors provide triggers and
actions for accessing cloud services, on-premises systems, or both, including Office 365, Azure Blob
Storage, SQL Server, Dynamics, Salesforce, SharePoint, and more. Some connectors specifically
support business-to-business (B2B) communication scenarios and require an integration account
that's linked to your logic app. Before using certain connectors, you might have to first create connections, which are managed by Azure Logic Apps.
You can also identify connectors by using these categories, although some connectors can cross multiple
categories. For example, SAP is an Enterprise connector and an on-premises connector:

Category | Description
Managed API connectors | Create logic apps that use services such as Azure Blob Storage, Office 365, Dynamics, Power BI, OneDrive, Salesforce, SharePoint Online, and many more.
On-premises connectors | After you install and set up the on-premises data gateway, these connectors help your logic apps access on-premises systems such as SQL Server, SharePoint Server, Oracle DB, file shares, and others.
Integration account connectors | Available when you create and pay for an integration account, these connectors transform and validate XML, encode and decode flat files, and process business-to-business (B2B) messages with AS2, EDIFACT, and X12 protocols.

Triggers and actions


Connectors can provide triggers, actions, or both. A trigger is the first step in any logic app, usually specifying the event that fires the trigger and starts running your logic app. Some triggers regularly check for the specified event or data and then fire when they detect the specified event or data. Other triggers wait but fire instantly when a specific event happens or when new data is available. Triggers also pass along any required data to your logic app.
After a trigger fires, Azure Logic Apps creates an instance of your logic app and starts running the actions in your logic app's workflow. Actions are the steps that follow the trigger and perform tasks in your logic app's workflow. For example, you can create a logic app that gets customer data from a SQL database and processes that data in later actions.
Here are the general kinds of triggers that Azure Logic Apps provides:
●● Recurrence trigger: This trigger runs on a specified schedule and isn't tightly associated with a particular service or system.
●● Polling trigger: This trigger regularly polls a specific service or system based on the specified schedule, checking for new data or whether a specific event happened. If new data is available or the specific event happened, the trigger creates and runs a new instance of your logic app, which can now use the data that's passed as input.
●● Push trigger: This trigger waits and listens for new data or for an event to happen. When new data is available or when the event happens, the trigger creates and runs a new instance of your logic app, which can now use the data that's passed as input.

Connector configuration
Each connector's triggers and actions provide their own properties for you to configure. Many connectors
also require that you first create a connection to the target service or system and provide authentication
credentials or other configuration details before you can use a trigger or action in your logic app. For
example, you must authorize a connection to a Twitter account for accessing data or to post on your
behalf.

Custom APIs and connectors


To call APIs that run custom code or aren't available as connectors, you can extend the Logic Apps
platform by creating custom API Apps. You can also create custom connectors for any REST or SOAP-
based APIs, which make those APIs available to any logic app in your Azure subscription.

Scheduling Azure Logic Apps


Logic Apps helps you create and run automated recurring tasks and processes on a schedule. By creating
a logic app workflow that starts with a built-in Recurrence trigger or Sliding Window trigger, which are
Schedule-type triggers, you can run tasks immediately, at a later time, or on a recurring interval. With the
Recurrence trigger, you can also set up complex schedules and advanced recurrences for running tasks.

Schedule triggers
You can start your logic app workflow by using the Recurrence trigger or Sliding Window trigger, which
isn't associated with any specific service or system. These triggers start and run your workflow based on
your specified recurrence where you select the interval and frequency. You can also set the start date and
time as well as the time zone. Each time that a trigger fires, Logic Apps creates and runs a new workflow
instance for your logic app.
Here are the differences between these triggers:
●● Recurrence: Runs your workflow at regular time intervals based on your specified schedule. If recurrences are missed, the Recurrence trigger doesn't process the missed recurrences but restarts recurrences with the next scheduled interval.
●● Sliding Window: Runs your workflow at regular time intervals that handle data in continuous chunks. If recurrences are missed, the Sliding Window trigger goes back and processes the missed recurrences.

Schedule actions
After any action in your logic app workflow, you can use the Delay and Delay Until actions to make your
workflow wait before the next action runs.
●● Delay: Wait to run the next action for the specified number of time units, such as seconds, minutes,
hours, days, weeks, or months.
●● Delay until: Wait to run the next action until the specified date and time.

Demo: Create a Logic App by using the Azure Portal

In this demo you'll learn how to perform the following actions:
●● Create a Logic App workflow with:

●● A trigger that monitors an RSS feed


●● An action that sends an email when the feed updates

Prerequisites
This demo is performed in the Azure Portal.
●● To get a free account visit: https://azure.microsoft.com/free/
●● An email account from an email provider that's supported by Logic Apps, such as Office 365 Outlook,
Outlook.com, or Gmail. For other providers, review the connectors list at https://docs.microsoft.com/connectors/. This demo uses an
Office 365 Outlook account. If you use a different email account, the general steps stay the same, but
your UI might slightly differ.

Login to Azure
1. Login to the Azure portal https://portal.azure.com.

Create a Logic App


1. From the Azure home page, in the search box, find and select Logic Apps.
2. On the Logic Apps page, select Add.
3. On the Logic App pane, provide details about your logic app as shown below. After you're done,
select Create. Replace <Azure-subscription-name> and <Azure-region> with values to fit your needs.

Property | Value
Name | az204-logicapp-demo
Subscription | <Azure-subscription-name>
Resource group | az204-logicapp-demo-rg
Location | <Azure-region>
Log Analytics | Off


✔️ Note: Deployment can take a few minutes to complete.

Add the RSS trigger


1. When deployment is complete navigate to your newly created Logic App.
2. In the Templates section select Blank Logic App.
3. In the Logic App Designer, under the search box, select All.
4. In the search box, enter rss to find the RSS connector. From the triggers list, select the When a feed
item is published trigger.
5. Provide the information below for your trigger:

Property | Value | Description
The RSS feed URL | http://feeds.reuters.com/reuters/topNews | The link for the RSS feed that you want to monitor
Interval | 1 | The number of intervals to wait between checks
Frequency | Minute | The unit of time for each interval between checks
Together, the interval and frequency define the schedule for your logic app's trigger. This logic app
checks the feed every minute.
6. To collapse the trigger details for now, click inside the trigger's title bar.

7. Save your logic app. On the designer toolbar, select Save.


Your logic app is now live but doesn't do anything other than check the RSS feed. Next we'll add an
action that responds when the trigger fires.

Add the send email action


1. Under the When a feed item is published trigger, select New step.
2. Under Choose an action and the search box, select All.
3. In the Search connectors and actions field, type Outlook.
✔️ Note: If you do not have an Outlook.com email account and prefer not to create one, you can
change the connectors search filter to send an email and select another email provider such as Gmail
and Office 365 Outlook.
4. Select Send an email.
5. If your selected email connector prompts you to authenticate your identity, complete that step now to
create a connection between your logic app and your email service.
6. In the Send an email action enter the information in the table below:

Field | Value
To | For testing purposes, you can use your email address.
Subject | Logic App Demo Update
Body | Select Feed published on from the Add dynamic content list.
7. Save your logic app.

Run your logic app


The logic app will automatically run on the specified schedule. If you want to run it manually, select Run
in the Designer toolbar.
✔️ Note: It may take a while for the RSS feed to update. Monitor the email inbox for the notification of a
new item.

Azure Logic Apps and Enterprise Integration Pack

For business-to-business (B2B) solutions and seamless communication between organizations, you can
build automated scalable enterprise integration workflows by using the Enterprise Integration Pack (EIP)
with Azure Logic Apps. Although organizations use different protocols and formats, they can exchange
messages electronically. The EIP transforms different formats into a format that your organizations'
systems can process and supports industry-standard protocols, including AS2, X12, and EDIFACT. You can
also secure messages with both encryption and digital signatures. The EIP supports these enterprise
integration connectors and these industry standards:
●● Electronic Data Interchange (EDI)
●● Enterprise Application Integration (EAI)
If you're familiar with Microsoft BizTalk Server or Azure BizTalk Services, the EIP follows similar concepts,
making the features easy to use. However, one major difference is that the EIP is architecturally based on
“integration accounts” to simplify the storage and management of artifacts used in B2B communications.
These accounts are cloud-based containers that store all your artifacts, such as partners, agreements,
schemas, maps, and certificates.
Here are the high-level steps to get started building B2B logic apps:

Creating custom connectors for Logic Apps


Creating custom connectors
While Azure Logic Apps, Microsoft Flow, and PowerApps offer more than 180 connectors to connect to Microsoft and non-Microsoft services, you may want to communicate with services that are not available as prebuilt connectors. Custom connectors address this scenario by allowing you to create (and even share) a connector with its own triggers and actions.
The following diagram shows the high-level tasks involved in creating and using custom connectors:

Build your API


A custom connector is a wrapper around a REST API (Logic Apps also supports SOAP APIs) that allows
Logic Apps, Microsoft Flow, or PowerApps to communicate with that REST or SOAP API. These APIs can
be:
●● Public (visible on the public internet) such as Spotify, Slack, Rackspace, or an API you manage
●● Private (visible only to your network)
For public APIs that you plan to create and manage, consider using one of these Microsoft Azure prod-
ucts:
●● Azure Functions
●● Azure Web Apps
●● Azure API Apps
For private APIs, Microsoft offers on-premises data connectivity through an on-premises data gateway.
These gateways are supported by Logic Apps, Microsoft Flow, and PowerApps.

Secure your API


Use one of these standard authentication methods for your APIs and connectors (Azure Active Directory
is recommended):
●● Generic OAuth 2.0
●● OAuth 2.0 for specific services, including Azure Active Directory (Azure AD), Dropbox, GitHub, and
Salesforce
●● Basic authentication
●● API Key
You can set up Azure AD authentication for your API in the Azure portal so that you do not have to implement
authentication, or you can require and enforce authentication in your API's code. For more information
about Azure AD for custom connectors, see Secure your API and connector with Azure AD.

Describe the API and define the custom connector


Once you have an API with authenticated access, the next thing to do is to describe your API so that
Logic Apps, Microsoft Flow, or PowerApps can communicate with your API. The following approaches are
supported:
●● An OpenAPI definition (formerly known as a Swagger file)
   ●● Create a custom connector from an OpenAPI definition
   ●● OpenAPI documentation
●● A Postman collection
   ●● Create a Postman collection
   ●● Create a custom connector from a Postman collection
   ●● Postman documentation
●● Start from scratch using the custom connector portal (Microsoft Flow and PowerApps only)
OpenAPI definitions and Postman collections use different formats, but both are language-agnostic,
machine-readable documents that describe your API. You can generate these documents from various
tools based on the language and platform used by your API. Behind the scenes, Logic Apps, Flow, and
PowerApps use OpenAPI to define connectors.

Use your connector in a Logic App, Flow, or PowerApps app


Custom connectors are used the same way Microsoft-managed connectors are used. You will need to
create a connection to your API and then you can use that connection to call any operations that you
have exposed in your custom connector.
Connectors created in Microsoft Flow are available in PowerApps. Likewise, connectors created in Power-
Apps are available in Microsoft Flow. This is not true for connectors created in Logic Apps, but you can
reuse the OpenAPI definition or Postman collection to recreate the connector in any of these services.

Share your connector


You can share your connector with users in your organization in the same way that you share resources in
Logic Apps, Microsoft Flow, or PowerApps. Sharing is optional, but you may have scenarios where you
want to share your connectors with other users.

Certify your connector


If you would like to share your connector with all users of Logic Apps, Microsoft Flow, and PowerApps,
you can submit your connector for Microsoft certification. Microsoft will review your connector, check for
technical and content compliance, and validate functionality.

Lab and review questions


Lab: Automate business processes with Logic Apps

Lab scenario
Your organization keeps a collection of JSON files that it uses to configure third-party products in a
Server Message Block (SMB) file share in Microsoft Azure. As part of a regular auditing practice, the
operations team would like to call a simple HTTP endpoint and retrieve the current list of configuration
files. You have decided to implement this functionality using a no-code solution based on Azure API
Management service and Logic Apps.

Objectives
After you complete this lab, you will be able to:
●● Create a Logic App workflow.
●● Manage products and APIs in a Logic App.
●● Use Azure API Management as a proxy for a Logic App.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 9 review questions


Review Question 1
Connectors provide quick access from Azure Logic Apps to events, data, and actions across other apps,
services, systems, protocols, and platforms.
Which of the following features do connectors provide?
†† Trigger
†† Pulse
†† Action
†† None of these

Review Question 2
The following sentence describes a general type of trigger.
This trigger waits and listens for new data or for an event to happen.
What type of trigger is described above?
†† Polling trigger
†† Recurrence trigger
†† Push trigger
†† None of the above

Review Question 3
The following sentence describes a general category of connectors.
Create logic apps that use services such as Azure Blob Storage, Office 365, Dynamics, Power BI, OneDrive,
Salesforce, SharePoint Online, and many more.
Which general category is described above?
†† Integration account connectors
†† On-premises connectors
†† Managed API connectors
†† Azure AD connector

Answers
Review Question 1
Connectors provide quick access from Azure Logic Apps to events, data, and actions across other apps,
services, systems, protocols, and platforms.
Which of the following features do connectors provide?
■■ Trigger
†† Pulse
■■ Action
†† None of these
Explanation
Connectors can provide triggers, actions, or both. A trigger is the first step in any logic app, usually specify-
ing the event that fires the trigger and starts running your logic app.
Review Question 2
The following sentence describes a general type of trigger.
This trigger waits and listens for new data or for an event to happen.
What type of trigger is described above?
†† Polling trigger
†† Recurrence trigger
■■ Push trigger
†† None of the above
Explanation
A push trigger waits and listens for new data or for an event to happen. When new data is available or
when the event happens, the trigger creates and runs a new instance of your logic app, which can now use
the data that's passed as input.
Review Question 3
The following sentence describes a general category of connectors.
Create logic apps that use services such as Azure Blob Storage, Office 365, Dynamics, Power BI, OneDrive,
Salesforce, SharePoint Online, and many more.
Which general category is described above?
†† Integration account connectors
†† On-premises connectors
■■ Managed API connectors
†† Azure AD connector
Explanation
Managed API connectors are geared toward logic apps that use services such as Azure Blob Storage, Office
365, Dynamics, Power BI, OneDrive, Salesforce, SharePoint Online, and many more.
Module 10 Develop event-based solutions

Implement solutions that use Azure Event Grid


Azure Event Grid overview
Azure Event Grid allows you to easily build applications with event-based architectures. First, select the
Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to
send the event to. Event Grid has built-in support for events coming from Azure services, like storage
blobs and resource groups. Event Grid also has support for your own events, using custom topics.
You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and
make sure your events are reliably delivered.
The image below shows how Event Grid connects sources and handlers; it isn't a comprehensive list of
supported integrations.

Concepts in Azure Event Grid


There are five concepts in Azure Event Grid that you need to understand to get started; they are
described in more detail below:
●● Events - What happened.
●● Event sources - Where the event took place.
●● Topics - The endpoint where publishers send events.
●● Event subscriptions - The endpoint or built-in mechanism to route events, sometimes to more than
one handler. Subscriptions are also used by handlers to intelligently filter incoming events.
●● Event handlers - The app or service reacting to the event.

Events
An event is the smallest amount of information that fully describes something that happened in the
system. Every event has common information like the source of the event, the time the event took place, and a
unique identifier. Every event also has specific information that is only relevant to the specific type of
event. For example, an event about a new file being created in Azure Storage has details about the file,
such as the lastTimeModified value. Or, an Event Hubs event has the URL of the Capture file.
An event of size up to 64 KB is covered by General Availability (GA) Service Level Agreement (SLA). The
support for an event of size up to 1 MB is currently in preview. Events over 64 KB are charged in 64-KB
increments.

Event sources
An event source is where the event happens. Each event source is related to one or more event types. For
example, Azure Storage is the event source for blob created events. IoT Hub is the event source for
device created events. Your application is the event source for custom events that you define. Event
sources are responsible for sending events to Event Grid.

Topics
The event grid topic provides an endpoint where the source sends events. The publisher creates the
event grid topic, and decides whether an event source needs one topic or more than one topic. A topic is
used for a collection of related events. To respond to certain types of events, subscribers decide which
topics to subscribe to.
System topics are built-in topics provided by Azure services. You don't see system topics in your Azure
subscription because the publisher owns the topics, but you can subscribe to them. To subscribe, you
provide information about the resource you want to receive events from. As long as you have access to
the resource, you can subscribe to its events.
Custom topics are application and third-party topics. When you create or are assigned access to a custom
topic, you see that custom topic in your subscription.

Event subscriptions
A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the
subscription, you provide an endpoint for handling the event. You can filter the events that are sent to
the endpoint. You can filter by event type, or subject pattern. Set an expiration for event subscriptions
that are needed only for a limited time, so that you don't have to worry about cleaning up those
subscriptions.

Event handlers
From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes
some further action to process the event. Event Grid supports several handler types. You can use a
supported Azure service or your own webhook as the handler. Depending on the type of handler, Event
Grid follows different mechanisms to guarantee the delivery of the event. For HTTP webhook event
handlers, the event is retried until the handler returns a status code of 200 – OK. For Azure Storage
Queue, the events are retried until the Queue service successfully processes the message push into the
queue.

Azure Event Grid event schema


Events consist of a set of five required string properties and a required data object. The properties are
common to all events from any publisher. The data object has properties that are specific to each pub-
lisher. For system topics, these properties are specific to the resource provider, such as Azure Storage or
Azure Event Hubs.
Event sources send events to Azure Event Grid in an array, which can have several event objects. When
posting events to an event grid topic, the array can have a total size of up to 1 MB. Each event in the
array is limited to 64 KB (General Availability) or 1 MB (preview). If an event or the array is greater than
the size limits, you receive the response 413 Payload Too Large.
Event Grid sends the events to subscribers in an array that has a single event.

Event schema
The following example shows the properties that are used by all event publishers:

[
  {
    "topic": string,
    "subject": string,
    "id": string,
    "eventType": string,
    "eventTime": string,
    "data": {
      object-unique-to-each-publisher
    },
    "dataVersion": string,
    "metadataVersion": string
  }
]

Event properties
All events have the same following top-level data:

Property          Type     Description
topic             string   Full resource path to the event source. This field isn't writeable. Event Grid provides this value.
subject           string   Publisher-defined path to the event subject.
eventType         string   One of the registered event types for this event source.
eventTime         string   The time the event is generated based on the provider's UTC time.
id                string   Unique identifier for the event.
data              object   Event data specific to the resource provider.
dataVersion       string   The schema version of the data object. The publisher defines the schema version.
metadataVersion   string   The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value.
For custom topics, the event publisher determines the data object. The top-level data should have the
same fields as standard resource-defined events.
When publishing events to custom topics, create subjects for your events that make it easy for subscrib-
ers to know whether they're interested in the event. Subscribers use the subject to filter and route events.
Consider providing the path for where the event happened, so subscribers can filter by segments of that
path. The path enables subscribers to narrowly or broadly filter events. For example, if you provide a
three segment path like /A/B/C in the subject, subscribers can filter by the first segment /A to get a
broad set of events. Those subscribers get events with subjects like /A/B/C or /A/D/E. Other subscrib-
ers can filter by /A/B to get a narrower set of events.
Sometimes your subject needs more detail about what happened. For example, the Storage Accounts
publisher provides the subject /blobServices/default/containers/<container-name>/blobs/<file>
when a file is added to a container. A subscriber could filter by the path
/blobServices/default/containers/testcontainer to get all events for that container but not other
containers in the storage account. A subscriber could also filter or route by the suffix .txt to only work
with text files.

Event Grid security and authentication


Azure Event Grid has three types of authentication:
●● WebHook event delivery
●● Event subscriptions
●● Custom topic publishing

WebHook Event delivery


Webhooks are one of the many ways to receive events from Azure Event Grid. When a new event is ready,
Event Grid service POSTs an HTTP request to the configured endpoint with the event in the request body.
Like many other services that support webhooks, Event Grid requires you to prove ownership of your
Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a
malicious user from flooding your endpoint with events.
When you use any of the three Azure services listed below, the Azure infrastructure automatically handles
this validation:
●● Azure Logic Apps with Event Grid Connector
●● Azure Automation via webhook
●● Azure Functions with Event Grid Trigger
If you're using any other type of endpoint, such as an HTTP trigger based Azure function, your endpoint
code needs to participate in a validation handshake with Event Grid. Event Grid supports two ways of
validating the subscription.
1. ValidationCode handshake (programmatic): If you control the source code for your endpoint, this
method is recommended. At the time of event subscription creation, Event Grid sends a subscription
validation event to your endpoint. The schema of this event is similar to any other Event Grid event.
The data portion of this event includes a validationCode property. Your application verifies that
the validation request is for an expected event subscription, and echoes the validation code to Event
Grid. This handshake mechanism is supported in all Event Grid versions. A minimal sketch of this
handshake appears after this list.
2. ValidationURL handshake (manual): In certain cases, you can't access the source code of the
endpoint to implement the ValidationCode handshake. For example, if you use a third-party service
(like Zapier1 or IFTTT2), you can't programmatically respond with the validation code.

1 https://zapier.com/
2 https://ifttt.com/
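
For the ValidationCode handshake described in option 1 above, the following is a minimal, illustrative
ASP.NET Core sketch (not part of the original course text); the route, controller name, and the exact
event handling shown here are assumptions for this example.
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/events")]
public class EventGridController : ControllerBase
{
    [HttpPost]
    public IActionResult ReceiveEvents([FromBody] JsonElement payload)
    {
        foreach (JsonElement evt in payload.EnumerateArray())
        {
            string eventType = evt.GetProperty("eventType").GetString();
            if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
            {
                // Echo the validation code back; Event Grid then starts delivering events.
                string code = evt.GetProperty("data").GetProperty("validationCode").GetString();
                return Ok(new { validationResponse = code });
            }
            // Handle other event types here.
        }
        return Ok();
    }
}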

Event subscription
To subscribe to an event, you must prove that you have access to the event source and handler. Proving
that you own a WebHook was covered in the preceding section. If you're using an event handler that isn't
a WebHook (such as an event hub or queue storage), you need write access to that resource. This
permissions check prevents an unauthorized user from sending events to your resource.
You must have the Microsoft.EventGrid/EventSubscriptions/Write permission on the resource that is
the event source. You need this permission because you're writing a new subscription at the scope of the
resource. The required resource differs based on whether you're subscribing to a system topic or custom
topic.

Topic Type      Description
System topics   Need permission to write a new event subscription at the scope of the resource publishing the event. The format of the resource is: /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{resource-provider}/{resource-type}/{resource-name}
Custom topics   Need permission to write a new event subscription at the scope of the event grid topic. The format of the resource is: /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.EventGrid/topics/{topic-name}

Custom topic publishing


Custom topics use either Shared Access Signature (SAS) or key authentication. SAS is recommended, but
key authentication provides simple programming and is compatible with many existing webhook publish-
ers.
You include the authentication value in the HTTP header. For SAS, use aeg-sas-token for the header
value. For key authentication, use aeg-sas-key for the header value.

Key authentication
Key authentication is the simplest form of authentication. Use the format: aeg-sas-key: <your key>
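As a hedged illustration (not part of the original text), the sketch below posts an event array to a custom
topic endpoint using key authentication; the endpoint, key, and payload values are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class PublishWithKeyAuth
{
    static async Task Main()
    {
        // Placeholders: use your topic's endpoint and one of its access keys.
        var topicEndpoint = "<your topic endpoint>";
        var topicKey = "<your key>";

        var payload = "[{ \"id\": \"1001\", \"eventType\": \"recordInserted\", " +
                      "\"subject\": \"myapp/vehicles/motorcycles\", " +
                      "\"eventTime\": \"" + DateTime.UtcNow.ToString("o") + "\", " +
                      "\"data\": { \"make\": \"Contoso\" }, \"dataVersion\": \"1.0\" }]";

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Post, topicEndpoint)
            {
                Content = new StringContent(payload, Encoding.UTF8, "application/json")
            };
            request.Headers.Add("aeg-sas-key", topicKey); // key authentication header

            var response = await client.SendAsync(request);
            Console.WriteLine(response.StatusCode);
        }
    }
}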

SAS tokens
SAS tokens for Event Grid include the resource, an expiration time, and a signature. The format of the SAS
token is: r={resource}&e={expiration}&s={signature}.
The resource is the path for the event grid topic to which you're sending events. For example, a valid
resource path is: https://<yourtopic>.<region>.eventgrid.azure.net/eventGrid/api/events
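As an illustration only, the following sketch shows one way to build a token in the format above from a
resource path, an expiration time, and a topic key; the class and method names are assumptions for this
example.
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public static class EventGridSas
{
    // Builds a token in the r={resource}&e={expiration}&s={signature} format described above.
    public static string BuildSharedAccessSignature(string resource, DateTime expirationUtc, string key)
    {
        string encodedResource = WebUtility.UrlEncode(resource);
        string encodedExpiration = WebUtility.UrlEncode(expirationUtc.ToString("M/d/yyyy h:mm:ss tt"));
        string unsignedSas = $"r={encodedResource}&e={encodedExpiration}";

        using (var hmac = new HMACSHA256(Convert.FromBase64String(key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedSas)));
            return $"{unsignedSas}&s={WebUtility.UrlEncode(signature)}";
        }
    }
}
The resulting token is then sent in the aeg-sas-token header when publishing events.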

Management Access Control


Azure Event Grid allows you to control the level of access given to different users to do various manage-
ment operations such as list event subscriptions, create new ones, and generate keys. Event Grid uses
Azure's role-based access control (RBAC).

Operation types
Event Grid supports the following actions:
●● Microsoft.EventGrid/*/read
●● Microsoft.EventGrid/*/write
●● Microsoft.EventGrid/*/delete
●● Microsoft.EventGrid/eventSubscriptions/getFullUrl/action
●● Microsoft.EventGrid/topics/listKeys/action
●● Microsoft.EventGrid/topics/regenerateKey/action
The last three operations return potentially secret information, which gets filtered out of normal read
operations. It's recommended that you restrict access to these operations.

Built-in roles
Event Grid provides two built-in roles for managing event subscriptions. They are important when
implementing event domains because they give users the permissions they need to subscribe to topics in
your event domain. These roles are focused on event subscriptions and don't grant access for actions
such as creating topics.
●● EventGrid EventSubscription Contributor: manage Event Grid subscription operations
●● EventGrid EventSubscription Reader: read Event Grid subscriptions

Event filtering for Event Grid subscriptions


When creating an event subscription, you have three options for filtering:
●● Event types
●● Subject begins with or ends with
●● Advanced fields and operators

Event type filtering


By default, all event types for the event source are sent to the endpoint. You can decide to send only
certain event types to your endpoint. For example, you can get notified of updates to your resources, but
not notified for other operations like deletions. In that case, filter by the Microsoft.Resources.
ResourceWriteSuccess event type. Provide an array with the event types, or specify All to get all
event types for the event source.
The JSON syntax for filtering by event type is:
"filter": {
  "includedEventTypes": [
    "Microsoft.Resources.ResourceWriteFailure",
    "Microsoft.Resources.ResourceWriteSuccess"
  ]
}

Subject filtering
For simple filtering by subject, specify a starting or ending value for the subject. For example, you can
specify the subject ends with .txt to get only events related to uploading a text file to a storage account.
Or, you can filter the subject begins with /blobServices/default/containers/testcontainer
to get all events for that container but not other containers in the storage account.
The JSON syntax for filtering by subject is:
"filter": {
  "subjectBeginsWith": "/blobServices/default/containers/mycontainer/log",
  "subjectEndsWith": ".jpg"
}

Advanced filtering
To filter by values in the data fields and specify the comparison operator, use the advanced filtering
option. In advanced filtering, you specify the:
●● operator type - The type of comparison.
●● key - The field in the event data that you're using for filtering. It can be a number, boolean, or string.
●● value or values - The value or values to compare to the key.
The JSON syntax for using advanced filters is:
"filter": {
  "advancedFilters": [
    {
      "operatorType": "NumberGreaterThanOrEquals",
      "key": "Data.Key1",
      "value": 5
    },
    {
      "operatorType": "StringContains",
      "key": "Subject",
      "values": ["container1", "container2"]
    }
  ]
}

Demo: Route custom events to a web endpoint by using the Azure CLI commands and Event Grid


In this demo you will learn how to:
●● Enable an Event Grid resource provider
●● Create a custom topic
●● Create a message endpoint
●● Subscribe to a custom topic
●● Send an event to a custom topic

Prerequisites
This demo is performed in the Cloud Shell.

Login to Azure
1. Log in to the Azure Portal: https://portal.azure.com and launch the Cloud Shell. Be sure to select
Bash as the shell.
2. Create a resource group, replace <myRegion> with a location that makes sense for you.
myLocation=<myRegion>
az group create -n az204-egdemo-rg -l $myLocation

Enable Event Grid resource provider


✔️ Note: This step is only needed on subscriptions that haven't previously used Event Grid.
1. Register the Event Grid resource provider by using the az provider register command.
az provider register --namespace Microsoft.EventGrid

It can take a few minutes for the registration to complete. To check the status, run the command
below.
az provider show --namespace Microsoft.EventGrid --query "registrationState"

Create a custom topic


1. Create a custom topic by using the az eventgrid topic create command. The script below
creates a unique topic name; the name must be unique because it's part of the DNS entry.
let rNum=$RANDOM*$RANDOM
myTopicName="az204-egtopic-${rNum}"
az eventgrid topic create --name $myTopicName -l $myLocation -g az204-egdemo-rg

Create a message endpoint


Before subscribing to the custom topic, we need to create the endpoint for the event message. Typically,
the endpoint takes actions based on the event data. The script below uses a pre-built web app that
displays the event messages. The deployed solution includes an App Service plan, an App Service web
app, and source code from GitHub. It also generates a unique name for the site.
1. Create a message endpoint. The deployment may take a few minutes to complete.
MCT USE ONLY. STUDENT USE PROHIBITED 282  Module 10 Develop event-based solutions  

mySiteName="az204-egsite-${rNum}"
mySiteURL="https://${mySiteName}.azurewebsites.net"
az group deployment create \
  -g az204-egdemo-rg \
  --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
  --parameters siteName=$mySiteName hostingPlanName=viewerhost
echo "Your web app URL: ${mySiteURL}"

2. Navigate to the URL generated at the end of the script above to ensure the web app is running. You
should see the site with no messages currently displayed.
✔️ Note: Leave the browser running; it is used to show updates.

Subscribe to a custom topic


You subscribe to an event grid topic to tell Event Grid which events you want to track and where to send
those events.
1. Subscribe to a custom topic by using the az eventgrid event-subscription create com-
mand. The script below will grab the needed subscription ID from your account and use it in the
creation of the event subscription.
endpoint="${mySiteURL}/api/updates"
subId=$(az account show --subscription "" | jq -r '.id')

az eventgrid event-subscription create \
  --source-resource-id "/subscriptions/$subId/resourceGroups/az204-egdemo-rg/providers/Microsoft.EventGrid/topics/$myTopicName" \
  --name demoViewerSub \
  --endpoint $endpoint

2. View your web app again, and notice that a subscription validation event has been sent to it. Select
the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can
verify that it wants to receive event data. The web app includes code to validate the subscription.

Send an event to your custom topic


Trigger an event to see how Event Grid distributes the message to your endpoint.
1. Retrieve URL and key for the custom topic.
endpoint=$(az eventgrid topic show --name $myTopicName -g az204-egdemo-rg --query "endpoint" --output tsv)
key=$(az eventgrid topic key list --name $myTopicName -g az204-egdemo-rg --query "key1" --output tsv)

2. Create event data to send. Typically, an application or Azure service would send the event data; we're
creating data for the purposes of the demo.
event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/motorcycles", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "make": "Contoso", "model": "Northwind"},"dataVersion": "1.0"} ]'

3. Use curl to send the event to the topic.


curl -X POST -H "aeg-sas-key: $key" -d "$event" $endpoint

4. View your web app to see the event you just sent. Select the eye icon to expand the event data. The
event data should be similar to the following:
{
  "id": "29078",
  "eventType": "recordInserted",
  "subject": "myapp/vehicles/motorcycles",
  "eventTime": "2019-12-02T22:23:03+00:00",
  "data": {
    "make": "Contoso",
    "model": "Northwind"
  },
  "dataVersion": "1.0",
  "metadataVersion": "1",
  "topic": "/subscriptions/{subscription-id}/resourceGroups/az204-egdemo-rg/providers/Microsoft.EventGrid/topics/az204-egtopic-589377852"
}

Implement solutions that use Azure Event Hubs


Azure Event Hubs overview
Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process
millions of events per second. Data sent to an event hub can be transformed and stored by using any
real-time analytics provider or batching/storage adapters.
Event Hubs represents the “front door” for an event pipeline, often called an event ingestor in solution
architectures. An event ingestor is a component or service that sits between event publishers and event
consumers to decouple the production of an event stream from the consumption of those events. Event
Hubs provides a unified streaming platform with a time-retention buffer, decoupling event producers from
event consumers.
The table below highlights key features of the Azure Event Hubs service:

Feature                          Description
Fully managed PaaS               Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. Event Hubs for Apache Kafka ecosystems gives you the PaaS Kafka experience without having to manage, configure, or run your clusters.
Real-time and batch processing   Event Hubs uses a partitioned consumer model, enabling multiple applications to process the stream concurrently and letting you control the speed of processing.
Scalable                         Scaling options, like Auto-inflate, scale the number of throughput units to meet your usage needs.
Rich ecosystem                   Event Hubs for Apache Kafka ecosystems enables Apache Kafka (1.0 and later) clients and applications to talk to Event Hubs. You do not need to set up, configure, and manage your own Kafka clusters.

Key architecture components


Event Hubs contains the following key components:
●● Event producers: Any entity that sends data to an event hub. Event publishers can publish events
using HTTPS or AMQP 1.0 or Apache Kafka (1.0 and above)
●● Partitions: Each consumer only reads a specific subset, or partition, of the message stream.
●● Consumer groups: A view (state, position, or offset) of an entire event hub. Consumer groups enable
consuming applications to each have a separate view of the event stream. They read the stream
independently at their own pace and with their own offsets.
●● Throughput units: Pre-purchased units of capacity that control the throughput capacity of Event
Hubs.
●● Event receivers: Any entity that reads event data from an event hub. All Event Hubs consumers
connect via the AMQP 1.0 session. The Event Hubs service delivers events through a session as they
become available. All Kafka consumers connect via the Kafka protocol 1.0 and later.

Capture events through Azure Event Hubs


Azure Event Hubs enables you to automatically capture the streaming data in Event Hubs in an Azure
Blob storage or Azure Data Lake Storage account of your choice, with the added flexibility of specifying a
time or size interval. Setting up Capture is fast, there are no administrative costs to run it, and it scales
automatically with Event Hubs throughput units.

How Event Hubs Capture works


Event Hubs is a time-retention durable buffer for telemetry ingress, similar to a distributed log. The key to
scaling in Event Hubs is the partitioned consumer model. Each partition is an independent segment of
data and is consumed independently. Over time this data ages off, based on the configurable retention
period. As a result, a given event hub never gets “too full.”
Event Hubs Capture enables you to specify your own Azure Blob storage account and container, or Azure
Data Lake Store account, which are used to store the captured data. These accounts can be in the same
region as your event hub or in another region, adding to the flexibility of the Event Hubs Capture feature.
Captured data is written in Apache Avro format: a compact, fast, binary format that provides rich data
structures with inline schema. This format is widely used in the Hadoop ecosystem, Stream Analytics, and
Azure Data Factory. More information about working with Avro is available later in this article.

Capture windowing
Event Hubs Capture enables you to set up a window to control capturing. This window is a minimum size
and time configuration with a “first wins policy,” meaning that the first trigger encountered causes a
capture operation. Each partition captures independently and writes a completed block blob at the time
of capture, named for the time at which the capture interval was encountered. The storage naming
convention is as follows:
{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/
{Second}

Note that the date values are padded with zeroes; an example filename might be:
https://mystorageaccount.blob.core.windows.net/mycontainer/mynamespace/
myeventhub/0/2017/12/08/03/03/17.avro

Scaling to throughput units


Event Hubs traffic is controlled by throughput units. A single throughput unit allows 1 MB per second or
1000 events per second of ingress and twice that amount of egress. Standard Event Hubs can be config-
ured with 1-20 throughput units, and you can purchase more with a quota increase support request.
Usage beyond your purchased throughput units is throttled. Event Hubs Capture copies data directly
from the internal Event Hubs storage, bypassing throughput unit egress quotas and saving your egress
for other processing readers, such as Stream Analytics or Spark.

Once configured, Event Hubs Capture runs automatically when you send your first event, and continues
running. To make it easier for your downstream processing to know that the process is working, Event
Hubs writes empty files when there is no data. This process provides a predictable cadence and marker
that can feed your batch processors.

Use Azure Event Hubs from Apache Kafka applications


Event Hubs provides a Kafka endpoint that can be used by your existing Kafka based applications as an
alternative to running your own Kafka cluster. Event Hubs supports Apache Kafka protocol 1.0 and later,
and works with your existing Kafka applications, including MirrorMaker.
You may start using the Kafka endpoint from your applications with no code change but a minimal
configuration change. You update the connection string in configurations to point to the Kafka endpoint
exposed by your event hub instead of pointing to your Kafka cluster. Then, you can start streaming
events from your applications that use the Kafka protocol into Event Hubs.
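As an assumption-based illustration (using the Confluent.Kafka .NET client, which is not part of this
course), the sketch below shows that only the connection settings change when pointing an existing
Kafka producer at an Event Hubs namespace; the namespace, connection string, and event hub name
are placeholders.
using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class KafkaToEventHubs
{
    static async Task Main()
    {
        var config = new ProducerConfig
        {
            // Placeholders: your Event Hubs namespace FQDN and its connection string.
            BootstrapServers = "<your-namespace>.servicebus.windows.net:9093",
            SecurityProtocol = SecurityProtocol.SaslSsl,
            SaslMechanism = SaslMechanism.Plain,
            SaslUsername = "$ConnectionString",
            SaslPassword = "<your Event Hubs namespace connection string>"
        };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            // The Kafka "topic" maps to an event hub within the namespace.
            await producer.ProduceAsync("myeventhub", new Message<Null, string> { Value = "Hello Event Hubs" });
        }
    }
}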

Kafka and Event Hub conceptual mapping


Kafka Concept    Event Hubs Concept
Cluster          Namespace
Topic            Event Hub
Partition        Partition
Consumer Group   Consumer Group
Offset           Offset

Key differences between Kafka and Event Hubs


While Apache Kafka is software, which you can run wherever you choose, Event Hubs is a cloud service
similar to Azure Blob Storage. There are no servers or networks to manage and no brokers to configure.
You create a namespace, which is an FQDN in which your topics live, and then create Event Hubs or topics
within that namespace. For more information about Event Hubs and namespaces, see Event Hubs
features. As a cloud service, Event Hubs uses a single stable virtual IP address as the endpoint, so clients
do not need to know about the brokers or machines within a cluster.

Event Hubs Dedicated overview


Event Hubs clusters offer single-tenant deployments for customers with the most demanding streaming
needs. This single-tenant offering has a guaranteed 99.99% SLA and is available only on our Dedicated
pricing tier. An Event Hubs cluster can ingress millions of events per second with guaranteed capacity and
sub-second latency.
Namespaces and event hubs created within the Dedicated cluster include all features of the Standard
offering and more, but without any ingress limits. It also includes the popular Event Hubs Capture feature
at no additional cost, allowing you to automatically batch and log data streams to Azure Storage or Azure
Data Lake.
Clusters are provisioned and billed by Capacity Units (CUs), a pre-allocated amount of CPU and memory
resources. You can purchase 1, 2, 4, 8, 12, 16 or 20 CUs for each cluster. How much you can ingest and
stream per CU depends on a variety of factors, such as the number of producers and consumers, payload
shape, and egress rate.

✔️ Note: All Event Hubs clusters are Kafka-enabled by default and support Kafka endpoints that can be
used by your existing Kafka based applications. Having Kafka enabled on your cluster does not affect
your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.

Dedicated Event Hubs benefits


Dedicated Event Hubs offers three compelling benefits for customers who need enterprise-level capacity:
●● Single-tenancy: A Dedicated cluster guarantees capacity at full scale, and can ingress up to gigabytes
of streaming data with fully durable storage and sub-second latency to accommodate any burst in
traffic.
●● Access to features: The Dedicated offering includes features like Capture at no additional cost, as
well as exclusive access to upcoming features like Bring Your Own Key (BYOK). The service also
manages load balancing, OS updates, security patches and partitioning for the customer, so that you
can spend less time on infrastructure maintenance and more time on building client-side features.
●● Cost Savings: At high ingress volumes (>100 TUs), a cluster costs significantly less per hour than
purchasing a comparable quantity of throughput units in the Standard offering.

Azure Event Hubs authentication and security model


The Azure Event Hubs security model meets the following requirements:
●● Only clients that present valid credentials can send data to an event hub.
●● A client cannot impersonate another client.
●● A rogue client can be blocked from sending data to an event hub.

Client authentication
The Event Hubs security model is based on a combination of Shared Access Signature (SAS) tokens and
event publishers. An event publisher defines a virtual endpoint for an event hub. The publisher can only
be used to send messages to an event hub. It is not possible to receive messages from a publisher.
Typically, an event hub employs one publisher per client. All messages that are sent to any of the publish-
ers of an event hub are enqueued within that event hub. Publishers enable fine-grained access control
and throttling.
Each Event Hubs client is assigned a unique token, which is uploaded to the client. The tokens are
produced such that each unique token grants access to a different unique publisher. A client that pos-
sesses a token can only send to one publisher, but no other publisher. If multiple clients share the same
token, then each of them shares a publisher.
All tokens are signed with a SAS key. Typically, all tokens are signed with the same key. Clients are not
aware of the key; this prevents other clients from manufacturing tokens.

Create the SAS key


When creating an Event Hubs namespace, the service automatically generates a 256-bit SAS key named
RootManageSharedAccessKey. This rule has an associated pair of primary and secondary keys that
grant send, listen, and manage rights to the namespace. You can also create additional keys. It is recom-
mended that you produce a key that grants send permissions to the specific event hub. For the remain-
der of this topic, it is assumed that you named this key EventHubSendKey.
The following example creates a send-only key when creating the event hub:
// Create namespace manager.
string serviceNamespace = "YOUR_NAMESPACE";
string namespaceManageKeyName = "RootManageSharedAccessKey";
string namespaceManageKey = "YOUR_ROOT_MANAGE_SHARED_ACCESS_KEY";
Uri namespaceUri = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty);
TokenProvider namespaceManageTokenProvider =
    TokenProvider.CreateSharedAccessSignatureTokenProvider(namespaceManageKeyName, namespaceManageKey);
NamespaceManager nm = new NamespaceManager(namespaceUri, namespaceManageTokenProvider);

// Create event hub with a SAS rule that enables sending to that event hub
EventHubDescription ed = new EventHubDescription("MY_EVENT_HUB") { PartitionCount = 32 };
string eventHubSendKeyName = "EventHubSendKey";
string eventHubSendKey = SharedAccessAuthorizationRule.GenerateRandomKey();
SharedAccessAuthorizationRule eventHubSendRule =
    new SharedAccessAuthorizationRule(eventHubSendKeyName, eventHubSendKey, new[] { AccessRights.Send });
ed.Authorization.Add(eventHubSendRule);
nm.CreateEventHub(ed);

Generate tokens
You can generate tokens using the SAS key. You must produce only one token per client. Tokens can then
be produced using the following method. All tokens are generated using the EventHubSendKey key.
The ‘resource’ parameter corresponds to the URI endpoint of the service (event hub in this case).
public static string SharedAccessSignatureTokenProvider.GetSharedAccessSignature(string keyName,
string sharedAccessKey, string resource, TimeSpan tokenTimeToLive)

When calling this method, the URI should be specified as
//<NAMESPACE>.servicebus.windows.net/<EVENT_HUB_NAME>/publishers/<PUBLISHER_NAME>. For all
tokens, the URI is identical, with the exception of PUBLISHER_NAME, which should be different for each
token. Ideally, PUBLISHER_NAME represents the ID of the client that receives that token.
This method generates a token with the following structure:
SharedAccessSignature sr={URI}&sig={HMAC_SHA256_SIGNATURE}&se={EXPIRATION_TIME}&skn={KEY_
NAME}

The token expiration time is specified in seconds from Jan 1, 1970. Typically, the tokens have a lifespan
that resembles or exceeds the lifespan of the client. If the client has the capability to obtain a new token,
tokens with a shorter lifespan can be used.
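For illustration, a call to the method above might look like the following sketch; the publisher name and
lifespan are placeholder values.
// Generate a per-client token signed with the EventHubSendKey created earlier.
string token = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
    "EventHubSendKey",
    eventHubSendKey,
    "//YOUR_NAMESPACE.servicebus.windows.net/MY_EVENT_HUB/publishers/device-001",
    TimeSpan.FromDays(30));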

Sending data
Once the tokens have been created, each client is provisioned with its own unique token.

When the client sends data into an event hub, it tags its send request with the token. To prevent an
attacker from eavesdropping and stealing the token, the communication between the client and the
event hub must occur over an encrypted channel.

Blacklisting clients
If a token is stolen by an attacker, the attacker can impersonate the client whose token has been stolen.
Blacklisting a client renders that client unusable until it receives a new token that uses a different publish-
er.

Authentication of back-end applications


To authenticate back-end applications that consume the data generated by Event Hubs clients, Event
Hubs employs a security model that is similar to the model that is used for Service Bus topics. An Event
Hubs consumer group is equivalent to a subscription to a Service Bus topic. A client can create a consum-
er group if the request to create the consumer group is accompanied by a token that grants manage
privileges for the event hub, or for the namespace to which the event hub belongs. A client is allowed to
consume data from a consumer group if the receive request is accompanied by a token that grants
receive rights on that consumer group, the event hub, or the namespace to which the event hub belongs.
The current version of Service Bus does not support SAS rules for individual subscriptions. The same
holds true for Event Hubs consumer groups. SAS support will be added for both features in the future.
In the absence of SAS authentication for individual consumer groups, you can use SAS keys to secure all
consumer groups with a common key. This approach enables an application to consume data from any of
the consumer groups of an event hub.

.NET Programming guide for Azure Event Hubs


This section of the lesson discusses some common scenarios in writing code using Azure Event Hubs.

Event publishers
You send events to an event hub either using HTTP POST or via an AMQP 1.0 connection. The choice of
which to use and when depends on the specific scenario being addressed. AMQP 1.0 connections are
metered as brokered connections in Service Bus and are more appropriate in scenarios with frequent
higher message volumes and lower latency requirements, as they provide a persistent messaging chan-
nel.
When using the .NET managed APIs, the primary constructs for publishing data to Event Hubs are the
EventHubClient and EventData classes. EventHubClient provides the AMQP communication
channel over which events are sent to the event hub. The EventData class represents an event, and is
used to publish messages to an event hub. This class includes the body, some metadata, and header
information about the event. Other properties are added to the EventData object as it passes through
an event hub.
The .NET classes that support Event Hubs are provided in the Microsoft.Azure.EventHubs NuGet
package.

Creating an Event Hubs client


The primary class for interacting with Event Hubs is Microsoft.Azure.EventHubs.EventHubCli-
ent. You can instantiate this class using the CreateFromConnectionString method, as shown in the
following example:
private const string EventHubConnectionString = "Event Hubs namespace connection string";
private const string EventHubName = "event hub name";

var connectionStringBuilder = new EventHubsConnectionStringBuilder(EventHubConnectionString)
{
    EntityPath = EventHubName
};
eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());

Sending events to an event hub


You send events to an event hub by creating an EventHubClient instance and sending it asynchro-
nously via the SendAsync method. This method takes a single EventData instance parameter and
asynchronously sends it to an event hub.

Event serialization
The EventData class has two overloaded constructors that take a variety of parameters, bytes or a byte
array, that represent the event data payload. When using JSON with EventData, you can use Encod-
ing.UTF8.GetBytes() to retrieve the byte array for a JSON-encoded string. For example:
for (var i = 0; i < numMessagesToSend; i++)
{
    var message = $"Message {i}";
    Console.WriteLine($"Sending message: {message}");
    await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
}

Partition key
When sending event data, you can specify a value that is hashed to produce a partition assignment. You
specify the partition using the PartitionSender.PartitionID property. However, the decision to
use partitions implies a choice between availability and consistency.

Availability considerations
Using a partition key is optional, and you should consider carefully whether or not to use one. If you
don't specify a partition key when publishing an event, a round-robin assignment is used. In many cases,
using a partition key is a good choice if event ordering is important. When you use a partition key, these
partitions require availability on a single node, and outages can occur over time.

Another consideration is handling delays in processing events. In some cases, it might be better to drop
data and retry than to try to keep up with processing, which can potentially cause further downstream
processing delays.
Given these availability considerations, in these scenarios you might choose one of the following error
handling strategies:
●● Stop (stop reading from Event Hubs until things are fixed)
●● Drop (messages aren’t important, drop them)
●● Retry (retry the messages as you see fit)

Batch event send operations


Sending events in batches can help increase throughput. You can use the CreateBatch API to create a
batch to which data objects can later be added for a SendAsync call.
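A minimal sketch of that pattern is shown below, reusing the eventHubClient created earlier in this
topic; the message text and count are placeholders.
// Create a batch and add events until it is full, then send and start a new batch.
EventDataBatch batch = eventHubClient.CreateBatch();

for (var i = 0; i < numMessagesToSend; i++)
{
    var eventData = new EventData(Encoding.UTF8.GetBytes($"Batched message {i}"));

    if (!batch.TryAdd(eventData))
    {
        // The batch is full: send it and start a new one containing the current event.
        await eventHubClient.SendAsync(batch);
        batch = eventHubClient.CreateBatch();
        batch.TryAdd(eventData);
    }
}

if (batch.Count > 0)
{
    await eventHubClient.SendAsync(batch);
}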

Send asynchronously and send at scale


You send events to an event hub asynchronously. Sending asynchronously increases the rate at which a
client is able to send events. SendAsync returns a Task object. You can use the RetryPolicy class on
the client to control client retry options.

Event consumers
The EventProcessorHost class processes data from Event Hubs. You should use this implementation
when building event readers on the .NET platform. EventProcessorHost provides a thread-safe,
multi-process, safe runtime environment for event processor implementations that also provides check-
pointing and partition lease management.
To use the EventProcessorHost class, you can implement IEventProcessor. This interface contains
four methods:
●● OpenAsync
●● CloseAsync
●● ProcessEventsAsync
●● ProcessErrorAsync
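A minimal, illustrative implementation might look like the sketch below; the class name matches the
SimpleEventProcessor used in the registration example later in this topic, and the console logging is
placeholder behavior.
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

public class SimpleEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"Processor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor shutting down. Partition: '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }

    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on partition: '{context.PartitionId}', Error: {error.Message}");
        return Task.CompletedTask;
    }

    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }

        // Checkpoint so progress for this partition is recorded in the storage account.
        return context.CheckpointAsync();
    }
}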
To start event processing, instantiate EventProcessorHost, providing the appropriate parameters for
your event hub. For example:
var eventProcessorHost = new EventProcessorHost(
    EventHubName,
    PartitionReceiver.DefaultConsumerGroupName,
    EventHubConnectionString,
    StorageConnectionString,
    StorageContainerName);

Then, call RegisterEventProcessorAsync to register your IEventProcessor implementation with the
runtime:
await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>();

At this point, the host attempts to acquire a lease on every partition in the event hub using a “greedy”
algorithm. These leases last for a given timeframe and must then be renewed. As new nodes, worker
instances in this case, come online, they place lease reservations and over time the load shifts between
nodes as each attempts to acquire more leases.

Publisher revocation
In addition to the advanced run-time features of EventProcessorHost, Event Hubs enables publisher
revocation in order to block specific publishers from sending events to an event hub. These features are
useful if a publisher token has been compromised, or a software update is causing them to behave
inappropriately. In these situations, the publisher's identity, which is part of their SAS token, can be
blocked from publishing events.

Implement solutions that use Azure Notification Hubs


Azure Notification Hubs overview
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send
notifications to any platform (iOS, Android, Windows, Kindle, Baidu, etc.) from any backend (cloud or
on-premises).

What are push notifications?


Push notifications are a form of app-to-user communication where users of mobile apps are notified of
certain desired information, usually in a pop-up or dialog box. Users can generally choose to view or
dismiss the message. Choosing the former opens the mobile application that communicated the notifica-
tion.
Push notifications are vital for consumer apps in increasing app engagement and usage, and for enter-
prise apps in communicating up-to-date business information. It's the best app-to-user communication
because it is energy-efficient for mobile devices, flexible for the notifications senders, and available when
corresponding applications are not active.

How push notifications work


Push notifications are delivered through platform-specific infrastructures called Platform Notification
Systems (PNSes). They offer barebone push functionalities to deliver a message to a device with a
provided handle, and have no common interface. To send a notification to all customers across the iOS,
Android, and Windows versions of an app, the developer must work with Apple Push Notification Ser-
vice(APNS), Firebase Cloud Messaging(FCM), and Windows Notification Service(WNS).
At a high level, here is how push works:
1. The client app decides it wants to receive notifications. Hence, it contacts the corresponding PNS to
retrieve its unique and temporary push handle. The handle type depends on the system (for example,
WNS has URIs while APNS has tokens).
2. The client app stores this handle in the app back-end or provider.
3. To send a push notification, the app back-end contacts the PNS using the handle to target a specific
client app.
4. The PNS forwards the notification to the device specified by the handle.

The challenges of push notifications


PNSes are powerful. However, they leave much work to the app developer to implement even common
push notification scenarios, such as broadcasting push notifications to segmented users.
Pushing notifications requires complex infrastructure that is unrelated to the application's main business
logic. Some of the infrastructural challenges are:
●● Platform dependency
●● The backend needs to have complex and hard-to-maintain platform-dependent logic to send
notifications to devices on various platforms as PNSes are not unified.
●● Scale
●● Per PNS guidelines, device tokens must be refreshed upon every app launch. The backend is
dealing with a large amount of traffic and database access just to keep the tokens up-to-date.
When the number of devices grows to hundreds and thousands of millions, the cost of creating
and maintaining this infrastructure is massive.
●● Most PNSes do not support broadcast to multiple devices. A simple broadcast to a million devices
results in a million calls to the PNSes. Scaling this amount of traffic with minimal latency is nontriv-
ial.
●● Routing
●● Though PNSes provide a way to send messages to devices, most apps notifications are targeted at
users or interest groups. The backend must maintain a registry to associate devices with interest
groups, users, properties, etc. This overhead adds to the time to market and maintenance costs of
an app.

Azure Notification Hubs security model


This topic describes the security model of Azure Notification Hubs.

Shared Access Signature Security (SAS)


Notification Hubs implements an entity-level security scheme called a Shared Access Signature (SAS).
Each rule contains a name, a key value (shared secret), and a set of rights.
When creating a hub, two rules are automatically created: one with Listen rights (that the client app uses)
and one with all rights (that the app backend uses):
●● DefaultListenSharedAccessSignature: grants Listen permission only.
●● DefaultFullSharedAccessSignature: grants Listen, Manage, and Send permissions. This policy is to
be used only in your app backend. Do not use it in client applications; use a policy with only Listen
access.
Apps should not embed the key value in client apps; instead, have the client app retrieve it from the app
backend at startup.

Security claims
Similar to other entities, Notification Hub operations are allowed for three security claims:
●● Listen: Create/Update, Read, and Delete single registrations
●● Send: Send messages to the Notification Hub
●● Manage: CRUDs on Notification Hubs (including updating PNS credentials, and security keys), and
read registrations based on tags
Notification Hubs accepts claims granted by Microsoft Azure Access Control tokens, and SAS tokens
generated with shared keys configured directly on the hub.
It is not possible to send a notification to more than one namespace. Namespaces are logical containers
for Notification Hubs and are not involved in sending notifications.

Registration management
This topic describes registrations at a high level, then introduces the two main patterns for registering
devices: registering from the device directly to the notification hub, and registering through an applica-
tion backend.

Device registration
Device registration with a Notification Hub is accomplished using a Registration or Installation.

Registrations
A registration associates the Platform Notification Service (PNS) handle for a device with tags and
possibly a template. The PNS handle could be a ChannelURI, device token, or FCM registration id. Tags
are used to route notifications to the correct set of device handles. For more information, see Routing
and Tag Expressions. Templates are used to implement per-registration transformation.
Note: Azure Notification Hubs supports a maximum of 60 tags per registration.

Installations
An Installation is an enhanced registration that includes a bag of push-related properties. It is the latest and best approach to registering your devices. However, it is not yet supported by the client-side .NET SDK. This means that if you are registering from the client device itself, you have to use the Notification Hubs REST API approach to support installations. If you are using a backend service, you can use the Notification Hub SDK for backend operations.
The following are some key advantages to using installations:
●● Creating or updating an installation is fully idempotent. So you can retry it without any concerns
about duplicate registrations.
●● The installation model supports a special tag format ($InstallationId:{INSTALLATION_ID})
that enables sending a notification directly to the specific device.
●● Using installations also enables you to do partial registration updates. The partial update of an
installation is requested with a PATCH method using the JSON-Patch standard.
Registrations and installations must contain a valid PNS handle for each device/channel. Because PNS
handles can only be obtained in a client app on the device, one pattern is to register directly on that
device with the client app. On the other hand, security considerations and business logic related to tags
might require you to manage device registration in the app back-end.
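
As a minimal sketch of the backend-registration pattern, the following C# snippet uses the Microsoft.Azure.NotificationHubs backend SDK to create or update an installation. The installation id, PNS handle, platform, and tag values are illustrative assumptions only; your app would supply its own.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

class InstallationRegistrar
{
    // Registers (or idempotently re-registers) a device from the app backend.
    // The hub client, installation id, and PNS handle are supplied by the caller.
    public static async Task RegisterDeviceAsync(
        NotificationHubClient hub, string installationId, string pnsHandle)
    {
        var installation = new Installation
        {
            InstallationId = installationId,        // stable, app-generated id
            PushChannel = pnsHandle,                // device token obtained from the PNS
            Platform = NotificationPlatform.Apns,   // choose the platform for this device
            Tags = new List<string> { "follows_HomeTeam", "location_Anytown" }
        };

        // Creating or updating an installation is fully idempotent,
        // so it is safe to retry without creating duplicate registrations.
        await hub.CreateOrUpdateInstallationAsync(installation);
    }
}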

Templates
If you want to use Templates, the device installation also holds all templates associated with that device in
a JSON format. The template names help target different templates for the same device.
Each template name maps to a template body and an optional set of tags. Moreover, each platform can
have additional template properties. For Windows Store (using WNS) and Windows Phone 8 (using
MPNS), an additional set of headers can be part of the template. In the case of APNs, you can set an
expiry property to either a constant or to a template expression.

Registration management from the device


When managing device registration from client apps, the backend is only responsible for sending
notifications. Client apps keep PNS handles up-to-date, and register tags. This pattern works as follows.

The device first retrieves the PNS handle from the PNS, then registers with the notification hub directly.
After the registration is successful, the app backend can send a notification targeting that registration.
We'll provide more information about how to send notifications in the next topic in the lesson.
In this case, you use only Listen rights to access your notification hubs from the device.

Registration management from a backend


Managing registrations from the backend requires writing additional code. The app on the device must
provide the updated PNS handle to the backend every time the app starts (along with tags and tem-
plates), and the backend must update this handle on the notification hub.
The advantages of managing registrations from the backend include the ability to modify tags to regis-
trations even when the corresponding app on the device is inactive, and to authenticate the client app
before adding a tag to its registration.

Using templates for registrations


Templates enable a client application to specify the exact format of the notifications it wants to receive.
Using templates, an app can realize several different benefits, including the following:
●● A platform-agnostic backend
●● Personalized notifications
●● Client-version independence
●● Easy localization

Using templates cross-platform


The standard way to send push notifications is to send a platform-specific payload to each platform notification service (WNS, APNS, and so on) for every notification. For example, to send an alert to APNS, the
payload is a JSON object of the following form:
{"aps": {"alert" : "Hello!" }}

To send a similar toast message on a Windows Store application, the XML payload is as follows:
<toast>
  <visual>
    <binding template="ToastText01">
      <text id="1">Hello!</text>
    </binding>
  </visual>
</toast>

You can create similar payloads for MPNS (Windows Phone) and FCM (Android) platforms.
This requirement forces the app backend to produce different payloads for each platform, and effectively
makes the backend responsible for part of the presentation layer of the app. Some concerns include
localization and graphical layouts (especially for Windows Store apps that include notifications for various
types of tiles).
The Notification Hubs template feature enables a client app to create special registrations, called template registrations, which include a template in addition to the set of tags. You can associate devices with templates whether you are working with Installations (preferred) or Registrations. Given the preceding payload examples, the only platform-independent information is the actual alert message (Hello!). A template is a set of instructions for the
Notification Hub on how to format a platform-independent message for the registration of that specific
client app. In the preceding example, the platform-independent message is a single property: message
= Hello!.
The template for the iOS client app registration is as follows:
{"aps": {"alert": "$(message)"}}

The corresponding template for the Windows Store client app is:
<toast>
  <visual>
    <binding template="ToastText01">
      <text id="1">$(message)</text>
    </binding>
  </visual>
</toast>

Notice that the actual message is substituted for the expression $(message). This expression instructs the Notification Hub, whenever it sends a message to this particular registration, to build a message that follows the template, substituting in the common value.
If you are working with the Installation model, the installation “templates” key holds a JSON of multiple templates. If you are working with the Registration model, the client application can create multiple registrations in order to use multiple templates; for example, a template for alert messages and a template for
tile updates. Client applications can also mix native registrations (registrations with no template) and
template registrations.
The Notification Hub sends one notification for each template without considering whether they belong
to the same client app. This behavior can be used to translate platform-independent notifications into
more notifications. For example, the same platform-independent message to the Notification Hub can be
seamlessly translated into a toast alert and a tile update, without requiring the backend to be aware of it.
Some platforms (for example, iOS) might collapse multiple notifications to the same device if they are
sent in a short period of time.
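
To sketch what the backend side of this might look like, the following C# snippet (Microsoft.Azure.NotificationHubs backend SDK) sends a single platform-independent property that the hub expands into each device's registered template. The property name message matches the preceding examples, and the hub client is assumed to be created elsewhere.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

class CrossPlatformSender
{
    // Sends one platform-independent message; Notification Hubs fills the
    // $(message) placeholder in each registered template (APNS, WNS, and so on).
    public static Task BroadcastAsync(NotificationHubClient hub)
    {
        var properties = new Dictionary<string, string>
        {
            { "message", "Hello!" }
        };

        return hub.SendTemplateNotificationAsync(properties);
    }
}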

Template expression language


Templates are limited to XML or JSON document formats. Also, you can only place expressions in particu-
lar places; for example, node attributes or values for XML, string property values for JSON.
The following list shows the expressions allowed in templates:
●● $(prop): Reference to an event property with the given name. Property names are not case-sensitive. This expression resolves into the property's text value or into an empty string if the property is not present.
●● $(prop, n): As above, but the text is explicitly clipped at n characters; for example, $(title, 20) clips the contents of the title property at 20 characters.
●● .(prop, n): As above, but the text is suffixed with three dots as it is clipped. The total size of the clipped string and the suffix does not exceed n characters. .(title, 20) with an input property of “This is the title line” results in This is the title...
●● %(prop): Similar to $(prop), except that the output is URI-encoded.
●● #(prop): Used in JSON templates (for example, for iOS and Android templates). This function works exactly the same as $(prop) described previously, except when used in JSON templates (for example, Apple templates). In this case, if this function is not surrounded by curly brackets (for example, ‘myJsonProperty’ : ‘#(name)’), and it evaluates to a number in JavaScript format, for example, regexp: (0|([1-9][0-9]*))(.[0-9]+)?((e|E)(+|-)?[0-9]+)?, then the output JSON is a number. For example, ‘badge’ : ‘#(name)’ becomes ‘badge’ : 40 (and not ‘40’).
●● ‘text’ or “text”: A literal. Literals contain arbitrary text enclosed in single or double quotes.
●● expr1 + expr2: The concatenation operator joining two expressions into a single string. The expressions can be any of the preceding forms.
When using concatenation, the entire expression must be surrounded with {}. For example, {$(prop)
+ ‘ - ’ + $(prop2)}.
For example, the following is not a valid template, because the expression is placed where an attribute is expected rather than in an attribute value or node value:
<tile>
  <visual>
    <binding $(property)>
      <text id="1">Seattle, WA</text>
    </binding>
  </visual>
</tile>

As explained earlier, when using concatenation, expressions must be wrapped in curly brackets. For
example:
<tile>
  <visual>
    <binding template="ToastText01">
      <text id="1">{'Hi, ' + $(name)}</text>
    </binding>
  </visual>
</tile>

Next
●● In the next topic we'll be covering routing and tag expressions.

Routing and tag expressions


Tag expressions enable you to target specific sets of devices, or more specifically registrations, when
sending a push notification through Notification Hubs.

Targeting specific registrations


The only way to target specific notification registrations is to associate tags with them, then target those
tags. As discussed in Registration Management, in order to receive push notifications an app has to
register a device handle on a notification hub. Once a registration is created on a notification hub, the
application backend can send push notifications to it. The application backend can choose the registra-
tions to target with a specific notification in the following ways:
1. Broadcast: all registrations in the notification hub receive the notification.
2. Tag: all registrations that contain the specified tag receive the notification.
3. Tag expression: all registrations whose set of tags match the specified expression receive the notifi-
cation.

Tags
A tag can be any string, up to 120 characters, containing alphanumeric and the following non-alphanu-
meric characters: ‘_’, ‘@’, ‘#’, ‘.’, ‘:’, ‘-’. The following example shows an application from which you can
receive toast notifications about specific music groups. In this scenario, a simple way to route notifica-
tions is to label registrations with tags that represent the different artists.
You can send notifications to tags using the send notifications methods of the Microsoft.Azure.
NotificationHubs.NotificationHubClient class in the Microsoft Azure Notification Hubs SDK. You
can also use Node.js, or the Push Notifications REST APIs.
Tags do not have to be pre-provisioned and can refer to multiple app-specific concepts. For example,
users of this example application can comment on bands and want to receive toasts, not only for the
comments on their favorite bands, but also for all comments from their friends, regardless of the band on
which they are commenting.

Tag expressions
There are cases in which a notification has to target a set of registrations that is identified not by a single
tag, but by a Boolean expression on tags.
Consider a sports application that sends a reminder to everyone in Anytown about a game between the
HomeTeam and VisitingTeam. If the client app registers tags about interest in teams and location, then
the notification should be targeted to everyone in Anytown who is interested in either the HomeTeam or
the VisitingTeam. This condition can be expressed with the following Boolean expression:
(follows_HomeTeam || follows_VisitingTeam) && location_Anytown

Tag expressions can contain all Boolean operators, such as AND (&&), OR (||), and NOT (!). They can also
contain parentheses. Tag expressions are limited to 20 tags if they contain only ORs; otherwise they are
limited to 6 tags.
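
For illustration only, the following C# sketch (Microsoft.Azure.NotificationHubs backend SDK) sends a template notification using the tag expression above; the tag names and message text mirror the example and assume the devices registered a $(message) template.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

class GameReminderSender
{
    // Targets everyone in Anytown who follows either team.
    public static Task SendGameReminderAsync(NotificationHubClient hub)
    {
        var properties = new Dictionary<string, string>
        {
            { "message", "HomeTeam vs. VisitingTeam starts in one hour!" }
        };

        return hub.SendTemplateNotificationAsync(
            properties,
            "(follows_HomeTeam || follows_VisitingTeam) && location_Anytown");
    }
}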

Lab and review questions


Lab: Publishing and subscribing to Event Grid
events

Lab scenario
Your company builds a human resources (HR) system used by various customers around the world. While
the system works fine today, your development managers have decided to begin re-architecting the solu-
tion by decoupling application components. This decision was driven by a desire to make any future
development simpler through modularity. As the developer who manages component communication,
you have decided to introduce Microsoft Azure Event Grid as your solution-wide messaging platform.

Objectives
After you complete this lab, you will be able to:
●● Create an Event Grid topic.
●● Use the Azure Event Grid viewer to subscribe to a topic and illustrate published messages.
●● Publish a message from a Microsoft .NET application.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 10 review questions


Review Question 1
Regarding Azure Event Grid, which of the below represents the smallest amount of information that fully
describes something happening in the system?
†† Event subscription
†† Topics
†† Event handlers
†† Events

Review Question 2
Azure Event Grid has three types of authentication. Select the valid types of authentication from the list
below.
†† Custom topic publishing
†† Event Grid login
†† Event subscription
†† WebHook event delivery

Review Question 3
Azure Event Hubs is a big data streaming platform and event ingestion service. What type of service is it?
†† IaaS
†† SaaS
†† PaaS

Review Question 4
In .NET programming which of the below is the primary class for interacting with Event Hubs?
†† Microsoft.Azure.EventHubs.EventHubClient
†† Microsoft.Azure.EventHubs
†† Microsoft.Azure.Events
†† Microsoft.Azure.EventHubClient

Review Question 5
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send
notifications to any platform.
In the list below, which two methods accomplish a device registration?
†† Registration
†† Service authentication
†† Installation
†† Broadcast

Answers
Review Question 1
Regarding Azure Event Grid, which of the below represents the smallest amount of information that fully
describes something happening in the system?
†† Event subscription
†† Topics
†† Event handlers
■■ Events
Explanation
An event is the smallest amount of information that fully describes something that happened in the system.
Every event has common information like: source of the event, time the event took place, and unique
identifier.
Review Question 2
Azure Event Grid has three types of authentication. Select the valid types of authentication from the list
below.
■■ Custom topic publishing
†† Event Grid login
■■ Event subscription
■■ WebHook event delivery
Explanation
Azure Event Grid has three types of authentication: custom topic publishing, event subscriptions, and WebHook event delivery.
Review Question 3
Azure Event Hubs is a big data streaming platform and event ingestion service. What type of service is it?
†† IaaS
†† SaaS
■■ PaaS
Explanation
Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management
overhead.
Review Question 4
In .NET programming which of the below is the primary class for interacting with Event Hubs?
■■ Microsoft.Azure.EventHubs.EventHubClient
†† Microsoft.Azure.EventHubs
†† Microsoft.Azure.Events
†† Microsoft.Azure.EventHubClient
Explanation
The primary class for interacting with Event Hubs is Microsoft.Azure.EventHubs.EventHubClient. You can
instantiate this class using the CreateFromConnectionString method.

Review Question 5
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that allows you to send
notifications to any platform.
In the list below, which two methods accomplish a device registration?
■■ Registration
†† Service authentication
■■ Installation
†† Broadcast
Explanation
Device registration with a Notification Hub is accomplished using a Registration or Installation.
Module 11 Develop message-based solutions

Implement solutions that use Azure Service Bus

Azure Service Bus overview
Microsoft Azure Service Bus is a fully managed enterprise integration message broker. Service Bus can
decouple applications and services. Service Bus offers a reliable and secure platform for asynchronous
transfer of data and state.
Data is transferred between different applications and services using messages. A message is in binary
format and can contain JSON, XML, or just text.
Some common messaging scenarios are:
●● Messaging. Transfer business data, such as sales or purchase orders, journals, or inventory movements.
●● Decouple applications. Improve reliability and scalability of applications and services. Client and service
don't have to be online at the same time.
●● Topics and subscriptions. Enable 1:n relationships between publishers and subscribers.
●● Message sessions. Implement workflows that require message ordering or message deferral.

Namespaces
A namespace is a container for all messaging components. Multiple queues and topics can be in a single
namespace, and namespaces often serve as application containers.

Queues
Messages are sent to and received from queues. Queues store messages until the receiving application is
available to receive and process them.
Messages in queues are ordered and timestamped on arrival. Once accepted, the message is held safely
in redundant storage. Messages are delivered in pull mode, only delivering messages when requested.

Topics
You can also use topics to send and receive messages. While a queue is often used for point-to-point
communication, topics are useful in publish/subscribe scenarios.
Topics can have multiple, independent subscriptions. A subscriber to a topic can receive a copy of each
message sent to that topic. Subscriptions are named entities. Subscriptions persist, but can expire or
autodelete.
You may not want individual subscriptions to receive all messages sent to a topic. If so, you can use rules
and filters to define conditions that trigger optional actions. You can filter specified messages and set or
modify message properties.

Advanced features
Service Bus includes advanced features that enable you to solve more complex messaging problems. The
following list describes several of these features.
●● Message sessions: To create a first-in, first-out (FIFO) guarantee in Service Bus, use sessions. Message sessions enable joint and ordered handling of unbounded sequences of related messages.
●● Autoforwarding: The autoforwarding feature chains a queue or subscription to another queue or topic that is in the same namespace.
●● Dead-letter queue: Service Bus supports a dead-letter queue (DLQ). A DLQ holds messages that can't be delivered to any receiver. Service Bus lets you remove messages from the DLQ and inspect them.
●● Scheduled delivery: You can submit messages to a queue or topic for delayed processing. You can schedule a job to become available for processing by a system at a certain time.
●● Message deferral: A queue or subscription client can defer retrieval of a message until a later time. The message remains in the queue or subscription, but it's set aside.
●● Batching: Client-side batching enables a queue or topic client to delay sending a message for a certain period of time.
●● Transactions: A transaction groups two or more operations together into an execution scope. Service Bus supports grouping operations against a single messaging entity within the scope of a single transaction. A message entity can be a queue, topic, or subscription.
●● Filtering and actions: Subscribers can define which messages they want to receive from a topic. These messages are specified in the form of one or more named subscription rules.
●● Autodelete on idle: Autodelete on idle enables you to specify an idle interval after which a queue is automatically deleted. The minimum duration is 5 minutes.
●● Duplicate detection: An error could cause the client to have a doubt about the outcome of a send operation. Duplicate detection enables the sender to resend the same message, or for the queue or topic to discard any duplicate copies.
●● Security protocols: Service Bus supports security protocols such as Shared Access Signatures (SAS), Role Based Access Control (RBAC), and Managed identities for Azure resources.
●● Geo-disaster recovery: When Azure regions or datacenters experience downtime, Geo-disaster recovery enables data processing to continue operating in a different region or datacenter.
●● Protocols: Service Bus supports standard AMQP 1.0 and HTTP/REST protocols.

Integration
Service Bus fully integrates with the following Azure services:
●● Event Grid
●● Logic Apps
●● Azure Functions
●● Dynamics 365
●● Azure Stream Analytics

Service Bus queues, topics, and subscriptions


The messaging entities that form the core of the messaging capabilities in Service Bus are queues, topics
and subscriptions, and rules/actions.

Queues
Queues offer First In, First Out (FIFO) message delivery to one or more competing consumers. That is,
receivers typically receive and process messages in the order in which they were added to the queue, and
only one message consumer receives and processes each message. A key benefit of using queues is to
achieve “temporal decoupling” of application components.
A related benefit is “load leveling,” which enables producers and consumers to send and receive messag-
es at different rates. In many applications, the system load varies over time; however, the processing time
required for each unit of work is typically constant. Intermediating message producers and consumers
with a queue means that the consuming application only has to be provisioned to be able to handle aver-
age load instead of peak load.

Using queues to intermediate between message producers and consumers provides an inherent loose
coupling between the components. Because producers and consumers are not aware of each other, a
consumer can be upgraded without having any effect on the producer.

Create queues
You create queues using the Azure portal, PowerShell, CLI, or Resource Manager templates. You then
send and receive messages using a QueueClient object.

Receive modes
You can specify two different modes in which Service Bus receives messages: ReceiveAndDelete or
PeekLock.
In the ReceiveAndDelete mode, the receive operation is single-shot; that is, when Service Bus receives
the request, it marks the message as being consumed and returns it to the application. ReceiveAndDelete
mode is the simplest model and works best for scenarios in which the application can tolerate not
processing a message if a failure occurs.
In PeekLock mode, the receive operation becomes two-stage, which makes it possible to support
applications that cannot tolerate missing messages. When Service Bus receives the request, it finds the
next message to be consumed, locks it to prevent other consumers from receiving it, and then returns it
to the application. After the application finishes processing the message (or stores it reliably for future
processing), it completes the second stage of the receive process by calling CompleteAsync on the
received message. When Service Bus sees the CompleteAsync call, it marks the message as being
consumed.
If the application is unable to process the message for some reason, it can call the AbandonAsync
method on the received message (instead of CompleteAsync). This method enables Service Bus to
unlock the message and make it available to be received again, either by the same consumer or by
another competing consumer. In addition, there is a timeout associated with the lock, and if the application
fails to process the message before the lock timeout expires (for example, if the application crashes), then
Service Bus unlocks the message and makes it available to be received again (essentially performing an
AbandonAsync operation by default).
In the event that the application crashes after processing the message, but before the CompleteAsync
request is issued, the message is redelivered to the application when it restarts. This process is often
called At Least Once processing; that is, each message is processed at least once. However, in certain
situations the same message may be redelivered. If the scenario cannot tolerate duplicate processing,
then additional logic is required in the application to detect duplicates, which can be achieved based
upon the MessageId property of the message, which remains constant across delivery attempts. This
feature is known as Exactly Once processing.
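
To make the two modes concrete, the following minimal C# sketch (using the Microsoft.Azure.ServiceBus package, the same one used in the demo later in this lesson) shows how a QueueClient might be constructed for each receive mode; the connection string and queue name are placeholders.

using Microsoft.Azure.ServiceBus;

class ReceiveModeExamples
{
    const string ConnectionString = "<your_connection_string>"; // placeholder
    const string QueueName = "az204-queue";

    // PeekLock (the default): two-stage receive; the handler must call
    // CompleteAsync (or AbandonAsync) for each message it processes.
    static IQueueClient CreatePeekLockClient() =>
        new QueueClient(ConnectionString, QueueName, ReceiveMode.PeekLock);

    // ReceiveAndDelete: single-shot receive; simplest model, but a message is
    // lost if the application fails before processing it.
    static IQueueClient CreateReceiveAndDeleteClient() =>
        new QueueClient(ConnectionString, QueueName, ReceiveMode.ReceiveAndDelete);
}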

Topics and subscriptions


In contrast to queues, in which each message is processed by a single consumer, topics and subscriptions
provide a one-to-many form of communication, in a publish/subscribe pattern. Useful for scaling to large
numbers of recipients, each published message is made available to each subscription registered with the
topic. Messages are sent to a topic and delivered to one or more associated subscriptions, depending on
filter rules that can be set on a per-subscription basis. The subscriptions can use additional filters to
restrict the messages that they want to receive. Messages are sent to a topic in the same way they are
sent to a queue, but messages are not received from the topic directly. Instead, they are received from
subscriptions. A topic subscription resembles a virtual queue that receives copies of the messages that

are sent to the topic. Messages are received from a subscription identically to the way they are received
from a queue.

Create topics and subscriptions


Creating a topic is similar to creating a queue, as described in the previous section. You then send
messages using the TopicClient class. To receive messages, you create one or more subscriptions to
the topic. Similar to queues, messages are received from a subscription using a SubscriptionClient
object instead of a QueueClient object. Create the subscription client, passing the name of the topic,
the name of the subscription, and (optionally) the receive mode as parameters.
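
As an illustrative sketch under those assumptions, the following C# snippet (Microsoft.Azure.ServiceBus package) sends to a topic with a TopicClient and creates a SubscriptionClient for receiving; the topic and subscription names are examples, not values created elsewhere in this course.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class TopicExample
{
    const string ConnectionString = "<your_connection_string>"; // placeholder
    const string TopicName = "az204-topic";                     // placeholder
    const string SubscriptionName = "S1";                       // placeholder

    public static async Task SendToTopicAsync()
    {
        // Messages are sent to the topic itself...
        var topicClient = new TopicClient(ConnectionString, TopicName);
        await topicClient.SendAsync(new Message(Encoding.UTF8.GetBytes("Hello, topic!")));
        await topicClient.CloseAsync();
    }

    public static ISubscriptionClient CreateSubscriptionClient()
    {
        // ...but they are received from a subscription, not from the topic.
        return new SubscriptionClient(
            ConnectionString, TopicName, SubscriptionName, ReceiveMode.PeekLock);
    }
}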

Rules and actions


In many scenarios, messages that have specific characteristics must be processed in different ways. To
enable this processing, you can configure subscriptions to find messages that have desired properties
and then perform certain modifications to those properties. While Service Bus subscriptions see all
messages sent to the topic, you can only copy a subset of those messages to the virtual subscription
queue. This filtering is accomplished using subscription filters. Such modifications are called filter actions.
When a subscription is created, you can supply a filter expression that operates on the properties of the
message, both the system properties (for example, Label) and custom application properties (for exam-
ple, StoreName.) The SQL filter expression is optional in this case; without a SQL filter expression, any
filter action defined on a subscription will be performed on all the messages for that subscription.
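
The following hedged C# sketch (Microsoft.Azure.ServiceBus package) shows one way a subscription could be configured with a SQL filter and a filter action. The rule name, the Priority and Routed properties, and the decision to remove the default rule are illustrative assumptions, not requirements.

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class SubscriptionRules
{
    // Copies only high-priority messages into the subscription's virtual queue
    // and stamps them with a custom property via a filter action.
    public static async Task AddHighPriorityRuleAsync(ISubscriptionClient subscriptionClient)
    {
        // Remove the default catch-all rule so only filtered messages arrive.
        await subscriptionClient.RemoveRuleAsync(RuleDescription.DefaultRuleName);

        await subscriptionClient.AddRuleAsync(new RuleDescription
        {
            Name = "HighPriorityOrders",
            Filter = new SqlFilter("user.Priority = 'High'"),
            Action = new SqlRuleAction("SET user.Routed = TRUE")
        });
    }
}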

Messages, payloads, and serialization in Azure Service Bus

Messages carry a payload as well as metadata, in the form of key-value pair properties, describing the
payload and giving handling instructions to Service Bus and applications. Occasionally, that metadata
alone is sufficient to carry the information that the sender wants to communicate to receivers, and the
payload remains empty.
The object model of the official Service Bus clients for .NET and Java reflects the abstract Service Bus
message structure, which is mapped to and from the wire protocols Service Bus supports.
A Service Bus message consists of a binary payload section that Service Bus never handles in any form on
the service-side, and two sets of properties. The broker properties are predefined by the system. These
predefined properties either control message-level functionality inside the broker, or they map to
common and standardized metadata items. The user properties are a collection of key-value pairs that
can be defined and set by the application.
The abstract message model enables a message to be posted to a queue via HTTP (actually always
HTTPS) and can be retrieved via AMQP. In either case, the message looks normal in the context of the
respective protocol. The broker properties are translated as needed, and the user properties are mapped
to the most appropriate location on the respective protocol message model. In HTTP, user properties
map directly to and from HTTP headers; in AMQP they map to and from the application-properties map.

Message routing and correlation


A subset of the broker properties described previously, specifically To, ReplyTo, ReplyToSessionId,
MessageId, CorrelationId, and SessionId, are used to help applications route messages to
particular destinations. To illustrate this, consider a few patterns:
●● Simple request/reply: A publisher sends a message into a queue and expects a reply from the
message consumer. To receive the reply, the publisher owns a queue into which it expects replies to
be delivered. The address of that queue is expressed in the ReplyTo property of the outbound
message. When the consumer responds, it copies the MessageId of the handled message into the
CorrelationId property of the reply message and delivers the message to the destination indicat-
ed by the ReplyTo property. One message can yield multiple replies, depending on the application
context.
●● Multicast request/reply: As a variation of the prior pattern, a publisher sends the message into a
topic and multiple subscribers become eligible to consume the message. Each of the subscribers
might respond in the fashion described previously. This pattern is used in discovery or roll-call
scenarios and the respondent typically identifies itself with a user property or inside the payload. If
ReplyTo points to a topic, such a set of discovery responses can be distributed to an audience.
●● Multiplexing: This session feature enables multiplexing of streams of related messages through a
single queue or subscription such that each session (or group) of related messages, identified by
matching SessionId values, are routed to a specific receiver while the receiver holds the session
under lock. Read more about the details of sessions here.
●● Multiplexed request/reply: This session feature enables multiplexed replies, allowing several pub-
lishers to share a reply queue. By setting ReplyToSessionId, the publisher can instruct the con-
sumer(s) to copy that value into the SessionId property of the reply message. The publishing queue
or topic does not need to be session-aware. As the message is sent, the publisher can then specifically
wait for a session with the given SessionId to materialize on the queue by conditionally accepting a
session receiver.
Routing inside of a Service Bus namespace can be realized using auto-forward chaining and topic
subscription rules. Routing across namespaces can be realized using Azure LogicApps. As indicated in the
previous list, the To property is reserved for future use and may eventually be interpreted by the broker
with a specially enabled feature. Applications that wish to implement routing should do so based on user
properties and not lean on the To property; however, doing so now will not cause compatibility issues.
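
To illustrate the simple request/reply pattern described above, the following C# sketch (Microsoft.Azure.ServiceBus package) shows the ReplyTo, MessageId, and CorrelationId properties in use; the queue names, ids, and message bodies are placeholders.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class RequestReplyExample
{
    // Publisher side: send a request and indicate where the reply should go.
    public static Task SendRequestAsync(IQueueClient requestQueueClient)
    {
        var request = new Message(Encoding.UTF8.GetBytes("What is the status of order 123?"))
        {
            MessageId = "order-123-status-request",   // illustrative id
            ReplyTo = "status-replies"                // queue that will receive the reply
        };
        return requestQueueClient.SendAsync(request);
    }

    // Consumer side: copy the request's MessageId into CorrelationId so the
    // publisher can match the reply to its original request. The reply client
    // would be created for the queue named in request.ReplyTo.
    public static Task SendReplyAsync(IQueueClient replyQueueClient, Message request)
    {
        var reply = new Message(Encoding.UTF8.GetBytes("Shipped"))
        {
            CorrelationId = request.MessageId
        };
        return replyQueueClient.SendAsync(reply);
    }
}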

Payload serialization
When in transit or stored inside of Service Bus, the payload is always an opaque, binary block. The
ContentType property enables applications to describe the payload, with the suggested format for the
property values being a MIME content-type description according to IETF RFC2045; for example, appli-
cation/json;charset=utf-8.
Unlike the Java or .NET Standard variants, the .NET Framework version of the Service Bus API supports
creating BrokeredMessage instances by passing arbitrary .NET objects into the constructor.
When using the legacy SBMP protocol, those objects are then serialized with the default binary serializer,
or with a serializer that is externally supplied. When using the AMQP protocol, the object is serialized into
an AMQP object. The receiver can retrieve those objects with the GetBody() method, supplying the
expected type. With AMQP, the objects are serialized into an AMQP graph of ArrayList and IDic-
tionary<string,object> objects, and any AMQP client can decode them.
While this hidden serialization magic is convenient, applications should take explicit control of object
serialization and turn their object graphs into streams before including them into a message, and do the

reverse on the receiver side. This yields interoperable results. It should also be noted that while AMQP
has a powerful binary encoding model, it is tied to the AMQP messaging ecosystem and HTTP clients will
have trouble decoding such payloads.
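
As a sketch of taking explicit control of serialization, the following C# snippet serializes an object graph to JSON (System.Text.Json is assumed here purely for illustration) and sets the ContentType property so that any receiver, on any protocol, knows how to decode the opaque binary payload; the OrderPlaced type is hypothetical.

using System.Text;
using System.Text.Json;
using Microsoft.Azure.ServiceBus;

class PayloadSerialization
{
    public class OrderPlaced
    {
        public string OrderId { get; set; }
        public decimal Amount { get; set; }
    }

    // Serialize explicitly on the sender side...
    public static Message CreateMessage(OrderPlaced order)
    {
        string json = JsonSerializer.Serialize(order);
        return new Message(Encoding.UTF8.GetBytes(json))
        {
            // Describe the payload so receivers on HTTP or AMQP can decode it.
            ContentType = "application/json;charset=utf-8"
        };
    }

    // ...and deserialize explicitly on the receiver side, instead of relying on
    // protocol-specific hidden object serialization.
    public static OrderPlaced ReadMessage(Message message) =>
        JsonSerializer.Deserialize<OrderPlaced>(Encoding.UTF8.GetString(message.Body));
}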

Demo: Use .NET Core to send and receive messages from a Service Bus queue

In this demo you will learn how to:
●● Create a Service Bus namespace, and queue, using the Azure CLI.
●● Create a .NET Core console application to send a set of messages to the queue.
●● Create a .NET Core console application to receive those messages from the queue.

Prerequisites
This demo is performed in the Cloud Shell, and in Visual Studio Code. The code examples below rely on
the Microsoft.Azure.ServiceBus NuGet package.

Log in to Azure
1. Log in to the Azure Portal: https://portal.azure.com and launch the Cloud Shell. Be sure to select
Bash as the shell.
2. Create a resource group, replace <myRegion> with a location that makes sense for you. Copy the first
line by itself and edit the value.
myLocation=<myRegion>
myResourceGroup="az204-svcbusdemo-rg"
az group create -n $myResourceGroup -l $myLocation

Create the Service Bus namespace and queue


1. Create a Service Bus messaging namespace with a unique name; the script below will generate a unique name for you. It will take a few minutes for the command to finish.
namespaceName=az204svcbus$RANDOM
az servicebus namespace create \
--resource-group $myResourceGroup \
--name $namespaceName \
--location $myLocation

2. Create a Service Bus queue


az servicebus queue create --resource-group $myResourceGroup \
--namespace-name $namespaceName \
--name az204-queue

3. Get the connection string for the namespace


connectionString=$(az servicebus namespace authorization-rule keys list \
--resource-group $myResourceGroup \
--namespace-name $namespaceName \
--name RootManageSharedAccessKey \
--query primaryConnectionString --output tsv)
echo $connectionString

After the last command runs, copy and paste the connection string to a temporary location such as
Notepad. You will need it in the next step.

Create console app to send messages to the queue


1. Set up the new console app on your local machine
●● Create a new folder named az204svcbusSend.
●● Open a terminal in the new folder and run dotnet new console
●● Run the dotnet add package Microsoft.Azure.ServiceBus command to ensure you
have the packages you need.
●● Launch Visual Studio Code and open the new folder.
2. In Program.cs, add the following using statements at the top of the namespace definition, before the
class declaration:
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

3. Within the Program class, declare the following variables. Set the ServiceBusConnectionString
variable to the connection string that you obtained when creating the namespace:
const string ServiceBusConnectionString = "<your_connection_string>";
const string QueueName = "az204-queue";
static IQueueClient queueClient;

4. Replace the default Main() method with the following code:
public static async Task Main(string[] args)
{
    const int numberOfMessages = 10;
    queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

    Console.WriteLine("======================================================");
    Console.WriteLine("Press ENTER key to exit after sending all the messages.");
    Console.WriteLine("======================================================");

    // Send messages.
    await SendMessagesAsync(numberOfMessages);

    Console.ReadKey();

    await queueClient.CloseAsync();
}

5. (Optional) If your project targets a C# version earlier than 7.1, which does not support an async Main() method, add the following asynchronous MainAsync() method directly after Main() instead, and call it from a synchronous Main() with MainAsync().GetAwaiter().GetResult(); it performs the same work of sending the messages:
static async Task MainAsync()
{
    const int numberOfMessages = 10;
    queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

    Console.WriteLine("======================================================");
    Console.WriteLine("Press ENTER key to exit after sending all the messages.");
    Console.WriteLine("======================================================");

    // Send messages.
    await SendMessagesAsync(numberOfMessages);

    Console.ReadKey();

    await queueClient.CloseAsync();
}

6. Directly after the MainAsync() method, add the following SendMessagesAsync() method that
performs the work of sending the number of messages specified by numberOfMessagesToSend
(currently set to 10):
static async Task SendMessagesAsync(int numberOfMessagesToSend)
{
    try
    {
        for (var i = 0; i < numberOfMessagesToSend; i++)
        {
            // Create a new message to send to the queue.
            string messageBody = $"Message {i}";
            var message = new Message(Encoding.UTF8.GetBytes(messageBody));

            // Write the body of the message to the console.
            Console.WriteLine($"Sending message: {messageBody}");

            // Send the message to the queue.
            await queueClient.SendAsync(message);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
    }
}

7. Save the file and run the following commands in the terminal.
dotnet build
dotnet run

8. Log in to the Azure Portal, navigate to the az204-queue you created earlier, and select Overview to
show the Essentials screen.
Notice that the Active Message Count value for the queue is now 10. Each time you run the sender
application without retrieving the messages (as described in the next section), this value increases by 10.

Create console app to receive messages from the queue


1. Set up the new console app
●● Create a new folder named az204svcbusRec.
●● Open a terminal in the new folder and run dotnet new console
●● Run the dotnet add package Microsoft.Azure.ServiceBus command to ensure you
have the packages you need.
●● Launch Visual Studio Code and open the new folder.
2. In Program.cs, add the following using statements at the top of the namespace definition, before the
class declaration:
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

3. Within the Program class, declare the following variables. Set the ServiceBusConnectionString
variable to the connection string that you obtained when creating the namespace:
const string ServiceBusConnectionString = "<your_connection_string>";
const string QueueName = "az204-queue";
static IQueueClient queueClient;

4. Replace the Main() method with the following:


public static async Task Main(string[] args)
{
    queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

    Console.WriteLine("======================================================");
    Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
    Console.WriteLine("======================================================");

    // Register the queue message handler and receive messages in a loop
    RegisterOnMessageHandlerAndReceiveMessages();

    Console.ReadKey();

    await queueClient.CloseAsync();
}

5. (Optional) If your project targets a C# version earlier than 7.1, which does not support an async Main() method, add the following asynchronous MainAsync() method directly after Main() instead, and call it from a synchronous Main() with MainAsync().GetAwaiter().GetResult(); it calls the RegisterOnMessageHandlerAndReceiveMessages() method in the same way:
static async Task MainAsync()
{
    queueClient = new QueueClient(ServiceBusConnectionString, QueueName);

    Console.WriteLine("======================================================");
    Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
    Console.WriteLine("======================================================");

    // Register the queue message handler and receive messages in a loop
    RegisterOnMessageHandlerAndReceiveMessages();

    Console.ReadKey();

    await queueClient.CloseAsync();
}

6. Directly after the MainAsync() method, add the following method that registers the message
handler and receives the messages sent by the sender application:
static void RegisterOnMessageHandlerAndReceiveMessages()
{
    // Configure the message handler options in terms of exception handling,
    // number of concurrent messages to deliver, etc.
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback ProcessMessagesAsync(),
        // set to 1 for simplicity. Set it according to how many messages the
        // application wants to process in parallel.
        MaxConcurrentCalls = 1,

        // Indicates whether the message pump should automatically complete the
        // messages after returning from the user callback. False below indicates
        // the complete operation is handled by the user callback, as in
        // ProcessMessagesAsync().
        AutoComplete = false
    };

    // Register the function that processes messages.
    queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}

7. Directly after the previous method, add the following ProcessMessagesAsync() method to
process the received messages:
static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    // Process the message.
    Console.WriteLine($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");

    // Complete the message so that it is not received again.
    // This can be done only if the queue client is created in ReceiveMode.PeekLock mode (which is the default).
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);

    // Note: Use the cancellationToken passed as necessary to determine if the queueClient has already been closed.
    // If queueClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
    // to avoid unnecessary exceptions.
}

8. Finally, add the following method to handle any exceptions that might occur:
// Use this handler to examine the exceptions received on the message pump.
static Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
{
Console.WriteLine($"Message handler encountered an exception {exceptionReceivedEventArgs.
Exception}.");
var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
Console.WriteLine("Exception context for troubleshooting:");
Console.WriteLine($"- Endpoint: {context.Endpoint}");
Console.WriteLine($"- Entity Path: {context.EntityPath}");
Console.WriteLine($"- Executing Action: {context.Action}");
return Task.CompletedTask;
}

9. Save the file and run the following commands in the terminal.
●● dotnet build
●● dotnet run
10. Check the portal again. Notice that the Active Message Count value is now 0. You may need to
refresh the portal page.

Implement solutions that use Azure Queue Storage queues

Azure Queue storage overview
Azure Queue storage is a service for storing large numbers of messages that can be accessed from
anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to
64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage
account.
Common uses of Queue storage include:
●● Creating a backlog of work to process asynchronously
●● Passing messages from an Azure web role to an Azure worker role

Queue service components


The Queue service contains the following components:

●● URL format: Queues are addressable using the following URL format:
https://<storage account>.queue.core.windows.net/<queue>
For example, the following URL addresses a queue named images-to-download in the storage account myaccount:
https://myaccount.queue.core.windows.net/images-to-download
●● Storage account: All access to Azure Storage is done through a storage account.
●● Queue: A queue contains a set of messages. All messages must be in a queue. Note that the queue
name must be all lowercase.
●● Message: A message, in any format, of up to 64 KB. The maximum time that a message can remain in
the queue is seven days.

Creating and managing messages in Azure Queue storage by using .NET

In this topic we'll be covering how to create queues and manage messages in Azure Queue storage by
showing code snippets from a .NET project.
The code examples rely on the following NuGet packages:
●● Microsoft Azure Storage Common Client Library for .NET: This package provides programmatic access
to data resources in your storage account.

●● Microsoft Azure Storage Queue Library for .NET: This client library enables working with the Microsoft
Azure Storage Queue service for storing messages that may be accessed by a client.
●● Microsoft Azure Configuration Manager library for .NET: This package provides a class for parsing a
connection string in a configuration file, regardless of where your application is running.

Create the Queue service client


The CloudQueueClient class enables you to retrieve queues stored in Queue storage. Here's one way
to create the service client:
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

Create a queue
This example shows how to create a queue if it does not already exist:
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Create the queue if it doesn't already exist


queue.CreateIfNotExists();

Insert a message into a queue


To insert a message into an existing queue, first create a new CloudQueueMessage. Next, call the
AddMessage method. A CloudQueueMessage can be created from either a string (in UTF-8 format) or
a byte array. Here is code which creates a queue (if it doesn't exist) and inserts the message ‘Hello,
World’:
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Create the queue if it doesn't already exist.


queue.CreateIfNotExists();

// Create a message and add it to the queue.



CloudQueueMessage message = new CloudQueueMessage("Hello, World");


queue.AddMessage(message);

Peek at the next message


You can peek at the message in the front of a queue without removing it from the queue by calling the
PeekMessage method.
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Peek at the next message


CloudQueueMessage peekedMessage = queue.PeekMessage();

// Display message.
Console.WriteLine(peekedMessage.AsString);

Change the contents of a queued message


You can change the contents of a message in-place in the queue. If the message represents a work task,
you could use this feature to update the status of the work task. The following code updates the queue
message with new contents, and sets the visibility timeout to extend another 60 seconds. This saves the
state of work associated with the message, and gives the client another minute to continue working on
the message.
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Get the message from the queue and update the message contents.
CloudQueueMessage message = queue.GetMessage();
message.SetMessageContent2("Updated contents.", false);
queue.UpdateMessage(message,
TimeSpan.FromSeconds(60.0), // Make it invisible for another 60 seconds.
MessageUpdateFields.Content | MessageUpdateFields.Visibility);

De-queue the next message


Your code de-queues a message from a queue in two steps. When you call GetMessage, you get the
next message in a queue. A message returned from GetMessage becomes invisible to any other code
reading messages from this queue. By default, this message stays invisible for 30 seconds. To finish
removing the message from the queue, you must also call DeleteMessage.
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Get the next message


CloudQueueMessage retrievedMessage = queue.GetMessage();

//Process the message in less than 30 seconds, and then delete the message
queue.DeleteMessage(retrievedMessage);

Get the queue length


You can get an estimate of the number of messages in a queue. The FetchAttributes method asks
the Queue service to retrieve the queue attributes, including the message count. The ApproximateMes-
sageCount property returns the last value retrieved by the FetchAttributes method, without calling
the Queue service.
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Fetch the queue attributes.


queue.FetchAttributes();

// Retrieve the cached approximate message count.


int? cachedMessageCount = queue.ApproximateMessageCount;

// Display number of messages.


Console.WriteLine("Number of messages in queue: " + cachedMessageCount);

Delete a queue
To delete a queue and all the messages contained in it, call the Delete method on the queue object.
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the queue client.


CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

// Retrieve a reference to a queue.


CloudQueue queue = queueClient.GetQueueReference("myqueue");

// Delete the queue.


queue.Delete();

Lab and review questions


Lab: Asynchronously processing messages by
using Azure Storage Queues

Lab scenario
You're studying various ways to communicate between isolated service components in Microsoft Azure,
and you have decided to evaluate the Azure Storage service and its Queue service offering. As part of this
evaluation, you'll build a prototype application in .NET that can send and receive messages so that you
can measure the complexity involved in using this service. To help you with your evaluation, you've also
decided to use Azure Storage Explorer as the queue message producer/consumer throughout your tests.

Objectives
After you complete this lab, you will be able to:
●● Add Azure.Storage libraries from NuGet.
●● Create a queue in .NET.
●● Produce a new message in the queue by using .NET.
●● Consume a message from the queue by using .NET.
●● Manage a queue by using Storage Explorer.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 11 review questions


Review Question 1
Which of the following advanced features of Azure Service Bus is used to guarantee a first-in, first-out (FIFO)
guarantee?
†† Transactions
†† Scheduled delivery
†† Message sessions
†† Batching

Review Question 2
In Azure Service Bus which of the following receive modes is best for scenarios in which the application can
tolerate not processing a message if a failure occurs?
†† PeekLock
†† ReceiveAndDelete
†† All of the above
†† None of the above

Review Question 3
In Azure Queue storage, what is the maximum time that a message can remain in queue?
†† 5 days
†† 6 days
†† 7 days
†† 10 days

Review Question 4
In Azure Queue storage, which two methods below are used to de-queue a message?
†† GetMessage
†† UpdateMessage
†† DeleteMessage
†† PeekMessage

Answers
Review Question 1
Which of the following advanced features of Azure Service Bus is used to guarantee a first-in, first-out
(FIFO) guarantee?
†† Transactions
†† Scheduled delivery
■■ Message sessions
†† Batching
Explanation
To create a first-in, first-out (FIFO) guarantee in Service Bus, use sessions. Message sessions enable joint and
ordered handling of unbounded sequences of related messages.
Review Question 2
In Azure Service Bus which of the following receive modes is best for scenarios in which the application
can tolerate not processing a message if a failure occurs?
†† PeekLock
■■ ReceiveAndDelete
†† All of the above
†† None of the above
Explanation
ReceiveAndDelete mode is the simplest model and works best for scenarios in which the application can
tolerate not processing a message if a failure occurs.
Review Question 3
In Azure Queue storage, what is the maximum time that a message can remain in the queue?
†† 5 days
†† 6 days
■■ 7 days
†† 10 days
Explanation
The maximum time that a message can remain in the queue is seven days.

Review Question 4
In Azure Queue storage, which two methods below are used to de-queue a message?
■■ GetMessage
†† UpdateMessage
■■ DeleteMessage
†† PeekMessage
Explanation
Your code de-queues a message from a queue in two steps. When you call GetMessage, you get the next
message in a queue. A message returned from GetMessage becomes invisible to any other code reading
messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the
message from the queue, you must also call DeleteMessage.
Module 12 Monitor and optimize Azure solutions

Overview of monitoring in Azure


Azure Monitor overview
Azure Monitor maximizes the availability and performance of your applications and services by delivering
a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and
on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.

Monitoring data platform


All data collected by Azure Monitor fits into one of two fundamental types, metrics and logs. Metrics are
numerical values that describe some aspect of a system at a particular point in time. They are lightweight
and capable of supporting near real-time scenarios. Logs contain different kinds of data organized into
records with different sets of properties for each type. Telemetry such as events and traces are stored as
logs in addition to performance data so that it can all be combined for analysis.
Log data collected by Azure Monitor can be analyzed with queries to quickly retrieve, consolidate, and
analyze collected data. You can create and test queries using Log Analytics in the Azure portal and then
either directly analyze the data using these tools or save queries for use with visualizations or alert rules.
Azure Monitor uses a version of the Kusto query language used by Azure Data Explorer that is suitable for
simple log queries but also includes advanced functionality such as aggregations, joins, and smart
analytics.

What data does Azure Monitor collect?


Azure Monitor can collect data from a variety of sources. You can think of monitoring data for your
applications in tiers ranging from your application, any operating system and services it relies on, down
to the platform itself. Azure Monitor collects data from each of the following tiers:
●● Application monitoring data: Data about the performance and functionality of the code you have
written, regardless of its platform.
●● Guest OS monitoring data: Data about the operating system on which your application is running.
This could be running in Azure, another cloud, or on-premises.
●● Azure resource monitoring data: Data about the operation of an Azure resource.
●● Azure subscription monitoring data: Data about the operation and management of an Azure
subscription, as well as data about the health and operation of Azure itself.
●● Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as
Azure Active Directory.
As soon as you create an Azure subscription and start adding resources such as virtual machines and web
apps, Azure Monitor starts collecting data. Activity logs record when resources are created or modified.
Metrics tell you how the resource is performing and the resources that it's consuming.
Extend the data you're collecting into the actual operation of the resources by enabling diagnostics and adding an agent to compute resources. This will collect telemetry for the internal operation of the resource and allow you to configure different data sources to collect logs and metrics from Windows and Linux guest operating systems.
Enable monitoring for your App Service application, or for your virtual machine and virtual machine scale set applications, so that Application Insights can collect detailed information about your application, including page views, application requests, and exceptions. Further verify the availability of your application by configuring an availability test to simulate user traffic.

Overview of alerts in Azure Monitor


Alerts proactively notify you when important conditions are found in your monitoring data. They allow
you to identify and address issues before the users of your system notice them. Alert rules are separated
from alerts and the actions taken when an alert fires. The alert rule captures the target and criteria for
alerting. The alert rule can be in an enabled or a disabled state. Alerts only fire when enabled.
The following are key attributes of an alert rule:
●● Target Resource: Defines the scope and signals available for alerting. A target can be any Azure
resource. Example targets: a virtual machine, a storage account, a virtual machine scale set, a Log
Analytics workspace, or an Application Insights resource. For certain resources (like virtual machines),
you can specify multiple resources as the target of the alert rule.
●● Signal: Emitted by the target resource. Signals can be of the following types: metric, activity log,
Application Insights, and log.
●● Criteria: A combination of signal and logic applied on a target resource.
●● Alert Name: A specific name for the alert rule configured by the user.
●● Alert Description: A description for the alert rule configured by the user.
●● Severity: The severity of the alert after the criteria specified in the alert rule is met. Severity can range
from 0 to 4.

●● Action: A specific action taken when the alert is fired.

What you can alert on


You can alert on metrics and logs, as described in monitoring data sources1. These include but are not
limited to:
●● Metric values
●● Log search queries
●● Activity log events
●● Health of the underlying Azure platform
●● Tests for website availability
Previously, Azure Monitor metrics, Application Insights, Log Analytics, and Service Health had separate
alerting capabilities. Over time, Azure improved and combined both the user interface and different
methods of alerting. This consolidation is still in process. As a result, there are still some alerting capabili-
ties not yet in the new alerts system.

Application Insights overview


Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management
(APM) service for web developers on multiple platforms. Use it to monitor your live web application. It
will automatically detect performance anomalies. It includes powerful analytics tools to help you diag-
nose issues and to understand what users actually do with your app. It's designed to help you continu-
ously improve performance and usability. It works for apps on a wide variety of platforms including .NET,
Node.js and Java EE, hosted on-premises, hybrid, or any public cloud.

How Application Insights works


You install a small instrumentation package in your application, and set up an Application Insights
resource in the Microsoft Azure portal. The instrumentation monitors your app and sends telemetry data
to Azure Monitor. (The application can run anywhere - it doesn't have to be hosted in Azure.)
You can instrument not only the web service application, but also any background components, and the
JavaScript in the web pages themselves.

1 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-sources

In addition, you can pull in telemetry from the host environments such as performance counters, Azure
diagnostics, or Docker logs. You can also set up web tests that periodically send synthetic requests to
your web service.
All these telemetry streams are integrated into Azure Monitor. In the Azure portal, you can apply power-
ful analytic and search tools to the raw data.
The impact on your app's performance is very small. Tracking calls are non-blocking, and are batched and
sent in a separate thread.

What does Application Insights monitor?


Application Insights is aimed at the development team, to help you understand how your app is perform-
ing and how it's being used. It monitors:
●● Request rates, response times, and failure rates - Find out which pages are most popular, at what
times of day, and where your users are. See which pages perform best. If your response times and
failure rates go high when there are more requests, then perhaps you have a resourcing problem.
●● Dependency rates, response times, and failure rates - Find out whether external services are
slowing you down.
●● Exceptions - Analyze the aggregated statistics, or pick specific instances and drill into the stack trace
and related requests. Both server and browser exceptions are reported.
●● Page views and load performance - reported by your users' browsers.
●● AJAX calls from web pages - rates, response times, and failure rates.
●● User and session counts.
●● Performance counters from your Windows or Linux server machines, such as CPU, memory, and
network usage.
●● Host diagnostics from Docker or Azure.
●● Diagnostic trace logs from your app - so that you can correlate trace events with requests.
●● Custom events and metrics that you write yourself in the client or server code, to track business events such as items sold or games won (see the sketch after this list).
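The following is a minimal sketch, not part of the course text, of how such custom events and metrics might be sent with the Application Insights TelemetryClient; the class, event, and metric names are illustrative assumptions.
using Microsoft.ApplicationInsights;

public class GameTelemetry
{
    private readonly TelemetryClient telemetryClient;

    public GameTelemetry(TelemetryClient telemetryClient)
    {
        this.telemetryClient = telemetryClient;
    }

    public void RecordGameWon(double score)
    {
        // Custom event: counts how often this business action occurs.
        telemetryClient.TrackEvent("GameWon");

        // Custom metric: records a numeric measurement that can be aggregated.
        telemetryClient.TrackMetric("GameScore", score);
    }
}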

Instrument an app for monitoring


Telemetry channels in Application Insights
Telemetry channels are an integral part of the Azure Application Insights SDKs. They manage buffering
and transmission of telemetry to the Application Insights service. The .NET and .NET Core versions of the
SDKs have two built-in telemetry channels: InMemoryChannel and ServerTelemetryChannel.
Telemetry channels are responsible for buffering telemetry items and sending them to the Application
Insights service, where they're stored for querying and analysis. A telemetry channel is any class that
implements the Microsoft.ApplicationInsights.ITelemetryChannel interface.
The Send(ITelemetry item) method of a telemetry channel is called after all telemetry initializers
and telemetry processors are called. So, any items dropped by a telemetry processor won't reach the
channel. Send() doesn't typically send the items to the back end instantly. Typically, it buffers them in
memory and sends them in batches, for efficient transmission.
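As a rough sketch only (an illustration, not from the SDK documentation), a minimal class that satisfies the ITelemetryChannel interface might look like the following; it writes item type names to the console instead of transmitting them, which is useful only for experimentation.
using System;
using Microsoft.ApplicationInsights.Channel;

public class ConsoleTelemetryChannel : ITelemetryChannel
{
    public bool? DeveloperMode { get; set; }
    public string EndpointAddress { get; set; }

    // Called after telemetry initializers and processors have run.
    public void Send(ITelemetry item)
    {
        Console.WriteLine($"Telemetry item: {item.GetType().Name}");
    }

    public void Flush()
    {
        // Nothing is buffered in this sketch, so there is nothing to flush.
    }

    public void Dispose()
    {
    }
}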

Built-in telemetry channels


The Application Insights .NET and .NET Core SDKs ship with two built-in channels:
●● InMemoryChannel: A lightweight channel that buffers items in memory until they're sent. Items are
buffered in memory and flushed once every 30 seconds, or whenever 500 items are buffered. This
channel offers minimal reliability guarantees because it doesn't retry sending telemetry after a failure.
This channel also doesn't keep items on disk, so any unsent items are lost permanently upon applica-
tion shutdown (graceful or not). This channel implements a Flush() method that can be used to
force-flush any in-memory telemetry items synchronously. This channel is well suited for short-run-
ning applications where a synchronous flush is ideal.
This channel is part of the larger Microsoft.ApplicationInsights NuGet package and is the default
channel that the SDK uses when nothing else is configured.
●● ServerTelemetryChannel: A more advanced channel that has retry policies and the capability to
store data on a local disk. This channel retries sending telemetry if transient errors occur. This channel
also uses local disk storage to keep items on disk during network outages or high telemetry volumes.
Because of these retry mechanisms and local disk storage, this channel is considered more reliable
and is recommended for all production scenarios. This channel is the default for ASP.NET and ASP.NET
Core applications that are configured according to the official documentation. This channel is opti-
mized for server scenarios with long-running processes. The Flush() method that's implemented by
this channel isn't synchronous.
This channel is shipped as the Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel NuGet package and is acquired automatically when you use either the Microsoft.ApplicationInsights.Web or Microsoft.ApplicationInsights.AspNetCore NuGet package.

Configure a telemetry channel


You configure a telemetry channel by setting it to the active telemetry configuration. For ASP.NET applications, configuration involves setting the telemetry channel instance to TelemetryConfiguration.Active, or by modifying ApplicationInsights.config. For ASP.NET Core applications, configuration involves adding the channel to the Dependency Injection Container.
The following sections show examples of configuring the StorageFolder setting for the channel in
various application types. StorageFolder is just one of the configurable settings.
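For example, a minimal ASP.NET Core sketch (an assumption based on the Microsoft.ApplicationInsights.AspNetCore and Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel packages; the folder path is illustrative) registers a ServerTelemetryChannel with a custom StorageFolder before enabling telemetry:
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register the channel with a custom local storage folder before
        // Application Insights telemetry is added to the container.
        services.AddSingleton(typeof(ITelemetryChannel),
            new ServerTelemetryChannel { StorageFolder = @"D:\TelemetryBuffer" });

        services.AddApplicationInsightsTelemetry();
    }
}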

Operational details of ServerTelemetryChannel


ServerTelemetryChannel stores arriving items in an in-memory buffer. The items are serialized,
compressed, and stored into a Transmission instance once every 30 seconds, or when 500 items have
been buffered. A single Transmission instance contains up to 500 items and represents a batch of
telemetry that's sent over a single HTTPS call to the Application Insights service.
By default, a maximum of 10 Transmission instances can be sent in parallel. If telemetry is arriving at
faster rates, or if the network or the Application Insights back end is slow, Transmission instances are
stored in memory. The default capacity of this in-memory Transmission buffer is 5 MB. When the
in-memory capacity has been exceeded, Transmission instances are stored on local disk up to a limit
of 50 MB. Transmission instances are stored on local disk also when there are network problems. Only
those items that are stored on a local disk survive an application crash. They're sent whenever the
application starts again.

Which channel should I use?


ServerTelemetryChannel is recommended for most production scenarios involving long-running
applications. The Flush() method implemented by ServerTelemetryChannel isn't synchronous,
and it also doesn't guarantee sending all pending items from memory or disk. If you use this channel in
scenarios where the application is about to shut down, we recommend that you introduce some delay
after calling Flush(). The exact amount of delay that you might require isn't predictable. It depends on
factors like how many items or Transmission instances are in memory, how many are on disk, how
many are being transmitted to the back end, and whether the channel is in the middle of exponential
back-off scenarios.
If you need to do a synchronous flush, we recommend that you use InMemoryChannel.

Instrumenting for distributed tracing


The advent of modern cloud and microservices architectures has given rise to simple, independently
deployable services that can help reduce costs while increasing availability and throughput. But while
these movements have made individual services easier to understand as a whole, they’ve made overall
systems more difficult to reason about and debug.
In monolithic architectures, we’ve gotten used to debugging with call stacks. Call stacks are brilliant tools
for showing the flow of execution (Method A called Method B, which called Method C), along with details
and parameters about each of those calls.
Distributed tracing is the equivalent of call stacks for modern cloud and microservices architectures, with
the addition of a simplistic performance profiler thrown in. In Azure Monitor, we provide two experiences
for consuming distributed trace data. The first is our transaction diagnostics view, which is like a call stack
with a time dimension added in. The transaction diagnostics view provides visibility into one single
transaction/request, and is helpful for finding the root cause of reliability issues and performance bottle-
necks on a per request basis.
Azure Monitor also offers an application map view which aggregates many transactions to show a
topological view of how the systems interact, and what the average performance and error rates are.

How to Enable Distributed Tracing


Enabling distributed tracing across the services in an application is as simple as adding the proper SDK or
library to each service, based on the language the service was implemented in.

Enabling via Application Insights SDKs


The Application Insights SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed
tracing natively.
With the proper Application Insights SDK installed and configured, tracing information is automatically
collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full
list of supported technologies is available in the Dependency auto-collection documentation.
Additionally, any technology can be tracked manually with a call to TrackDependency on the TelemetryClient.
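A minimal sketch (not part of the course text; the dependency type, name, and data values are illustrative assumptions) of tracking a manual dependency call with TrackDependency:
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

public class OrderRepository
{
    private readonly TelemetryClient telemetryClient = new TelemetryClient();

    public void SaveOrder(string orderId)
    {
        var startTime = DateTimeOffset.UtcNow;
        var timer = Stopwatch.StartNew();
        var success = false;

        try
        {
            // Call the external system here (for example, a legacy data store).
            success = true;
        }
        finally
        {
            timer.Stop();

            // Record the call so it appears as a dependency in Application Insights.
            telemetryClient.TrackDependency(
                "LegacyOrderStore",   // dependency type
                "SaveOrder",          // dependency name
                orderId,              // data associated with the call
                startTime,
                timer.Elapsed,
                success);
        }
    }
}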

Enable via OpenCensus


In addition to the Application Insights SDKs, Application Insights also supports distributed tracing
through OpenCensus. OpenCensus is an open source, vendor-agnostic, single distribution of libraries to
provide metrics collection and distributed tracing for services. It also enables the open source community
to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB.

Instrumenting web pages for Application Insights

If you add Application Insights to your page script, you get timings of page loads and AJAX calls, counts,
and details of browser exceptions and AJAX failures, as well as users and session counts. All these can be
segmented by page, client OS and browser version, geo location, and other dimensions. You can set
alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can
track how the different features of your web page application are used.
Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your
web service is Java or ASP.NET, you can use the server-side SDKs in conjunction with the client-side
JavaScript SDK to get an end-to-end understanding of your app's performance.

Adding the JavaScript SDK


Add the Application Insights JavaScript SDK to your web page or app via one of the following two
options:
●● npm Setup
●● JavaScript Snippet

npm based setup


import { ApplicationInsights } from '@microsoft/applicationinsights-web'

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE'
  /* ...Other Configuration Options... */
} });
appInsights.loadAppInsights();
appInsights.trackPageView(); // Manually call trackPageView to establish the current user/session/pageview

Snippet based setup


If your app does not use npm, you can directly instrument your webpages with Application Insights by pasting a snippet at the top of each of your pages. Preferably, it should be the first script in your <head> section so that it can monitor any potential issues with all of your dependencies. The example below has been truncated for readability.
<script type="text/javascript">
var sdkInstance="appInsightsSDK";window[sdkInstance]="appInsights";var aiName=window[sdkInstance],...(
{
  instrumentationKey:"INSTRUMENTATION_KEY"
}
);window[aiName]=aisdk,aisdk.queue&&0===aisdk.queue.length&&aisdk.trackPageView({});
</script>

For a snippet you can copy and paste please visit https://docs.microsoft.com/en-us/azure/az-
ure-monitor/app/javascript#snippet-based-setup.

Sending telemetry to the Azure portal


By default the Application Insights JavaScript SDK autocollects a number of telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
●● Uncaught exceptions in your app, including information on:
  ●● Stack trace
  ●● Exception details and message accompanying the error
  ●● Line & column number of error
  ●● URL where error was raised
●● Network dependency requests made by your app (XHR and Fetch; fetch collection is disabled by default), including information on:
  ●● URL of dependency source
  ●● Command & method used to request the dependency
  ●● Duration of the request
  ●● Result code and success status of the request
  ●● ID (if any) of user making the request
  ●● Correlation context (if any) where request is made
●● User information (for example, Location, network, IP)
●● Device information (for example, Browser, OS, version, language, resolution, model)
●● Session information

Telemetry initializers
Telemetry initializers are used to modify the contents of collected telemetry before being sent from the
user's browser. They can also be used to prevent certain telemetry from being sent, by returning false.

Multiple telemetry initializers can be added to your Application Insights instance, and they are executed
in order of adding them.
The input argument to addTelemetryInitializer is a callback that takes a ITelemetryItem as an
argument and returns a boolean or void. If returning false, the telemetry item is not sent, else it
proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.

Single Page Applications


By default, this SDK will not handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add enableAutoRouteTracking: true to your setup configuration.
Currently, we offer a separate React plugin which you can initialize with this SDK. It will also accomplish
route change tracking for you, as well as collect other React specific telemetry.

Demo: Instrument an ASP.NET Core app for monitoring in Application Insights

In this demo you will learn how to:
●● Instrument an ASP.NET Core app to send server-side telemetry
●● Instrument an ASP.NET Core app to send client-side telemetry

Prerequisites
This demo is performed in the Cloud Shell, and in Visual Studio Code. The code examples below rely on
the Microsoft.ApplicationInsights.AspNetCore NuGet package.

Login to Azure
1. Login in to the Azure Portal: https://portal.azure.com and launch the Cloud Shell. Be sure to select
PowerShell as the shell.
2. Create a resource group and Application Insights instance with a location that makes sense for you.
$myLocation = Read-Host -Prompt "Enter the region (i.e. westus): "
$myResourceGroup = "az204appinsights-rg"
$myAppInsights = "az204appinsights"

# Create the resource group
New-AzResourceGroup -Name $myResourceGroup -Location $myLocation

# Create App Insights instance
New-AzApplicationInsights -ResourceGroupName $myResourceGroup -Name $myAppInsights -Location $myLocation

3. Save the InstrumentationKey value to Notepad for use later.


4. Navigate to the new Application Insights resource and select Live Metrics Stream. We'll be going
back to this page to view the metrics being sent later.

Create an ASP.NET Core app


1. In a terminal or command window create a new folder for the project, and change in to the new
folder.
md aspnetcoredemo
cd aspnetcoredemo

2. Create a new ASP.NET Core web app.


dotnet new webapp

3. Launch Visual Studio Code in the context of the new folder.


code . --new-window

4. Install the Application Insights SDK NuGet package for ASP.NET Core by running the following
command in a VS Code terminal.
dotnet add package Microsoft.ApplicationInsights.AspNetCore --version 2.8.2

The following lines should appear in the project's .csproj file.

<ItemGroup>
  <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.8.2" />
</ItemGroup>

Enable server-side telemetry in the app


1. Add services.AddApplicationInsightsTelemetry(); to the ConfigureServices()
method in your Startup class, as in this example:
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // The following line enables Application Insights telemetry collection.
    services.AddApplicationInsightsTelemetry();

    // This code adds other services for your application.
    services.AddRazorPages();
}

2. Set up the instrumentation key.


The following code sample shows how to specify an instrumentation key in appsettings.json.
Update the code with the key you saved earlier.
{
  "ApplicationInsights": {
    "InstrumentationKey": "putinstrumentationkeyhere"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  }
}

3. Build and run the app by using the following commands.


dotnet build
dotnet run

4. Open a new browser window and navigate to http://localhost:5000 to view your web app.
5. Set the browser window for the app side-by-side with the portal showing the Live Metrics Stream.
Notice the incoming requests on the Live Metrics Stream as you navigate around the web app.
6. In Visual Studio Code type ctrl-c to close the application.

Enable client-side telemetry in the app


✔️ Note: The following steps illustrate how to enable client-side telemetry in the app. Because there is
no client-side activity in the default app it won't impact the information being sent to Application Insights
in this demo.
1. Add the following injection in the _ViewImports.cshtml file:
@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet

2. Insert the HtmlHelper in the _Layout.cshtml file at the end of the <head> section but before
any other script. If you want to report any custom JavaScript telemetry from the page, inject it after
this snippet:
@Html.Raw(JavaScriptSnippet.FullScript)
</head>

The .cshtml file names referenced earlier are from a default MVC application template. Ultimately, if
you want to properly enable client-side monitoring for your application, the JavaScript snippet must
appear in the <head> section of each page of your application that you want to monitor. You can
accomplish this goal for this application template by adding the JavaScript snippet to _Layout.
cshtml.

Clean up resources
When you're finished, delete the resource group created earlier in the demo.

Analyzing and troubleshooting apps


View activity logs to audit actions on resources
The Azure Activity Log provides insight into subscription-level events that have occurred in Azure. This
includes a range of data, from Azure Resource Manager operational data to updates on Service Health
events. The Activity Log was previously known as Audit Logs or Operational Logs, since the Administrative
category reports control-plane events for your subscriptions.
Use the Activity Log to determine the what, who, and when for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. You can also understand the status of the operation and other relevant properties.
The Activity Log does not include read (GET) operations or operations for resources that use the Classic/
RDFE model.

View the Activity Log


View the Activity Log for all resources from the Monitor menu in the Azure portal. View the Activity Log
for a particular resource from the Activity Log option in that resource's menu. You can also retrieve
Activity Log records with PowerShell, CLI, or REST API.

Export Activity Log


Export the Activity Log to Azure Storage for archiving or stream it to an Event Hub for ingestion by a
third-party service or custom analytics solution.

Alert on Activity Log


You can create an alert when particular events are created in the Activity Log with an Activity Log alert.
You can also create an alert using a log query when your Activity Log is connected to a Log Analytics
workspace, but there is a cost to log query alerts. There is no cost for Activity Log alerts.

Monitor availability and responsiveness of any web site

After you've deployed your web app/website, you can set up recurring tests to monitor availability and
responsiveness. Azure Application Insights sends web requests to your application at regular intervals
from points around the world. It can alert you if your application isn't responding, or if it responds too
slowly.
You can set up availability tests for any HTTP or HTTPS endpoint that is accessible from the public
internet. You don't have to make any changes to the website you're testing. In fact, it doesn't even have
to be a site you own. You can test the availability of a REST API that your service depends on.
There are three types of availability tests:
●● URL ping test: a simple test that you can create in the Azure portal.
●● Multi-step web test: A recording of a sequence of web requests, which can be played back to test
more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to
the portal for execution.

●● Custom Track Availability Tests: If you decide to create a custom application to run availability tests, the TrackAvailability() method can be used to send the results to Application Insights (see the sketch below).
You can create up to 100 availability tests per Application Insights resource.
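The following minimal sketch (not part of the course text; the test name and run location are illustrative assumptions) shows how a custom availability result might be sent with TelemetryClient.TrackAvailability:
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class AvailabilityTestRunner
{
    private readonly TelemetryClient telemetryClient = new TelemetryClient();

    public void ReportResult(bool success, TimeSpan duration)
    {
        var availability = new AvailabilityTelemetry
        {
            Name = "checkout-api-availability",  // illustrative test name
            RunLocation = "custom-runner",       // illustrative run location
            Success = success,
            Duration = duration,
            Timestamp = DateTimeOffset.UtcNow
        };

        telemetryClient.TrackAvailability(availability);

        // Flush so the result is sent promptly, for example from a short-lived job.
        telemetryClient.Flush();
    }
}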

URL ping test


The name “URL ping test” is a bit of a misnomer. To be clear, this test is not making any use of ICMP
(Internet Control Message Protocol) to check your site's availability. Instead it uses more advanced HTTP
request functionality to validate whether an endpoint is responding. It also measures the performance
associated with that response, and adds the ability to set custom success criteria coupled with more
advanced features like parsing dependent requests, and allowing for retries.
We'll walk you through the process of creating a URL ping test. You can create a test by navigating to
Availability pane of your Application Insights resource and selecting Create Test.

Create a test
●● URL: The URL can be any web page you want to test, but it must be visible from the public internet.
The URL can include a query string. So, for example, you can exercise your database a little. If the URL
resolves to a redirect, we follow it up to 10 redirects.
●● Parse dependent requests: Test requests images, scripts, style files, and other files that are part of
the web page under test. The recorded response time includes the time taken to get these files. The
test fails if any of these resources cannot be successfully downloaded within the timeout for the whole
test.
●● Enable retries: If a test fails, it is retried after a short interval. A failure is reported only if three
successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is
temporarily suspended until the next success. This rule is applied independently at each test location.
We recommend this option. On average, about 80% of failures disappear on retry.
●● Test frequency: Sets how often the test is run from each test location. With a default frequency of five
minutes and five test locations, your site is tested on average every minute.

●● Test locations: The places from which our servers send web requests to your URL. Our minimum recommended number of test locations is five, in order to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
Note: We strongly recommend testing from multiple locations with a minimum of five locations. This
is to prevent false alarms that may result from transient issues with a specific location. In addition we
have found that the optimal configuration is to have the number of test locations be equal to the alert
location threshold + 2. Enabling the “Parse dependent requests” option results in a stricter check. The
test could fail for cases which may not be noticeable when manually browsing the site.

Success criteria
●● Test timeout: Decrease this value to be alerted about slow responses. The test is counted as a failure
if the responses from your site have not been received within this period. If you selected Parse
dependent requests, then all the images, style files, scripts, and other dependent resources must
have been received within this period.
●● HTTP response: The returned status code that is counted as a success. 200 is the code that indicates
that a normal web page has been returned.
●● Content match: A string, like “Welcome!” We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes you might have to update it. Only English characters are supported with content match.

Alerts
●● Near-realtime (Preview): We recommend using Near-realtime alerts. Configuring this type of alert is
done after your availability test is created.
●● Classic: We no longer recommended using classic alerts for new availability tests.
●● Alert location threshold: We recommend a minimum of 3/5 locations. The optimal relationship
between alert location threshold and the number of test locations is alert location threshold =
number of test locations - 2, with a minimum of five test locations.

Application Map: Triage Distributed Applications in Application Insights

Application Map helps you spot performance bottlenecks or failure hotspots across all components of
your distributed application. Each node on the map represents an application component or its depend-
encies; and has health KPI and alerts status. You can click through from any component to more detailed
diagnostics, such as Application Insights events. If your app uses Azure services, you can also click
through to Azure diagnostics, such as SQL Database Advisor recommendations.

What is a Component?
Components are independently deployable parts of your distributed/microservices application. Develop-
ers and operations teams have code-level visibility or access to telemetry generated by these application
components.
●● Components are different from “observed” external dependencies such as SQL, EventHub etc. which
your team/organization may not have access to (code or telemetry).
●● Components run on any number of server/role/container instances.

●● Components can be separate Application Insights instrumentation keys (even if subscriptions are
different) or different roles reporting to a single Application Insights instrumentation key. The preview
map experience shows the components regardless of how they are set up.

Composite Application Map


You can see the full application topology across multiple levels of related application components.
Components could be different Application Insights resources, or different roles in a single resource. The
app map finds components by following HTTP dependency calls made between servers with the Applica-
tion Insights SDK installed.
This experience starts with progressive discovery of the components. When you first load the application
map, a set of queries are triggered to discover the components related to this component. A button at
the top-left corner will update with the number of components in your application as they are discov-
ered.
On selecting Update map components, the map is refreshed with all components discovered until that
point. Depending on the complexity of your application, this may take a minute to load.
If all of the components are roles within a single Application Insights resource, then this discovery step is
not required. The initial load for such an application will have all its components.

One of the key objectives with this experience is to be able to visualize complex topologies with hun-
dreds of components.
Click on any component to see related insights and go to the performance and failure triage experience
for that component.

Set cloud role name


Application Map uses the cloud role name property to identify the components on the map. The
Application Insights SDK automatically adds the cloud role name property to the telemetry emitted by
components. For example, the SDK will add a web site name or service role name to the cloud role name
property. However, there are cases where you may want to override the default value. Below is an
example showing how to override the cloud role name and change what gets displayed on the Applica-
tion Map:
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

namespace CustomInitializer.Telemetry
{
    public class MyTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
            {
                // Set the custom role name here.
                telemetry.Context.Cloud.RoleName = "RoleName";
            }
        }
    }
}
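For ASP.NET Core applications, a minimal sketch (an assumption, not from the course text) of wiring the initializer above into dependency injection so that it runs for every telemetry item:
using CustomInitializer.Telemetry;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

public static class TelemetryRegistration
{
    public static void AddCloudRoleNameInitializer(IServiceCollection services)
    {
        // Application Insights picks up all registered ITelemetryInitializer instances.
        services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>();
    }
}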

Implement code that handles transient faults


Transient errors
An application that communicates with elements running in the cloud has to be sensitive to the transient
faults that can occur in this environment. Faults include the momentary loss of network connectivity to
components and services, the temporary unavailability of a service, or timeouts that occur when a service
is busy.
These faults are typically self-correcting, and if the action that triggered a fault is repeated after a suitable
delay, it's likely to be successful. For example, a database service that's processing a large number of
concurrent requests can implement a throttling strategy that temporarily rejects any further requests until
its workload has eased. An application trying to access the database might fail to connect, but if it tries
again after a delay, it might succeed.

Handling transient errors


In the cloud, transient faults aren't uncommon, and an application should be designed to handle them
elegantly and transparently. This minimizes the effects faults can have on the business tasks the applica-
tion is performing.
If an application detects a failure when it tries to send a request to a remote service, it can handle the
failure using the following strategies:
●● Cancel: If the fault indicates that the failure isn't transient or is unlikely to be successful if repeated,
the application should cancel the operation and report an exception. For example, an authentication
failure caused by providing invalid credentials is not likely to succeed no matter how many times it's
attempted.
●● Retry: If the specific fault reported is unusual or rare, it might have been caused by unusual circum-
stances, such as a network packet becoming corrupted while it was being transmitted. In this case, the
application could retry the failing request again immediately, because the same failure is unlikely to
be repeated, and the request will probably be successful.
●● Retry after a delay: If the fault is caused by one of the more commonplace connectivity or busy
failures, the network or service might need a short period of time while the connectivity issues are
corrected or the backlog of work is cleared. The application should wait for a suitable amount of time
before retrying the request.
For the more common transient failures, the period between retries should be chosen to spread requests
from multiple instances of the application as evenly as possible. This reduces the chance of a busy service
continuing to be overloaded. If many instances of an application are continually overwhelming a service
with retry requests, it'll take the service longer to recover.
If the request still fails, the application can wait and make another attempt. If necessary, this process can
be repeated with increasing delays between retry attempts, until some maximum number of requests
have been attempted. The delay can be increased incrementally or exponentially depending on the type
of failure and the probability that it'll be corrected during this time.
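As a minimal sketch (not from the course text) of the exponential variant described above, each failed attempt could double the delay before the next try:
using System;
using System.Threading.Tasks;

public static class ExponentialRetry
{
    public static async Task RunWithRetriesAsync(Func<Task> operation, int maxAttempts = 4)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                await operation();
                return;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Delay doubles after each failure: 1s, 2s, 4s, ...
                TimeSpan delay = TimeSpan.FromSeconds(Math.Pow(2, attempt - 1));
                await Task.Delay(delay);
            }
        }
    }
}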

Retrying after a transient error


The following diagram illustrates invoking an operation in a hosted service using this pattern. If the
request is unsuccessful after a predefined number of attempts, the application should treat the fault as an
exception and handle it accordingly.

1. The application invokes an operation on a hosted service. The request fails, and the service host
responds with HTTP response code 500 (internal server error).
2. The application waits for a short interval and tries again. The request still fails with HTTP response
code 500.
3. The application waits for a longer interval and tries again. The request succeeds with HTTP response
code 200 (OK).
The application should wrap all attempts to access a remote service in code that implements a retry
policy matching one of the strategies listed above. Requests sent to different services can be subject to
different policies. Some vendors provide libraries that implement retry policies, where the application can
specify the maximum number of retries, the amount of time between retry attempts, and other parame-
ters.
An application should log the details of faults and failing operations. This information is useful to opera-
tors. If a service is frequently unavailable or busy, it's often because the service has exhausted its resourc-
es. You can reduce the frequency of these faults by scaling out the service. For example, if a database
service is continually overloaded, it might be beneficial to partition the database and spread the load
across multiple servers.

Handling transient errors in code


This example in C# illustrates an implementation of this pattern. The OperationWithBasicRetryAsync method, shown below, invokes an external service asynchronously through the TransientOperationAsync method. The details of the TransientOperationAsync method will be specific to the service and are omitted from the sample code:
private int retryCount = 3;
private readonly TimeSpan delay = TimeSpan.FromSeconds(5);

public async Task OperationWithBasicRetryAsync()
{
    int currentRetry = 0;
    for (;;)
    {
        try
        {
            await TransientOperationAsync();
            break;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Operation Exception");
            currentRetry++;
            if (currentRetry > this.retryCount || !IsTransient(ex))
            {
                throw;
            }
        }
        await Task.Delay(delay);
    }
}

private async Task TransientOperationAsync()
{
    ...
}

The statement that invokes this method is contained in a try/catch block wrapped in a for loop. The for loop exits if the call to the TransientOperationAsync method succeeds without throwing an exception. If the TransientOperationAsync method fails, the catch block examines the reason for the failure. If it's believed to be a transient error, the code waits for a short delay before retrying the operation.
The for loop also tracks the number of times that the operation has been attempted, and if the code fails three times, the exception is assumed to be more long lasting. If the exception isn't transient or it's long lasting, the catch handler will throw an exception. This exception exits the for loop and should be caught by the code that invokes the OperationWithBasicRetryAsync method.

Detecting if an error is transient in code


The IsTransient method, shown below, checks for a specific set of exceptions that are relevant to the
environment the code is run in. The definition of a transient exception will vary according to the resourc-
es being accessed and the environment the operation is being performed in:
private bool IsTransient(Exception ex)
{
    if (ex is OperationTransientException)
        return true;

    var webException = ex as WebException;
    if (webException != null)
    {
        return new[] {
            WebExceptionStatus.ConnectionClosed,
            WebExceptionStatus.Timeout,
            WebExceptionStatus.RequestCanceled
        }.Contains(webException.Status);
    }

    return false;
}

Lab and review questions


Lab: Monitoring services that are deployed to
Azure

Lab scenario
You have created an API for your next big startup venture. Even though you want to get to market
quickly, you have witnessed other ventures fail when they don’t plan for growth and have too few
resources or too many users. To plan for this, you have decided to take advantage of the scale-out
features of Microsoft Azure App Service, the telemetry features of Application Insights, and the perfor-
mance-testing features of Azure DevOps.

Objectives
After you complete this lab, you will be able to:
●● Create an Application Insights resource.
●● Integrate Application Insights telemetry tracking into an ASP.NET web app and a resource built using
the Web Apps feature of Azure App Service.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 12 review questions


Review Question 1
The Application Insights .NET and .NET Core SDKs ship with two built-in channels.
Which channel below has the capability to store data on a local disk?
†† InMemoryChannel
†† ServerTelemetryChannel
†† LocalStorageChannel
†† None of the above

Review Question 2
True or False, the URL ping test in Application Insights uses ICMP to check a site's availability.
†† True
†† False

Review Question 3
In the cloud, transient faults aren't uncommon, and an application should be designed to handle them
elegantly and transparently.
Which of the following are valid strategies for handling transient errors? (Check all that apply.)
†† Retry after a delay
†† Cancel
†† Retry
†† Hope for the best

Answers
Review Question 1
The Application Insights .NET and .NET Core SDKs ship with two built-in channels.
Which channel below has the capability to store data on a local disk?
†† InMemoryChannel
■■ ServerTelemetryChannel
†† LocalStorageChannel
†† None of the above
Explanation
The ServerTelemetryChannel has retry policies and the capability to store data on a local disk.
Review Question 2
True or False, the URL ping test in Application Insights uses ICMP to check a site's availability.
†† True
■■ False
Explanation
This test is not making any use of ICMP (Internet Control Message Protocol) to check your site's availability.
Instead it uses more advanced HTTP request functionality to validate whether an endpoint is responding.
Review Question 3
In the cloud, transient faults aren't uncommon, and an application should be designed to handle them
elegantly and transparently.
Which of the following are valid strategies for handling transient errors? (Check all that apply.)
■■ Retry after a delay
■■ Cancel
■■ Retry
†† Hope for the best
Explanation
Retry after a delay, Retry, and Cancel are all valid strategies for handling transient faults.
Module 13 Integrate caching and content delivery within solutions

Develop for Azure Cache for Redis


Azure Cache for Redis overview
Caching is the act of storing frequently accessed data in memory that is very close to the application that
consumes the data. Caching is used to increase performance and reduce the load on your servers. Azure
Cache for Redis can be used to create an in-memory cache that can provide excellent latency and
potentially improve performance. Azure Cache for Redis is based on the popular software Redis.
It gives you access to a secure, dedicated Redis cache, managed by Microsoft. A cache created using
Azure Cache for Redis is accessible from any application within Azure. Azure Cache for Redis is typically
used to improve the performance of systems that rely heavily on back-end data stores.
Your cached data is located in-memory on an Azure server running the Redis cache as opposed to being
loaded from disk by a database. Your cache is also highly scalable. You can alter the size and pricing tier
at any time.

What type of data can be stored in the cache?


Redis supports a variety of data types all oriented around binary safe strings. This means that you can use
any binary sequence for a value, from a string like “i-love-rocky-road” to the contents of an image file. An
empty string is also a valid value.
●● Binary-safe strings (most common)
●● Lists of strings
●● Unordered sets of strings
●● Hashes
●● Sorted sets of strings
●● Maps of strings

Each data value is associated to a key which can be used to lookup the value from the cache. Redis works
best with smaller values (100k or less), so consider chopping up bigger data into multiple keys. Storing
larger values is possible (up to 500 MB), but increases network latency and can cause caching and
out-of-memory issues if the cache isn't configured to expire old values.

What is a Redis key?


Redis keys are also binary safe strings. Here are some guidelines for choosing keys:
●● Avoid long keys. They take up more memory and require longer lookup times because they have to
be compared byte-by-byte. If you want to use a binary blob as the key, generate a unique hash and
use that as the key instead. The maximum size of a key is 512 MB, but you should never use a key that
size.
●● Use keys which can identify the data. For example, “sport:football;date:2008-02-02” would be a better
key than "fb:8-2-2". The former is more readable and the extra size is negligible. Find the balance
between size and readability.
●● Use a convention. A good one is “object:id”, as in "sport:football".

How is data stored in a Redis cache?


Data in Redis is stored in nodes and clusters.
Nodes are a space in Redis where your data is stored.
Clusters are sets of three or more nodes your dataset is split across. Clusters are useful because your
operations will continue if a node fails or is unable to communicate to the rest of the cluster.

What are Redis caching architectures?


Redis caching architecture is how we distribute our data in the cache. Redis distributes data in three
major ways:
1. Single node
2. Multiple node
3. Clustered
Redis caching architectures are split across Azure by tiers:
●● Basic cache: A basic cache provides you with a single node Redis cache. The complete dataset will be
stored in a single node. This tier is ideal for development, testing, and non-critical workloads.
●● Standard cache: The standard cache creates multiple node architectures. Redis replicates a cache in a
two-node primary/secondary configuration. Azure manages the replication between the two nodes.
This is a production-ready cache with master/slave replication.
●● Premium tier: The premium tier includes the features of the standard tier but adds the ability to
persist data, take snapshots, and back up data. With this tier, you can create a Redis cluster that
shards data across multiple Redis nodes to increase available memory. The premium tier also supports
an Azure Virtual Network to give you complete control over your connections, subnets, IP addressing,
and network isolation. This tier also includes geo-replication, so you can ensure your data is close to
the app that's consuming it.

Summary
A database is great for storing large amounts of data, but there is an inherent latency when looking up
data. You send a query. The server interprets the query, looks up the data, and returns it. Servers also
have capacity limits for handling requests. If too many requests are made, data retrieval will likely slow
down. Caching will store frequently requested data in memory that can be returned faster than querying
a database, which should lower latency and increase performance. Azure Cache for Redis gives you access
to a secure, dedicated, and scalable Redis cache, hosted in Azure, and managed by Microsoft.

Configure Azure Cache for Redis


Create and configure the Azure Cache for Redis instance
You can create a Redis cache using the Azure portal, the Azure CLI, or Azure PowerShell. There are several
parameters you will need to decide in order to configure the cache properly for your purposes.

Name
The Redis cache will need a globally unique name. The name has to be unique within Azure because it is
used to generate a public-facing URL to connect and communicate with the service.
The name must be between 1 and 63 characters, composed of numbers, letters, and the ‘-’ character. The
cache name can't start or end with the '-' character, and consecutive ‘-’ characters aren't valid.

Resource Group
The Azure Cache for Redis is a managed resource and needs a resource group owner. You can either
create a new resource group, or use an existing one in a subscription you are part of.

Location
You will need to decide where the Redis cache will be physically located by selecting an Azure region. You
should always place your cache instance and your application in the same region. Connecting to a cache
in a different region can significantly increase latency and reduce reliability. If you are connecting to the
cache outside of Azure, then select a location close to where the application consuming the data is
running.
Important: Put the Redis cache as close to the data consumer as you can.

Pricing tier
As mentioned in the last unit, there are three pricing tiers available for an Azure Cache for Redis.
●● Basic: Basic cache ideal for development/testing. Is limited to a single server, 53 GB of memory, and
20,000 connections. There is no SLA for this service tier.
●● Standard: Production cache which supports replication and includes a 99.99% SLA. It supports two servers (master/slave), and has the same memory/connection limits as the Basic tier.
●● Premium: Enterprise tier which builds on the Standard tier and includes persistence, clustering, and
scale-out cache support. This is the highest performing tier with up to 530 GB of memory and 40,000
simultaneous connections.

You can control the amount of cache memory available on each tier - this is selected by choosing a cache
level from C0-C6 for Basic/Standard and P0-P4 for Premium. Check the pricing page1 for full details.
Tip: Microsoft recommends you always use Standard or Premium Tier for production systems. The Basic
Tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches
are really meant for simple dev/test scenarios since they have a shared CPU core and very little memory.
The Premium tier allows you to persist data in two ways to provide disaster recovery:
1. RDB persistence takes a periodic snapshot and can rebuild the cache using the snapshot in case of
failure.
2. AOF persistence saves every write operation to a log that is saved at least once per second. This
creates bigger files than RDB but has less data loss.
There are several other settings which are only available to the Premium tier.

Virtual Network support


If you create a premium tier Redis cache, you can deploy it to a virtual network in the cloud. Your cache
will be available to only other virtual machines and applications in the same virtual network. This provides
a higher level of security when your service and cache are both hosted in Azure, or are connected
through an Azure virtual network VPN.

Clustering support
With a premium tier Redis cache, you can implement clustering to automatically split your dataset among
multiple nodes. To implement clustering, you specify the number of shards to a maximum of 10. The cost
incurred is the cost of the original node, multiplied by the number of shards.
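For example, a Premium cache with clustering enabled might be created from the Azure CLI as sketched
below. This is illustrative only and not part of the original text; the name, resource group, and region are
placeholders, and --shard-count is the parameter that sets the number of shards.
az redis create \
    --name [cache-name] \
    --resource-group [resource-group] \
    --location [region] \
    --sku Premium \
    --vm-size P1 \
    --shard-count 3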

Accessing the Redis instance


Redis supports a set of known commands. A command is typically issued as COMMAND parameter1
parameter2 parameter3.
Here are some common commands you can use:

Command                  Description
ping                     Ping the server. Returns "PONG".
set [key] [value]        Sets a key/value in the cache. Returns "OK" on success.
get [key]                Gets a value from the cache.
exists [key]             Returns '1' if the key exists in the cache, '0' if it doesn't.
type [key]               Returns the type associated to the value for the given key.
incr [key]               Increment the given value associated with key by '1'. The value must be an
                         integer or double value. This returns the new value.
incrby [key] [amount]    Increment the given value associated with key by the specified amount. The
                         value must be an integer or double value. This returns the new value.
del [key]                Deletes the value associated with the key.
flushdb                  Delete all keys and values in the database.

1 https://azure.microsoft.com/pricing/details/cache/
Redis has a command-line tool (redis-cli) you can use to experiment directly with these commands; an
example is shown below.
> set somekey somevalue
OK
> get somekey
"somevalue"
> exists somekey
(integer) 1
> del somekey
(integer) 1
> exists somekey
(integer) 0
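The increment commands from the table above work the same way; here is a short illustrative session
(the values are examples only):
> set counter 10
OK
> incr counter
(integer) 11
> incrby counter 5
(integer) 16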

Adding an expiration time to values


Caching is important because it allows us to store commonly used values in memory. However, we also
need a way to expire values when they are stale. In Redis this is done by applying a time to live (TTL) to a
key.
When the TTL elapses, the key is automatically deleted, exactly as if the DEL command were issued. Here
are some notes on TTL expirations.
●● Expirations can be set with second or millisecond precision.
●● The expire time resolution is always 1 millisecond.
●● Information about expirations is replicated and persisted on disk; time effectively continues to pass
while your Redis server is stopped (Redis saves the date at which a key will expire).
Here is an example of an expiration:
> set counter 100
OK
> expire counter 5
(integer) 1
> get counter
100
... wait ...
> get counter
(nil)
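In a session like the one above, before the key expires, the TTL command returns the remaining number
of seconds (an illustrative value shown here):
> ttl counter
(integer) 3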

Accessing a Redis cache from a client


To connect to an Azure Cache for Redis instance, you'll need several pieces of information. Clients need
the host name, port, and an access key for the cache. You can retrieve this information in the Azure portal
through the Settings > Access Keys page.
●● The host name is the public Internet address of your cache, which was created using the name of the
cache. For example, sportsresults.redis.cache.windows.net.
●● The access key acts as a password for your cache. There are two keys created: primary and secondary.
You can use either key; two are provided so that you can rotate them. For example, you can switch
all of your clients to the secondary key and then regenerate the primary key, which blocks any
applications still using the original primary key. Microsoft recommends periodically regenerating the
keys - much like you would your personal passwords.
⚠️ Warning: Your access keys should be considered confidential information, treat them like you would a
password. Anyone who has an access key can perform any operation on your cache!
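If you prefer the command line to the portal, the Azure CLI can also return the keys; a sketch with
placeholder names:
az redis list-keys --name [cache-name] --resource-group [resource-group]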

Use the client API to interact with Redis


As mentioned earlier, Redis is an in-memory NoSQL database which can be replicated across multiple
servers. It is often used as a cache, but it can also be used as a formal database or even a message broker.
It can store a variety of data types and structures and supports a variety of commands you can issue to
retrieve cached data or query information about the cache itself. The data you work with is always stored
as key/value pairs.

Executing commands on the Redis cache


Typically, a client application will use a client library to form requests and execute commands on a Redis
cache. You can get a list of client libraries directly from the Redis clients page. A popular high-performance
Redis client for .NET is StackExchange.Redis. The package is available through NuGet and can be added
to your .NET code using the command line or IDE.

Connecting to your Redis cache with StackExchange.Redis


Recall that we use the host address, port number, and an access key to connect to a Redis server. Azure
also offers a connection string for some Redis clients which bundles this data together into a single
string. It will look something like the following (with the cache-name and password-here fields filled
in with real values):
[cache-name].redis.cache.windows.net:6380,password=[password-here],ssl=True,abortConnect=False

You can pass this string to StackExchange.Redis to create a connection to the server.
Notice that there are two additional parameters at the end:
●● ssl - ensures that communication is encrypted.
●● abortConnect - when set to False, allows a connection to be created even if the server is unavailable
at that moment.
There are several other optional parameters2 you can append to the string to configure the client library.

2 https://github.com/StackExchange/StackExchange.Redis/blob/master/docs/Configuration.md#configuration-options

✔️ Tip: The connection string should be protected in your application. If the application is hosted on
Azure, consider using an Azure Key Vault to store the value.
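Whichever secret store you use, avoid hardcoding the value. Below is a minimal sketch (not part of the
original text) that reads it at runtime from an environment variable; the variable name is hypothetical.
// Read the Redis connection string from configuration instead of hardcoding it.
string connectionString = Environment.GetEnvironmentVariable("REDIS_CONNECTION_STRING");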

Creating a connection
The main connection object in StackExchange.Redis is the StackExchange.Redis.ConnectionMultiplexer
class. This object abstracts the process of connecting to a Redis server (or group of servers). It's
optimized to manage connections efficiently and intended to be kept around while you need access to
the cache.
You create a ConnectionMultiplexer instance using the static ConnectionMultiplexer.Connect or
ConnectionMultiplexer.ConnectAsync method, passing in either a connection string or a
ConfigurationOptions object.
Here's a simple example:
using StackExchange.Redis;
...
var connectionString = "[cache-name].redis.cache.windows.net:6380,password=[password-here],ssl=True,abortConnect=False";
var redisConnection = ConnectionMultiplexer.Connect(connectionString);
// ^^^ store and re-use this!!!

Once you have a ConnectionMultiplexer, there are three primary things you might want to do (a brief sketch of each follows the list):
1. Access a Redis Database. This is what we will focus on here.
2. Make use of the publisher/subscriber features of Redis. This is outside the scope of this module.
3. Access an individual server for maintenance or monitoring purposes.
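As a rough sketch, those three entry points correspond to the following methods on the multiplexer (the
host name and port here are placeholders):
IDatabase db = redisConnection.GetDatabase();        // 1. access a Redis database
ISubscriber sub = redisConnection.GetSubscriber();   // 2. publish/subscribe features
IServer server = redisConnection.GetServer("[cache-name].redis.cache.windows.net", 6380); // 3. per-server maintenance/monitoring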

Accessing a Redis database


The Redis database is represented by the IDatabase type. You can retrieve one using the GetDatabase()
method:
IDatabase db = redisConnection.GetDatabase();

Tip: The object returned from GetDatabase is a lightweight object, and does not need to be stored.
Only the ConnectionMultiplexer needs to be kept alive.
Once you have an IDatabase object, you can execute methods to interact with the cache. All methods
have synchronous and asynchronous versions which return Task objects to make them compatible with
the async and await keywords.
Here is an example of storing a key/value in the cache:
bool wasSet = db.StringSet("favorite:flavor", "i-love-rocky-road");

The StringSet method returns a bool indicating whether the value was set (true) or not (false). We
can then retrieve the value with the StringGet method:
string value = db.StringGet("favorite:flavor");
Console.WriteLine(value); // displays: "i-love-rocky-road"
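The same operations have asynchronous counterparts; a minimal sketch using the async versions (this
would run inside an async method):
bool wasSet = await db.StringSetAsync("favorite:flavor", "i-love-rocky-road");
string value = await db.StringGetAsync("favorite:flavor");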

Getting and Setting binary values


Recall that Redis keys and values are binary safe. These same methods can be used to store binary data.
There are implicit conversion operators to work with byte[] types so you can work with the data
naturally:
byte[] key = ...;
byte[] value = ...;

db.StringSet(key, value);

byte[] key = ...;
byte[] value = db.StringGet(key);

StackExchange.Redis represents keys using the RedisKey type. This class has implicit conversions to
and from both string and byte[], allowing both text and binary keys to be used without any complication.
Values are represented by the RedisValue type. As with RedisKey, there are implicit conversions in place
to allow you to pass string or byte[].

Other common operations


The IDatabase interface includes several other methods to work with the Redis cache. There are methods
to work with hashes, lists, sets, and ordered sets.
Here are some of the more common ones that work with single keys; you can read the source code3 for
the interface to see the full list. A brief example of a few of these methods follows the table.

Method              Description
CreateBatch         Creates a group of operations that will be sent to the server as a single unit,
                    but not necessarily processed as a unit.
CreateTransaction   Creates a group of operations that will be sent to the server as a single unit
                    and processed on the server as a single unit.
KeyDelete           Delete the key/value.
KeyExists           Returns whether the given key exists in cache.
KeyExpire           Sets a time-to-live (TTL) expiration on a key.
KeyRename           Renames a key.
KeyTimeToLive       Returns the TTL for a key.
KeyType             Returns the string representation of the type of the value stored at key. The
                    different types that can be returned are: string, list, set, zset and hash.
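For example, here are a few of these key-level methods in use - a sketch only, with an illustrative key name:
bool exists = db.KeyExists("favorite:flavor");              // does the key exist?
db.KeyExpire("favorite:flavor", TimeSpan.FromMinutes(5));   // set a 5-minute TTL
TimeSpan? ttl = db.KeyTimeToLive("favorite:flavor");        // read the remaining TTL
bool deleted = db.KeyDelete("favorite:flavor");             // remove the key/value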

Executing other commands


The IDatabase object has an Execute and ExecuteAsync method which can be used to pass textual
commands to the Redis server. For example:

3 https://github.com/StackExchange/StackExchange.Redis/blob/master/src/StackExchange.Redis/Interfaces/IDatabase.cs

var result = db.Execute("ping");
Console.WriteLine(result.ToString()); // displays: "PONG"

The Execute and ExecuteAsync methods return a RedisResult object which is a data holder that
includes two properties:
●● Type which returns a string indicating the type of the result - “STRING”, "INTEGER", etc.
●● IsNull a true/false value to detect when the result is null.
You can then use ToString() on the RedisResult to get the actual return value.
You can use Execute to perform any supported commands - for example, we can get all the clients
connected to the cache (“CLIENT LIST”):
var result = await db.ExecuteAsync("client", "list");
Console.WriteLine($"Type = {result.Type}\r\nResult = {result}");

This would output all the connected clients:


Type = BulkString
Result = id=9469 addr=16.183.122.154:54961 fd=18 name=DESKTOP-AAAAAA age=0
idle=0 flags=N db=0 sub=1 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0
omem=0 ow=0 owmem=0 events=r cmd=subscribe numops=5
id=9470 addr=16.183.122.155:54967 fd=13 name=DESKTOP-BBBBBB age=0 idle=0
flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0
ow=0 owmem=0 events=r cmd=client numops=17

Storing more complex values


Redis is oriented around binary safe strings, but you can cache object graphs by serializing them to a
textual format - typically XML or JSON. For example, perhaps for our statistics, we have a GameStat
object which looks like:
public class GameStat
{
    public string Id { get; set; }
    public string Sport { get; set; }
    public DateTimeOffset DatePlayed { get; set; }
    public string Game { get; set; }
    public IReadOnlyList<string> Teams { get; set; }
    public IReadOnlyList<(string team, int score)> Results { get; set; }

    public GameStat(string sport, DateTimeOffset datePlayed, string game, string[] teams,
        IEnumerable<(string team, int score)> results)
    {
        Id = Guid.NewGuid().ToString();
        Sport = sport;
        DatePlayed = datePlayed;
        Game = game;
        Teams = teams.ToList();
        Results = results.ToList();
    }

    public override string ToString()
    {
        return $"{Sport} {Game} played on {DatePlayed.Date.ToShortDateString()} - " +
               $"{String.Join(',', Teams)}\r\n\t" +
               $"{String.Join('\t', Results.Select(r => $"{r.team} - {r.score}\r\n"))}";
    }
}

We could use the Newtonsoft.Json library to turn an instance of this object into a string:
var stat = new GameStat("Soccer", new DateTime(2019, 7, 16), "Local Game",
new[] { "Team 1", "Team 2" },
new[] { ("Team 1", 2), ("Team 2", 1) });

string serializedValue = Newtonsoft.Json.JsonConvert.SerializeObject(stat);


bool added = db.StringSet("event:2019-local-game", serializedValue);

We could retrieve it and turn it back into an object using the reverse process:
var result = db.StringGet("event:2019-local-game");
var stat = Newtonsoft.Json.JsonConvert.DeserializeObject<GameStat>(result.ToString());
Console.WriteLine(stat.Sport); // displays "Soccer"

Cleaning up the connection


Once you are done with the Redis connection, you can Dispose the ConnectionMultiplexer. This
will close all connections and shut down the communication to the server.
redisConnection.Dispose();
redisConnection = null;

Demo: Connect an app to Azure Cache for Redis by using .NET Core

In this demo you will learn how to:
●● Create a new Redis Cache instance by using Azure CLI commands.
●● Create a .NET Core console app to add and retrieve values from the cache by using the
StackExchange.Redis NuGet package.

Prerequisites
This demo is performed in Visual Studio Code (VS Code).

Create demo folder and launch VS Code


1. Open a PowerShell terminal in your local OS and create a new directory for the project.

md az204-redisdemo

2. Change into the new directory and launch VS Code.


cd az204-redisdemo
code .

3. Open a terminal in VS Code and log in to Azure.


az login

Create a new Redis Cache instance


1. Create a resource group. Replace <myRegion> with a location that makes sense for you. Copy the first
line by itself and edit the value.
$myLocation="<myRegion>"
az group create -n az204-redisdemo-rg -l $myLocation

2. Create a Redis Cache instance by using the az redis create command. The instance name needs
to be unique and the script below will attempt to generate one for you. This command will take a few
minutes to complete.
$redisname = "az204redis" + $(get-random -minimum 10000 -maximum 100000)
az redis create -l $myLocation -g az204-redisdemo-rg -n $redisname --sku Basic --vm-size c0

3. Open the Azure Portal (https://portal.azure.com) and copy the connection string to the new Redis
Cache instance.
●● Navigate to the new Redis Cache.
●● Select Access keys in the Settings section of the Navigation Pane.
●● Copy the Primary connection string (StackExchange.Redis) value and save to Notepad.

Create the console application


1. Create a console app by running the command below in the VS Code terminal.
dotnet new console

2. Add the StackExchange.Redis NuGet package to the project.


dotnet add package StackExchange.Redis

3. In the Program.cs file add the using statements below at the top.
using StackExchange.Redis;
using System.Threading.Tasks;

4. Let's have the Main method run asynchronously by changing it to the following:

static async Task Main(string[] args)

5. Connect to the cache by replacing the existing code in the Main method with the following code. Set
the connectionString variable to the value you copied from the portal.
string connectionString = "YOUR_CONNECTION_STRING";

using (var cache = ConnectionMultiplexer.Connect(connectionString))
{

✔️ Note: The connection to Azure Cache for Redis is managed by the ConnectionMultiplexer
class. This class should be shared and reused throughout your client application. We do not want to
create a new connection for each operation. Instead, we want to store it off as a field in our class and
reuse it for each operation. Here we are only going to use it in the Main method, but in a production
application, it should be stored in a class field, or a singleton.
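One common way to do that - a sketch only, not part of this demo's steps - is a lazily initialized static
property that creates the multiplexer on first use:
// connectionString would come from configuration; Lazy<T> ensures a single shared connection.
private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
    new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(connectionString));

public static ConnectionMultiplexer Connection => lazyConnection.Value;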

Add a value to the cache


Now that we have the connection, let's add a value to the cache.
1. Inside the using block after the connection has been created, use the GetDatabase method to
retrieve an IDatabase instance.
IDatabase db = cache.GetDatabase();

2. Call StringSetAsync on the IDatabase object to set the key “test:key” to the value "100".
The return value from StringSetAsync is a bool indicating whether the key was added. Append
the code below to what you entered in Step 1 of this section.
bool setValue = await db.StringSetAsync("test:key", "100");
Console.WriteLine($"SET: {setValue}");

Get a value from the cache


1. Next, retrieve the value using StringGetAsync. This takes the key to retrieve and returns the value.
Append the code below to what you entered in Step 2 above.
string getValue = await db.StringGetAsync("test:key");
Console.WriteLine($"GET: {getValue}");

2. Build and run the console app.


dotnet build
dotnet run

The output should be similar to the following:


SET: True
GET: 100

Other operations
Let's add a few additional methods to the code.
1. Execute “PING” to test the server connection. It should respond with "PONG". Append the following
code to the using block.
var result = await db.ExecuteAsync("ping");
Console.WriteLine($"PING = {result.Type} : {result}");

2. Execute “FLUSHDB” to clear the database values. It should respond with "OK". Append the following
code to the using block.
result = await db.ExecuteAsync("flushdb");
Console.WriteLine($"FLUSHDB = {result.Type} : {result}");

3. Build and run the console app.


dotnet build
dotnet run

The output should be similar to the following:


SET: True
GET: 100
PING = SimpleString : PONG
FLUSHDB = SimpleString : OK

Clean up resources
When you're finished with the demo you can clean up the resources by deleting the resource group
created earlier. The following command can be run in the VS Code terminal.
az group delete -n az204-redisdemo-rg --no-wait --yes

Develop for storage on CDNs


Azure CDN
In Azure, the Azure Content Delivery Network (Azure CDN) is a global CDN solution for delivering
high-bandwidth content that is hosted in Azure or in any other location. Using Azure CDN, you can cache
publicly available objects loaded from Azure Blob storage, a web application, a virtual machine, or any
publicly accessible web server. Azure CDN can also accelerate dynamic content, which cannot be cached,
by taking advantage of various network optimizations by using CDN POPs. An example is using route
optimization to bypass Border Gateway Protocol (BGP).
Here’s how Azure CDN works.

1. A user (Alice) requests a file (also called an asset) by using a URL with a special domain name, such as
<endpoint name>.azureedge.net. This name can be an endpoint hostname or a custom domain. The
DNS routes the request to the best performing POP location, which is usually the POP that is
geographically closest to the user.
2. If no edge servers in the POP have the file in their cache, the POP requests the file from the origin
server. The origin server can be an Azure Web App, Azure Cloud Service, Azure Storage account, or
any publicly accessible web server.
3. The origin server returns the file to an edge server in the POP.
4. An edge server in the POP caches the file and returns the file to the original requestor (Alice). The file
remains cached on the edge server in the POP until the time-to-live (TTL) specified by its HTTP
headers expires. If the origin server didn't specify a TTL, the default TTL is seven days.
5. Additional users can then request the same file by using the same URL that Alice used, and can also
be directed to the same POP.
6. If the TTL for the file hasn't expired, the POP edge server returns the file directly from the cache. This
process results in a faster, more responsive user experience.

Manage Azure CDN by using Azure CLI


The Azure Command-Line Interface (Azure CLI) provides one of the most flexible methods to manage
your Azure CDN profiles and endpoints. You can get started by listing all of your existing CDN profiles:

az cdn profile list

This will globally list every CDN profile associated with your subscription. If you want to filter this list
down to a specific resource group, you can use the --resource-group parameter:
az cdn profile list --resource-group ExampleGroup

To create a new profile, use the create verb for the az cdn profile command group:
az cdn profile create --name DemoProfile --resource-group ExampleGroup

By default, the CDN will be created by using the standard tier and the Akamai provider. You can
customize this further by using the --sku parameter and one of the following options:
●● Custom_Verizon
●● Premium_Verizon
●● Standard_Akamai
●● Standard_ChinaCdn
●● Standard_Verizon
After you have created a new profile, you can use that profile to create an endpoint. Each endpoint
requires you to specify a profile, a resource group, and an origin URL:
az cdn endpoint create \
--name ContosoEndpoint \
--origin www.contoso.com \
--profile-name DemoProfile \
--resource-group ExampleGroup

You can customize the endpoint further by assigning a custom domain to the CDN endpoint. This helps
ensure that users see only the domains you choose instead of the Azure CDN domains:
az cdn custom-domain create \
--name FilesDomain \
--hostname files.contoso.com \
--endpoint-name ContosoEndpoint \
--profile-name DemoProfile \
--resource-group ExampleGroup
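To verify what you have created so far, you can list the endpoints in a profile; a sketch using the same
placeholder names (the --output table flag is optional):
az cdn endpoint list \
    --profile-name DemoProfile \
    --resource-group ExampleGroup \
    --output table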

Cache expiration in Azure CDN


Because a cached resource can potentially be out-of-date or stale (compared to the corresponding
resource on the origin server), it is important for any caching mechanism to control when content is
refreshed. To save time and bandwidth consumption, a cached resource is not compared to the version
on the origin server every time it is accessed. Instead, as long as a cached resource is considered to be
fresh, it is assumed to be the most current version and is sent directly to the client. A cached resource is
considered to be fresh when its age is less than the age or period defined by a cache setting. For
example, when a browser reloads a webpage, it verifies that each cached resource on your hard drive is
fresh and loads it. If the resource is not fresh (stale), an up-to-date copy is loaded from the server.

Caching rules
Azure CDN caching rules specify cache expiration behavior both globally and with custom conditions.
There are two types of caching rules:
●● Global caching rules. You can set one global caching rule for each endpoint in your profile that affects
all requests to the endpoint. The global caching rule overrides any HTTP cache-directive headers, if
set.
●● Custom caching rules. You can set one or more custom caching rules for each endpoint in your profile.
Custom caching rules match specific paths and file extensions; are processed in order; and override
the global caching rule, if set.
For global and custom caching rules, you can specify the cache expiration duration in days, hours,
minutes, and seconds.
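For reference, the kind of HTTP cache-directive header that a global caching rule would override is an
origin response header such as the following (an illustrative value, not from the original text):
Cache-Control: public, max-age=3600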

Purging and preloading assets by using the Azure CLI


The Azure CLI provides a special purge verb that will unpublish cached assets from an endpoint. This is
very useful if you have an application scenario where a large amount of data is invalidated and should be
updated in the cache. To unpublish assets, you must specify either a file path, a wildcard directory, or
both:
az cdn endpoint purge \
--content-paths '/css/*' '/js/app.js' \
--name ContosoEndpoint \
--profile-name DemoProfile \
--resource-group ExampleGroup

You can also preload assets into an endpoint. This is useful for scenarios where your application creates a
large number of assets, and you want to improve the user experience by prepopulating the cache before
any actual requests occur:
az cdn endpoint load \
--content-paths '/img/*' '/js/module.js' \
--name ContosoEndpoint \
--profile-name DemoProfile \
--resource-group ExampleGroup

Lab and review questions


Lab: Enhancing a web application by using the
Azure Content Delivery Network

Lab scenario
Your marketing organization has been tasked with building a website landing page to host content about
an upcoming edX course. While designing the website, your team decided that multimedia videos and
image content would be the ideal way to convey your marketing message. The website is already
completed and available using a Docker container, and your team also decided that it would like to use a
content delivery network (CDN) to improve the performance of the images, the videos, and the website
itself. You have been tasked with using Microsoft Azure Content Delivery Network to improve the
performance of both standard and streamed content on the website.

Objectives
After you complete this lab, you will be able to:
●● Register a Microsoft.CDN resource provider.
●● Create Content Delivery Network resources.
●● Create and configure Content Delivery Network endpoints that are bound to various Azure services.

Lab updates
The labs are updated on a regular basis. There may be some small changes to the objectives and/or
scenario. For the latest information please visit:
●● https://github.com/MicrosoftLearning/AZ-204-DevelopingSolutionsforMicrosoftAzure

Module 13 review questions


Review Question 1
Data in Redis cache is stored in nodes and clusters. Which of the statements below are true?
(Select all that apply.)
†† Nodes are a collection of clusters
†† Nodes are a space where data is stored
†† Clusters are sets of three or more nodes
†† Clusters are a space where data is stored

Review Question 2
True or False: Azure Redis cache resources require a globally unique name.
†† True
†† False

Review Question 3
The Redis database is represented by the IDatabase type. Which of the following methods creates a group of
operations to be sent to the server and processed as a single unit?
†† CreateBatch
†† CreateTransaction
†† CreateSingle
†† SendBatch

Answers
Review Question 1
Data in Redis cache is stored in nodes and clusters. Which of the statements below are true?
(Select all that apply.)
†† Nodes are a collection of clusters
■■ Nodes are a space where data is stored
■■ Clusters are sets of three or more nodes
†† Clusters are a space where data is stored
Explanation
Nodes are a space where data is stored, and clusters are sets of three or more nodes across which your
dataset is split.
Review Question 2
True or False: Azure Redis cache resources require a globally unique name.
■■ True
†† False
Explanation
The Redis cache will need a globally unique name. The name has to be unique within Azure because it is
used to generate a public-facing URL to connect and communicate with the service.
Review Question 3
The Redis database is represented by the IDatabase type. Which of the following methods creates a
group of operations to be sent to the server and processed as a single unit?
†† CreateBatch
■■ CreateTransaction
†† CreateSingle
†† SendBatch
Explanation
The CreateTransaction method creates a group of operations that will be sent to the server as a single unit
and processed on the server as a single unit.
