Get started guide for Azure developers
What is Azure?
Azure is a complete cloud platform that can host your existing applications and streamline new application
development. Azure can even enhance on-premises applications. Azure integrates the cloud services that you need
to develop, test, deploy, and manage your applications, all while taking advantage of the efficiencies of cloud
computing.
By hosting your applications in Azure, you can start small and easily scale your application as your customer
demand grows. Azure also offers the reliability that’s needed for high-availability applications, even including
failover between different regions. The Azure portal lets you easily manage all your Azure services. You can also
manage your services programmatically by using service-specific APIs and templates.
This guide is an introduction to the Azure platform for application developers. It provides guidance and direction
that you need to start building new applications in Azure or migrating existing applications to Azure.
Where do I start?
With all the services that Azure offers, it can be an intimidating task to figure out which services you need to
support your solution architecture. This section highlights the Azure services that developers commonly use. For a
list of all Azure services, see the Azure documentation.
First, you must decide how to host your application in Azure. Do you need to manage your entire infrastructure
as a virtual machine (VM)? Can you use the platform management facilities that Azure provides? Or do you need a
serverless framework to host code execution only?
Your application needs cloud storage, which Azure provides several options for. You can take advantage of Azure's
enterprise authentication. There are also tools for cloud-based development and monitoring, and most hosting
services offer DevOps integration.
Now, let's look at some of the specific services that we recommend investigating for your applications.
Application hosting
Azure provides several cloud-based compute offerings to run your application so that you don't have to worry
about the infrastructure details. You can easily scale up or scale out your resources as your application usage
grows.
Azure offers services that support your application development and hosting needs. Azure provides Infrastructure
as a Service (IaaS) to give you full control over your application hosting. Azure's Platform as a Service (PaaS)
offerings provide the fully managed services needed to power your apps. There's even true serverless hosting in
Azure where all you need to do is write your code.
Azure App Service
When you want the quickest path to publish your web-based projects, consider Azure App Service. App Service
makes it easy to extend your web apps to support your mobile clients and publish easily consumed REST APIs.
This platform provides authentication by using social providers, traffic-based autoscaling, testing in production, and
continuous and container-based deployments.
You can create web apps, mobile app back ends, and API apps.
Because all three app types share the App Service runtime, you can host a website, support mobile clients, and
expose your APIs in Azure, all from the same project or solution. To learn more about App Service, see What is
Azure Web Apps.
App Service has been designed with DevOps in mind. It supports various tools for publishing and continuous
integration deployments. These tools include GitHub webhooks, Jenkins, Azure DevOps, TeamCity, and others.
You can migrate your existing applications to App Service by using the online migration tool.
When to use: Use App Service when you’re migrating existing web applications to Azure, and when you need
a fully-managed hosting platform for your web apps. You can also use App Service when you need to support
mobile clients or expose REST APIs with your app.
Get started: App Service makes it easy to create and deploy your first web app, mobile app, or API app.
Try it now: App Service lets you provision a short-lived app to try the platform without having to sign up for
an Azure account. Try the platform and create your Azure App Service app.
Azure Virtual Machines
When to use: Use Virtual Machines when you want full control over your application infrastructure or need to
migrate on-premises application workloads to Azure without having to make changes.
Get started: Create a Linux VM or Windows VM from the Azure portal.
Azure Functions
When to use: Use Azure Functions when you have code that is triggered by other Azure services, by web-based
events, or on a schedule. You can also use Functions when you don't need the overhead of a complete hosted
project or when you only want to pay for the time that your code runs. To learn more, see Azure Functions
Overview.
Get started: Follow the Functions quickstart tutorial to create your first function from the portal.
Try it now: Azure Functions lets you run your code without having to sign up for an Azure account. Try it now
and create your first Azure Function.
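The programming model is simple: a function receives a trigger payload and returns a response. The sketch below mimics the shape of an HTTP-triggered function in plain Python so it can run anywhere; the HttpRequest and HttpResponse classes here are illustrative stand-ins, not the real azure.functions types.

```python
# Plain-Python stand-ins for the request/response types an HTTP trigger
# provides. In Azure, the real types come from the azure.functions package.
class HttpRequest:
    def __init__(self, params=None):
        self.params = params or {}  # query-string parameters

class HttpResponse:
    def __init__(self, body, status_code=200):
        self.body = body
        self.status_code = status_code

def main(req: HttpRequest) -> HttpResponse:
    """Entry point the Functions host would invoke on each HTTP request."""
    name = req.params.get("name")
    if not name:
        return HttpResponse("Pass a name in the query string.", status_code=400)
    return HttpResponse(f"Hello, {name}.")

# Local invocation -- in Azure, the Functions runtime does this for you
# and you pay only for the time this code runs.
resp = main(HttpRequest(params={"name": "Azure"}))
print(resp.status_code, resp.body)
```

The same shape applies regardless of trigger type: the host passes the trigger payload in and handles hosting, scaling, and billing around your function.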
Azure Service Fabric
When to use: Service Fabric is a good choice when you’re creating an application or rewriting an existing
application to use a microservice architecture. Use Service Fabric when you need more control over, or direct
access to, the underlying infrastructure.
Get started: Create your first Azure Service Fabric application.
Azure Cosmos DB: A globally distributed, multi-model database service.
When to use: When your application needs document, table, or graph databases, including MongoDB
databases, with multiple well-defined consistency models.
Get started: Build an Azure Cosmos DB web app. If you’re a MongoDB developer, see Build a
MongoDB web app with Azure Cosmos DB.
Azure Storage: Offers durable, highly available storage for blobs, queues, files, and other kinds of
nonrelational data. Storage provides the storage foundation for VMs.
When to use: When your app stores nonrelational data, such as key-value pairs (tables), blobs, file
shares, or messages (queues).
Get started: Choose from one of these types of storage: blobs, tables, queues, or files.
Azure SQL Database: An Azure-based version of the Microsoft SQL Server engine for storing relational
tabular data in the cloud. SQL Database provides predictable performance, scalability with no downtime,
business continuity, and data protection.
When to use: When your application requires data storage with referential integrity, transactional
support, and support for T-SQL queries.
Get started: Create a SQL database in minutes by using the Azure portal.
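Referential integrity and transactional support are the main reasons to choose a relational store. As a hedged local illustration (Python's built-in sqlite3 standing in for SQL Database, with made-up table names), the snippet shows a foreign-key constraint rejecting an orphaned row and a transaction committing atomically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id))""")

with conn:  # transaction: commits on success, rolls back on error
    conn.execute("INSERT INTO customers VALUES (1, 'Contoso')")
    conn.execute("INSERT INTO orders VALUES (100, 1)")

try:
    # No customer 999 exists, so the database rejects the orphaned order.
    conn.execute("INSERT INTO orders VALUES (101, 999)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

Against Azure SQL Database the same DDL and constraints apply, expressed in T-SQL and reached over a normal database driver.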
You can use Azure Data Factory to move existing on-premises data to Azure. If you aren't ready to move data to the
cloud, Hybrid Connections in Azure App Service lets you connect your App Service hosted app to on-premises
resources. You can also connect to Azure data and storage services from your on-premises applications.
Docker support
Docker containers, a form of OS virtualization, let you deploy applications in a more efficient and predictable way.
A containerized application works in production the same way as on your development and test systems. You can
manage containers by using standard Docker tools. You can use your existing skills and popular open-source tools
to deploy and manage container-based applications on Azure.
Azure provides several ways to use containers in your applications.
Azure Docker VM extension: Lets you configure your VM with Docker tools to act as a Docker host.
When to use: When you want to generate consistent container deployments for your applications on a
VM, or when you want to use Docker Compose.
Get started: Create a Docker environment in Azure by using the Docker VM extension.
Azure Kubernetes Service: Lets you create, configure, and manage a cluster of virtual machines that are
preconfigured to run containerized applications. To learn more about Azure Kubernetes Service, see Azure
Kubernetes Service introduction.
When to use: When you need to build production-ready, scalable environments that provide additional
scheduling and management tools, or when you’re deploying a Docker Swarm cluster.
Get started: Deploy a Kubernetes Service cluster.
Docker Machine: Lets you install and manage a Docker Engine on virtual hosts by using docker-machine
commands.
When to use: When you need to quickly prototype an app by creating a single Docker host.
Custom Docker image for App Service: Lets you use Docker containers from a container registry or a
custom container when you deploy a web app on Linux.
Authentication
It's crucial to not only know who is using your applications, but also to prevent unauthorized access to your
resources. Azure provides several ways to authenticate your app clients.
Azure Active Directory (Azure AD): The Microsoft multitenant, cloud-based identity and access
management service. You can add single sign-on (SSO) to your applications by integrating with Azure AD.
You can access directory properties by using the Azure AD Graph API directly or the Microsoft Graph API.
You can integrate with Azure AD support for the OAuth 2.0 authorization framework and OpenID Connect
by using native HTTP/REST endpoints and the multiplatform Azure AD authentication libraries.
When to use: When you want to provide an SSO experience, work with Graph-based data, or
authenticate domain-based users.
Get started: To learn more, see the Azure Active Directory developer's guide.
App Service Authentication: When you choose App Service to host your app, you also get built-in
authentication support for Azure AD, along with social identity providers—including Facebook, Google,
Microsoft, and Twitter.
When to use: When you want to enable authentication in an App Service app by using Azure AD, social
identity providers, or both.
Get started: To learn more about authentication in App Service, see Authentication and authorization in
Azure App Service.
To learn more about security best practices in Azure, see Azure security best practices and patterns.
Monitoring
With your application up and running in Azure, you need to monitor performance, watch for issues, and see how
customers are using your app. Azure provides several monitoring options.
Application Insights: An Azure-hosted extensible analytics service that integrates with Visual Studio to
monitor your live web applications. It gives you the data that you need to improve the performance and
usability of your apps continuously. This improvement occurs whether you host your applications on Azure
or not.
Azure Monitor: A service that helps you to visualize, query, route, archive, and act on the metrics and logs
that you generate with your Azure infrastructure and resources. Monitor is a single source for monitoring
Azure resources and provides the data views that you see in the Azure portal.
DevOps integration
Whether it's provisioning VMs or publishing your web apps with continuous integration, Azure integrates with
most of the popular DevOps tools. You can work with the tools that you already have and maximize your existing
experience with support for tools like:
Jenkins
GitHub
Puppet
Chef
TeamCity
Ansible
Azure DevOps
Get started: To see DevOps options for an App Service app, see Continuous Deployment to Azure App
Service.
Try it now: Try out several of the DevOps integrations.
Azure regions
Azure is a global cloud platform that is generally available in many regions around the world. When you provision
a service, application, or VM in Azure, you're asked to select a region. This region represents a specific datacenter
where your application runs or where your data is stored. These regions correspond to specific locations, which are
published on the Azure regions page.
Choose the best region for your application and data
One of the benefits of using Azure is that you can deploy your applications to various datacenters around the
globe. The region that you choose can affect the performance of your application. For example, it's better to choose
a region that’s closer to most of your customers to reduce latency in network requests. You might also want to
select your region to meet the legal requirements for distributing your app in certain countries/regions. It's always
a best practice to store application data in the same datacenter or in a datacenter as near as possible to the
datacenter that is hosting your application.
Multi-region apps
Although unlikely, it’s not impossible for an entire datacenter to go offline because of an event such as a natural
disaster or Internet failure. It’s a best practice to host vital business applications in more than one datacenter to
provide maximum availability. Using multiple regions can also reduce latency for global users and provide
additional opportunities for flexibility when updating applications.
Some services, such as Virtual Machines and App Service, use Azure Traffic Manager to enable multi-region
support with failover between regions to support high-availability enterprise applications. For an example, see
Azure reference architecture: Run a web application in multiple regions.
When to use: When you have enterprise and high-availability applications that benefit from failover and
replication.
Azure Resource Manager
When to use: Use Resource Manager templates when you want a template-based deployment for your app
that you can manage programmatically by using REST APIs, the Azure CLI, and Azure PowerShell.
Get started: To get started using templates, see Authoring Azure Resource Manager templates.
Role-based access control (RBAC): Lets you control access to Azure resources at a fine-grained level.
When to use: When you need fine-grained access management for users and groups or when you need
to make a user an owner of a subscription.
Get started: To learn more, see Manage access using RBAC and the Azure portal.
Service principal objects: Along with providing access to user principals and groups, you can grant the
same access to a service principal.
When to use: When you’re programmatically managing Azure resources or granting access for
applications. For more information, see Create Active Directory application and service principal.
Tags
Azure Resource Manager lets you assign custom tags to individual resources. Tags, which are key-value pairs, can
be helpful when you need to organize resources for billing or monitoring. Tags provide a way to track
resources across multiple resource groups. You can assign tags the following ways:
In the portal
In the Azure Resource Manager template
Using the REST API
Using the Azure CLI
Using PowerShell
You can assign multiple tags to each resource. To learn more, see Using tags to organize your Azure resources.
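Because tags are plain key-value pairs, tracking resources across resource groups is just a filter over the tag set. A small sketch (the resource and group names are made up for illustration):

```python
# Hypothetical inventory spanning several resource groups.
resources = [
    {"name": "web-vm-1", "group": "rg-web",  "tags": {"env": "prod", "dept": "sales"}},
    {"name": "db-vm-1",  "group": "rg-data", "tags": {"env": "prod", "dept": "sales"}},
    {"name": "test-vm",  "group": "rg-web",  "tags": {"env": "dev"}},
]

def by_tag(resources, key, value):
    """Return resource names carrying a given tag, regardless of resource group."""
    return [r["name"] for r in resources if r["tags"].get(key) == value]

# Spans rg-web and rg-data -- something resource-group scoping alone can't do.
print(by_tag(resources, "env", "prod"))
```

This is the same query shape the portal, CLI, and billing reports apply when they group costs or views by tag.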
Billing
In the move from on-premises computing to cloud-hosted services, tracking and estimating service usage and
related costs are significant concerns. It’s important to estimate what new resources cost to run on a monthly basis.
You can also project how billing will look for a given month based on current spending.
Get resource usage data
Azure provides a set of Billing REST APIs that give access to resource consumption and metadata information for
Azure subscriptions. These Billing APIs give you the ability to better predict and manage Azure costs. You can track
and analyze spending in hourly increments and create spending alerts. You can also predict future billing based on
current usage trends.
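Projecting month-end spend from current usage is simple arithmetic once the Usage and RateCard data have been joined; the sketch below assumes you have already reduced that data to spend-so-far and days elapsed:

```python
def project_month_end(spend_to_date: float, days_elapsed: int, days_in_month: int) -> float:
    """Linear projection of month-end spend from the current daily run rate."""
    if days_elapsed == 0:
        return 0.0
    daily_rate = spend_to_date / days_elapsed
    return daily_rate * days_in_month

# $120 spent in the first 10 days of a 30-day month projects to $360.
print(project_month_end(120.0, 10, 30))
```

A real forecast would weight recent days or account for scheduled scale changes, but the run-rate projection above is the baseline the billing trend views start from.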
Get started: To learn more about using the Billing APIs, see Azure Billing Usage and RateCard APIs overview.
Azure subscription and service limits, quotas, and constraints
This document lists some of the most common Microsoft Azure limits, which are also sometimes called quotas.
To learn more about Azure pricing, see Azure pricing overview. There, you can estimate your costs by using the
pricing calculator. You also can go to the pricing details page for a particular service, for example, Windows VMs.
For tips to help manage your costs, see Prevent unexpected costs with Azure billing and cost management.
Managing limits
NOTE
Some services have adjustable limits.
When a service doesn't have adjustable limits, the following tables use the header Limit. In those cases, the default and the
maximum limits are the same.
When the limit can be adjusted, the tables include Default limit and Maximum limit headers. The limit can be raised above
the default limit but not above the maximum limit.
If you want to raise the limit or quota above the default limit, open an online customer support request at no charge.
Free Trial subscriptions aren't eligible for limit or quota increases. If you have a Free Trial subscription, you can
upgrade to a Pay-As-You-Go subscription. For more information, see Upgrade your Azure Free Trial subscription
to a Pay-As-You-Go subscription and the Free Trial subscription FAQ.
Some limits are managed at a regional level.
Let's use vCPU quotas as an example. To request a quota increase with support for vCPUs, you must decide how
many vCPUs you want to use in which regions. You then make a specific request for Azure resource group vCPU
quotas for the amounts and regions that you want. If you need to use 30 vCPUs in West Europe to run your
application there, you specifically request 30 vCPUs in West Europe. Your vCPU quota isn't increased in any other
region--only West Europe has the 30-vCPU quota.
As a result, decide what your Azure resource group quotas must be for your workload in any one region. Then
request that amount in each region into which you want to deploy. For help in how to determine your current
quotas for specific regions, see Resolve errors for resource quotas.
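Because quotas are granted per region, a deployment check has to compare each region's requested vCPUs against that region's own quota; a quota in West Europe says nothing about any other region. A sketch of that check (the quota numbers are illustrative, not real subscription values):

```python
# Illustrative per-region vCPU quotas; real values come from your subscription.
quotas = {"westeurope": 30, "northeurope": 10}

def over_quota_regions(requested: dict, quotas: dict) -> list:
    """Return the regions where the requested vCPU count exceeds that region's quota."""
    return [region for region, vcpus in requested.items()
            if vcpus > quotas.get(region, 0)]

# 30 vCPUs fit in West Europe, but the identical request fails in North Europe,
# which only has a 10-vCPU quota.
print(over_quota_regions({"westeurope": 30, "northeurope": 30}, quotas))
```

The takeaway matches the text: request the quota you need in every region you deploy to, not once globally.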
General limits
For limits on resource names, see Naming rules and restrictions for Azure resources.
For information about Resource Manager API read and write limits, see Throttling Resource Manager requests.
Management group limits
The following limits apply to management groups.
1 You can apply up to 50 tags directly to a subscription. However, the subscription can contain an unlimited number
of tags that are applied to resource groups and resources within the subscription. The number of tags per resource
or resource group is limited to 50. Resource Manager returns a list of unique tag names and values in the
subscription only when the number of tags is 10,000 or less. You still can find a resource by tag when the number
exceeds 10,000.
2 If you reach the limit of 800 deployments, delete deployments from the history that are no longer needed. To
delete subscription level deployments, use Remove-AzDeployment or az deployment sub delete.
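Since the 50-tag ceiling applies per resource or resource group, it is worth validating tag sets before a deployment rather than letting Resource Manager reject them. A minimal pre-flight check:

```python
MAX_TAGS_PER_RESOURCE = 50  # per the tag limit described above

def validate_tags(tags: dict) -> dict:
    """Raise if a tag set would exceed the per-resource tag limit."""
    if len(tags) > MAX_TAGS_PER_RESOURCE:
        raise ValueError(
            f"{len(tags)} tags exceeds the {MAX_TAGS_PER_RESOURCE}-tag limit")
    return tags

# A small tag set passes unchanged.
print(validate_tags({"env": "prod", "owner": "platform-team"}))
```

The same guard fits naturally in a deployment pipeline step, before the template or CLI call is issued.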
Resource group limits
RESOURCE LIMIT
Resources per resource group: Resources aren't limited by resource group. Instead, they're limited by resource type in a resource group. See next row.
Resources per resource group, per resource type: 800. Some resource types can exceed the 800 limit. See Resources not limited to 800 instances per resource group.
Template limits
VALUE LIMIT
Parameters: 256
Variables: 256
Outputs: 64
Template size: 4 MB
You can exceed some template limits by using a nested template. For more information, see Use linked templates
when you deploy Azure resources. To reduce the number of parameters, variables, or outputs, you can combine
several values into an object. For more information, see Objects as parameters.
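Combining several related values into one object parameter is a common way to stay under the 256-parameter limit. The fragment below sketches the idea by building a template parameter payload in Python; the property names are made up for illustration:

```python
import json

# Instead of three separate template parameters ("vmName", "vmSize",
# "adminUser"), pass one object parameter that carries all three values.
vm_settings = {
    "name": "app-vm-1",
    "size": "Standard_D2s_v3",
    "adminUser": "azureadmin",
}

# Shape of a Resource Manager parameter payload: each parameter wraps
# its content in a "value" property.
template_parameters = {"vmSettings": {"value": vm_settings}}
print(json.dumps(template_parameters, indent=2))
```

Inside the template, the individual fields are then read off the object (for example, parameters('vmSettings').size), so one parameter slot replaces three.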
CATEGORY LIMIT
Domains: You can add no more than 900 managed domain names. If you set up all of your domains for federation with on-premises Active Directory, you can add no more than 450 domain names in each directory.
1 Scaling limits depend on the pricing tier. To see the pricing tiers and their scaling limits, see API Management
pricing.
2 Per unit cache size depends on the pricing tier. To see the pricing tiers and their scaling limits, see API
Management pricing.
3 Connections are pooled and reused unless explicitly closed by the back end.
4 This limit is per unit of the Basic, Standard, and Premium tiers. The Developer tier is limited to 1,024. This limit
is limited to 4 KiB.
6 This resource is available in the Premium tier only.
7 This resource applies to the Consumption tier only.
8 Applies to the Consumption tier only. Includes an up to 2,048 bytes long query string.
RESOURCE FREE SHARED BASIC STANDARD PREMIUM (V2) ISOLATED
App Service plans: 10 per region (Free); 10 per resource group (Shared); 100 per resource group (Basic, Standard, Premium (V2), Isolated)
CPU time (5 minutes) 6: 3 minutes (Free); 3 minutes (Shared); Unlimited, pay at standard rates (Basic, Standard, Premium (V2), Isolated)
CPU time (day) 6: 60 minutes (Free); 240 minutes (Shared); Unlimited, pay at standard rates (Basic, Standard, Premium (V2), Isolated)
Concurrent debugger connections per application: 1 (Free); 1 (Shared); 1 (Basic); 5 (Standard, Premium (V2), Isolated)
Custom domain SSL support: Not supported, wildcard certificate for *.azurewebsites.net available by default (Free, Shared); Unlimited SNI SSL connections (Basic); Unlimited SNI SSL and 1 IP SSL connections included (Standard, Premium (V2), Isolated)
Integrated load balancer: Shared, Basic, Standard, Premium (V2), Isolated 10
Always On: Basic, Standard, Premium (V2), Isolated
Autoscale: Standard, Premium (V2), Isolated
WebJobs 11: Free, Shared, Basic, Standard, Premium (V2), Isolated
Endpoint monitoring: Basic, Standard, Premium (V2), Isolated
Staging slots per app: 5 (Standard); 20 (Premium (V2)); 20 (Isolated)
1 Apps and storage quotas are per App Service plan unless noted otherwise.
2 The actual number of apps that you can host on these machines depends on the activity of the apps, the size of the
machine instances, and the corresponding resource utilization.
3 Dedicated instances can be of different sizes. For more information, see App Service pricing.
4 More are allowed upon request.
5 The storage limit is the total content size across all apps in the same App Service plan. The total content size of all
apps across all App Service plans in a single resource group and region cannot exceed 500 GB.
6 These resources are constrained by physical resources on the dedicated instances (the instance size and the
number of instances).
7 If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two
instances. For Standard tier and above, there are no theoretical limits to web sockets, but other factors can limit the
number of web sockets. For example, maximum concurrent requests allowed (defined by
maxConcurrentRequestsPerCpu) are: 7,500 per small VM, 15,000 per medium VM (7,500 x 2 cores), and 75,000 per
large VM (18,750 x 4 cores).
8 The maximum IP connections are per instance and depend on the instance size: 1,920 per B1/S1/P1V2 instance,
limit of 200.
10 App Service Isolated SKUs can be internally load balanced (ILB) with Azure Load Balancer, so there's no public
connectivity from the internet. As a result, some features of an ILB Isolated App Service must be used from
machines that have direct access to the ILB network endpoint.
11 Run custom executables and/or scripts on demand, on a schedule, or continuously as a background task within
your App Service instance. Always On is required for continuous WebJobs execution. There's no predefined limit
on the number of WebJobs that can run in an App Service instance. There are practical limits that depend on what
the application code is trying to do.
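The per-VM request ceilings in footnote 7 are just the per-core value multiplied by the core count, which is easy to verify: 7,500 x 2 cores = 15,000 for a medium VM, and 18,750 x 4 cores = 75,000 for a large VM.

```python
def max_concurrent_requests(per_cpu: int, cores: int) -> int:
    """Per-VM request ceiling: maxConcurrentRequestsPerCpu times core count."""
    return per_cpu * cores

print(max_concurrent_requests(7500, 1))   # small VM
print(max_concurrent_requests(7500, 2))   # medium VM
print(max_concurrent_requests(18750, 4))  # large VM (per the table's figures)
```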
Automation limits
Process automation
RESOURCE LIMIT NOTES
Maximum number of new jobs that can be submitted every 30 seconds per Azure Automation account (nonscheduled jobs): 100. When this limit is reached, subsequent requests to create a job fail. The client receives an error response.
Maximum storage size of job metadata for a 30-day rolling period: 10 GB (approximately 4 million jobs). When this limit is reached, subsequent requests to create a job fail.
Maximum job stream limit: 1 MB. A single stream cannot be larger than 1 MB.
Job run time, Free tier: 500 minutes per subscription per calendar month.
1 A sandbox is a shared environment that can be used by multiple jobs. Jobs that use the same sandbox are bound
by the resource limitations of the sandbox.
Change Tracking and Inventory
The following table shows the tracked item limits per machine for change tracking.
File: 500
Registry: 250
Services: 250
Daemon: 250
Update Management
The following table shows the limits for Update Management.
Databases 64
Azure Cache for Redis limits and sizes are different for each pricing tier. To see the pricing tiers and their associated
sizes, see Azure Cache for Redis pricing.
For more information on Azure Cache for Redis configuration limits, see Default Redis server configuration.
Because configuration and management of Azure Cache for Redis instances is done by Microsoft, not all Redis
commands are supported in Azure Cache for Redis. For more information, see Redis commands not supported in
Azure Cache for Redis.
1 Each Azure Cloud Service with web or worker roles can have two deployments, one for production and one for
staging. This limit refers to the number of distinct roles, that is, configuration. This limit doesn't refer to the number
of instances per role, that is, scaling.
RESOURCE FREE BASIC S1 S2 S3 S3 HD L1 L2
Maximum services: 1 (Free); 16 (Basic); 16 (S1); 8 (S2); 6 (S3); 6 (S3 HD); 6 (L1); 6 (L2)
Maximum scale in search units (SU) 2: N/A (Free); 3 SU (Basic); 36 SU (S1, S2, S3, S3 HD, L1, L2)
1 Free is based on shared, not dedicated, resources. Scale-up is not supported on shared resources.
Partitions per service: N/A (Free); 1 (Basic); 12 (S1, S2, S3); 3 (S3 HD); 12 (L1, L2)
Replicas: N/A (Free); 3 (Basic); 12 (S1, S2, S3, S3 HD, L1, L2)
1 Basic has one fixed partition. At this tier, additional search units are used for allocating more replicas for increased
query workloads.
2 S3 HD has a hard limit of three partitions, which is lower than the partition limit for S3. The lower partition limit
is imposed because the index count for S3 HD is substantially higher. Given that service limits exist for both
computing resources (storage and processing) and content (indexes and documents), the content limit is reached
first.
3 Service level agreements are offered for billable services on dedicated resources. Free services and preview
features have no SLA. For billable services, SLAs take effect when you provision sufficient redundancy for your
service. Two or more replicas are required for query (read) SLAs. Three or more replicas are required for query
and indexing (read-write) SLAs. The number of partitions isn't an SLA consideration.
To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and
responses, see Service limits in Azure Cognitive Search.
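The replica rule in footnote 3 is easy to encode: two or more replicas earn a query (read) SLA, three or more earn a query-and-indexing (read-write) SLA, and a single replica earns none. A small helper capturing that rule:

```python
def sla_coverage(replicas: int) -> str:
    """Which SLA applies for a billable Azure Cognitive Search service."""
    if replicas >= 3:
        return "read-write"  # query and indexing SLA
    if replicas >= 2:
        return "read"        # query SLA only
    return "none"            # single replica: no SLA

print(sla_coverage(1), sla_coverage(2), sla_coverage(3))
```

Partition count, by contrast, never enters this calculation; it affects capacity, not SLA eligibility.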
A mixture of Cognitive Services resources: Maximum of 200 total Cognitive Services resources. Example: 100 Computer Vision resources in West US 2, 50 Speech Service resources in West US, and 50 Text Analytics resources in East US.
A single type of Cognitive Services resources: Maximum of 100 resources per region, with a maximum of 200 total Cognitive Services resources. Example: 100 Computer Vision resources in West US 2, and 100 Computer Vision resources in East US.
The following table describes the limits on management operations performed on Azure Data Explorer clusters.
App Service plans: 100 per region (Consumption plan); 100 per resource group (Premium plan); 100 per resource group (Dedicated (App Service) plan)
Custom domain SSL support: unbounded SNI SSL connections included (Consumption plan); unbounded SNI SSL and 1 IP SSL connections included (Premium plan); unbounded SNI SSL and 1 IP SSL connections included (Dedicated (App Service) plan)
1 For specific limits for the various App Service plan options, see the App Service plan limits.
2 By default, the timeout for the Functions 1.x runtime in an App Service plan is unbounded.
3 Requires the App Service plan be set to Always On. Pay at standard rates.
4 These limits are set in the host.
5 The actual number of function apps that you can host depends on the activity of the apps, the size of the machine
instances, and the corresponding resource utilization.
7 For function apps in a Premium plan or an App Service plan, you can map a custom domain using either a CNAME or an A
record.
8 Guaranteed for up to 60 minutes.
Maximum nodes per cluster with Virtual Machine Scale Sets and Standard Load Balancer SKU: 1,000 (100 nodes per node pool)
Maximum pods per node, advanced networking with Azure Container Networking Interface: Azure CLI deployment: 30 1; Azure Resource Manager template: 30 1; Portal deployment: 30
1 When you deploy an Azure Kubernetes Service (AKS) cluster with the Azure CLI or a Resource Manager template,
this value is configurable up to 250 pods per node. You can't configure maximum pods per node after you've
already deployed an AKS cluster, or if you deploy a cluster by using the Azure portal.
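Cluster pod capacity follows directly from these two limits: nodes per cluster times pods per node. A quick sketch, using the 30-pod default and the 250-pod configurable maximum from the table above:

```python
def cluster_pod_capacity(nodes: int, pods_per_node: int = 30) -> int:
    """Upper bound on schedulable pods for an AKS cluster."""
    return nodes * pods_per_node

# Default CNI limit of 30 pods/node vs. the 250 configurable at deployment
# time via the Azure CLI or a Resource Manager template.
print(cluster_pod_capacity(100))        # 100-node cluster, default limit
print(cluster_pod_capacity(100, 250))   # same cluster, raised limit
```

Because the pods-per-node value can't be changed after deployment, this arithmetic is worth doing before the cluster is created.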
The following table shows the data size limit for Azure Maps. The Azure Maps data service is available only at the
S1 pricing tier.
RESOURCE LIMIT
For more information on the Azure Maps pricing tiers, see Azure Maps pricing.
RESOURCE DEFAULT LIMIT MAXIMUM LIMIT
Metric alerts (classic): 100 active alert rules per subscription. Maximum: call support.
Metric alerts: 2,000 active alert rules per subscription in Azure public, Azure China 21Vianet, and Azure Government clouds. Maximum: call support.
Activity log alerts: 100 active alert rules per subscription. Maximum: same as default.
Action groups
Azure app push: 10 Azure app actions per action group. Maximum: call support.
LIMIT DESCRIPTION
Query language: Azure Monitor uses the same Kusto query language as Azure Data Explorer. See Azure Monitor log query language differences for KQL language elements not supported in Azure Monitor.
Azure regions: Log queries can experience excessive overhead when data spans Log Analytics workspaces in multiple Azure regions. See Query limits for details.
Cross resource queries: Maximum number of Application Insights resources and Log Analytics workspaces in a single query limited to 100. Cross-resource query is not supported in View Designer. Cross-resource query in log alerts is supported in the new scheduledQueryRules API. See Cross-resource query limits for details.
Current Per GB pricing tier (introduced April 2018): no limit per day; data retention 30 to 730 days. Data retention beyond 31 days is available for additional charges. Learn more about Azure Monitor pricing.
Legacy Per Node (OMS) pricing tier (introduced April 2016): no limit per day; data retention 30 to 730 days. Data retention beyond 31 days is available for additional charges. Learn more about Azure Monitor pricing.
Azure portal

| LIMIT | VALUE | COMMENTS |
| --- | --- | --- |
| Maximum records returned by a log query | 10,000 | Reduce results using query scope, time range, and filters in the query. |

Data Collector API

| LIMIT | VALUE | COMMENTS |
| --- | --- | --- |
| Maximum size for a single post | 30 MB | Split larger volumes into multiple posts. |
| Maximum size for field values | 32 KB | Fields longer than 32 KB are truncated. |

Search API

| LIMIT | VALUE | COMMENTS |
| --- | --- | --- |
| Maximum request rate | 200 requests per 30 seconds per AAD user or client IP address | See Rate limits for details. |
| Data export | Not currently available | Use Azure Function or Logic App to aggregate and export data. |
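Given the 30-MB cap on a single Data Collector API post, larger record sets can be batched client-side before sending. A minimal sketch in Python; the function name and the size heuristic are ours, not part of the API:

```python
import json

MAX_POST_BYTES = 30 * 1024 * 1024  # Data Collector API limit per post

def batch_records(records, max_bytes=MAX_POST_BYTES):
    """Yield lists of records whose JSON-serialized size stays under max_bytes."""
    batch, size = [], 2  # 2 bytes for the surrounding "[]"
    for record in records:
        item = len(json.dumps(record)) + 1  # +1 for the separating comma
        if batch and size + item > max_bytes:
            yield batch
            batch, size = [], 2
        batch.append(record)
        size += item
    if batch:
        yield batch
```

Each yielded batch can then be posted separately; the 32-KB per-field truncation still applies to individual values inside the records.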
```
Operation
| where OperationCategory == "Ingestion"
| where Detail startswith "The rate of data crossed the threshold"
```
NOTE
Depending on how long you've been using Log Analytics, you might have access to legacy pricing tiers. Learn more about
Log Analytics legacy pricing tiers.
Application Insights
There are some limits on the number of metrics and events per application, that is, per instrumentation key. Limits
depend on the pricing plan that you choose.
| RESOURCE | DEFAULT LIMIT | NOTE |
| --- | --- | --- |
| Total data per day | 100 GB | You can reduce data by setting a cap. If you need more data, you can increase the limit in the portal, up to 1,000 GB. For capacities greater than 1,000 GB, send email to [email protected]. |
| Availability multi-step test detailed results retention | 90 days | This resource provides detailed results of each step. |
Backup limits
For a summary of Azure Backup support settings and limitations, see Azure Backup Support Matrices.
Batch limits
RESOURCE DEFAULT LIMIT MAXIMUM LIMIT
NOTE
Default limits vary depending on the type of subscription you use to create a Batch account. Core quotas shown are for Batch accounts in Batch service mode. View the quotas in your Batch account.
1 Extra small instances count as one vCPU toward the vCPU limit despite using a partial CPU core.
2 The storage account limit includes both Standard and Premium storage accounts.
| RESOURCE | LIMIT |
| --- | --- |
| Ports per IP | 5 |

| RESOURCE | BASIC | STANDARD | PREMIUM |
| --- | --- | --- | --- |
| Maximum image layer size | 200 GiB | 200 GiB | 200 GiB |
| Webhooks | 2 | 10 | 500 |
1 The specified storage limits are the amount of included storage for each tier. You're charged an additional daily rate per GiB for image storage above these limits. For rate information, see Azure Container Registry pricing.
2 ReadOps, WriteOps, and Bandwidth are minimum estimates. Azure Container Registry strives to improve performance as usage requires.
3 A docker pull translates to multiple read operations based on the number of layers in the image, plus the manifest retrieval.
4 A docker push translates to multiple write operations, based on the number of layers that must be pushed. A docker push includes ReadOps to retrieve a manifest for an existing image.
A Content Delivery Network subscription can contain one or more Content Delivery Network profiles. A Content
Delivery Network profile can contain one or more Content Delivery Network endpoints. You might want to use
multiple profiles to organize your Content Delivery Network endpoints by internet domain, web application, or
some other criteria.
Data Factory limits
Azure Data Factory is a multitenant service that has the following default limits in place to make sure customer
subscriptions are protected from each other's workloads. To raise the limits up to the maximum for your
subscription, contact support.
Version 2
| RESOURCE | DEFAULT LIMIT | MAXIMUM LIMIT |
| --- | --- | --- |
| ForEach parallelism | 20 | 50 |

1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation; learn more from Data integration units (version 2). For information on billing, see Azure Data Factory pricing.
2 Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network egress costs.
| REGION GROUP | REGIONS |
| --- | --- |
| Region group 1 | Central US, East US, East US 2, North Europe, West Europe, West US, West US 2 |
| Region group 2 | Australia East, Australia Southeast, Brazil South, Central India, Japan East, North Central US, South Central US, Southeast Asia, West Central US |
| Region group 3 | Canada Central, East Asia, France Central, Korea Central, UK South |
3 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
Version 1
| RESOURCE | DEFAULT LIMIT | MAXIMUM LIMIT |
| --- | --- | --- |
| Bytes per object for data set and linked service objects1 | 100 KB | 2,000 KB |
| Retry count for pipeline activity runs | 1,000 | MaxInt (32 bit) |
1 Pipeline, data set, and linked service objects represent a logical grouping of your workload. Limits for these objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is designed to scale to handle petabytes of data.
2 On-demand HDInsight cores are allocated out of the subscription that contains the data factory. As a result, the previous limit is the Data Factory-enforced core limit for on-demand HDInsight cores. It's different from the core limit that's associated with your Azure subscription.
3 The cloud data movement unit (DMU) for version 1 is used in a cloud-to-cloud copy operation; learn more from Cloud data movement units (version 1). For information on billing, see Azure Data Factory pricing.
| RESOURCE | LIMIT | COMMENTS |
| --- | --- | --- |
| Maximum number of Data Lake Storage Gen1 accounts, per subscription, per region | 10 | To request an increase for this limit, contact support. |
| Maximum number of access ACLs, per file or folder | 32 | This is a hard limit. Use groups to manage access with fewer entries. |
| Maximum number of default ACLs, per file or folder | 32 | This is a hard limit. Use groups to manage access with fewer entries. |
| RESOURCE | LIMIT |
| --- | --- |
| Publish rate for a custom topic (ingress) | 5,000 events per second per topic |
| Publish rate for an event domain (ingress) | 5,000 events per second |
| FEATURE | LIMITS |
| --- | --- |
| Bandwidth | 20 CUs |
| Namespaces | 50 per CU |
| Message Size | 1 MB |
| Capture | Included |
Identity Manager limits
NOTE
If you anticipate using more than 200 units with an S1 or S2 tier hub or 10 units with an S3 tier hub, contact Microsoft
Support.
The following table lists the limits that apply to IoT Hub resources.
| RESOURCE | LIMIT |
| --- | --- |
| Maximum size of device-to-cloud batch | AMQP and HTTP: 256 KB for the entire batch; MQTT: 256 KB for each message |
| Maximum size of device twin | 8 KB for tags section, and 32 KB each for desired and reported properties sections |
| Maximum message routing rules | 100 (for S1, S2, and S3) |
| Maximum number of concurrently connected device streams | 50 (for S1, S2, S3, and F1 only) |
| Maximum device stream data transfer | 300 MB per day (for S1, S2, S3, and F1 only) |
NOTE
If you need more than 100 paid IoT hubs in an Azure subscription, contact Microsoft Support.
NOTE
Currently, the total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. If you
want to increase this limit, contact Microsoft Support.
IoT Hub throttles requests when the following quotas are exceeded.
| THROTTLE | PER-HUB VALUE |
| --- | --- |
| Device connections | 6,000/sec/unit (for S3), 120/sec/unit (for S2), 12/sec/unit (for S1). Minimum of 100/sec. |
| Device-to-cloud sends | 6,000/sec/unit (for S3), 120/sec/unit (for S2), 12/sec/unit (for S1). Minimum of 100/sec. |
| File upload operations | 83.33 file upload initiations/sec/unit (5,000/min/unit) (for S3), 1.67 file upload initiations/sec/unit (100/min/unit) (for S1 and S2). 10,000 SAS URIs can be out for an Azure Storage account at one time. 10 SAS URIs/device can be out at one time. |
| Direct methods | 24 MB/sec/unit (for S3), 480 KB/sec/unit (for S2), 160 KB/sec/unit (for S1). Based on 8-KB throttling meter size. |
| Device twin reads | 500/sec/unit (for S3), maximum of 100/sec or 10/sec/unit (for S2), 100/sec (for S1) |
| Device twin updates | 250/sec/unit (for S3), maximum of 50/sec or 5/sec/unit (for S2), 50/sec (for S1) |
| Jobs per-device operation throughput | 50/sec/unit (for S3), maximum of 10/sec or 1/sec/unit (for S2), 10/sec (for S1). |
| Device stream initiation rate | 5 new streams/sec (for S1, S2, S3, and F1 only). |
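The per-unit throttle rows above share one shape: the rate scales with the number of purchased units but never drops below 100/sec. A small sketch of that arithmetic for the device-connection throttle (constant and function names are ours, not from any SDK):

```python
# Device connections per second per unit, by tier (from the table above)
RATES = {"S1": 12, "S2": 120, "S3": 6000}

def connection_throttle(tier: str, units: int) -> int:
    """Operations/sec IoT Hub allows for device connections before
    throttling: per-unit rate times units, with a floor of 100/sec."""
    return max(100, RATES[tier] * units)
```

For example, a single S1 unit is throttled at the 100/sec floor rather than at 12/sec.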
NOTE
To increase the number of enrollments and registrations on your provisioning service, contact Microsoft Support.
NOTE
Increasing the maximum number of CAs is not supported.
The Device Provisioning Service throttles requests when the following quotas are exceeded.
| OPERATION | LIMIT |
| --- | --- |
| Operations | 200/min/service |
NOTE
In the previous table, we see that for RSA 2,048-bit software keys, 2,000 GET transactions per 10 seconds are allowed. For
RSA 2,048-bit HSM-keys, 1,000 GET transactions per 10 seconds are allowed.
The throttling thresholds are weighted, and enforcement is on their sum. For example, as shown in the previous table, when
you perform GET operations on RSA HSM-keys, it's eight times more expensive to use 4,096-bit keys compared to 2,048-bit
keys. That's because 1,000/125 = 8.
In a given 10-second interval, an Azure Key Vault client can do only one of the following operations before it encounters a
429 throttling HTTP status code:
For information on how to handle throttling when these limits are exceeded, see Azure Key Vault throttling
guidance.
1 A subscription-wide limit for all transaction types is five times the per-key-vault limit. For example, HSM-other transactions per subscription are limited to 5,000 transactions in 10 seconds per subscription.
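When a client does hit these thresholds and receives HTTP 429, the standard remedy is jittered exponential backoff before retrying. A generic sketch of that pattern, not the Key Vault SDK's built-in retry policy; the callable and its response shape are illustrative:

```python
import random
import time

def call_with_backoff(op, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry op() with exponential backoff while it signals HTTP 429.

    op should return an object with a status_code attribute (for example,
    a requests Response). sleep is injectable so tests can stub it out.
    """
    for attempt in range(max_retries):
        resp = op()
        if resp.status_code != 429:
            return resp
        # jittered exponential backoff: ~1s, 2s, 4s, ... plus up to 1s jitter
        sleep(base_delay * (2 ** attempt) + random.random())
    return resp
```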
| RESOURCE | LIMIT |
| --- | --- |
| Policies | 1,000,0006 |
| File size | In some scenarios, there's a limit on the maximum file size supported for processing in Media Services.7 |
1 If you change the type, for example, from S2 to S1, the maximum reserved unit limits are reset.
2 This number includes queued, finished, active, and canceled jobs. It doesn't include deleted jobs. You can delete old jobs by using IJob.Delete or the DELETE HTTP request.
As of April 1, 2017, any job record in your account older than 90 days is automatically deleted, along with its associated task records. Automatic deletion occurs even if the total number of records is below the maximum quota. To archive the job and task information, use the code described in Manage assets with the Media Services .NET SDK.
3 When you make a request to list job entities, a maximum of 1,000 jobs is returned per request. To keep track of all submitted jobs, use the top or skip queries as described in OData system query options.
4 Locators aren't designed for managing per-user access control. To give different access rights to individual users, use digital rights management (DRM) solutions. For more information, see Protect your content with Azure Media Services.
5 The storage accounts must be from the same Azure subscription.
6 There's a limit of 1,000,000 policies for different Media Services policies. An example is the Locator policy or ContentKeyAuthorizationPolicy.
NOTE
If you always use the same days and access permissions, use the same policy ID. For information and an example, see
Manage assets with the Media Services .NET SDK.
7 The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your job will likely fail.
The following table shows the limits on the media reserved units S1, S2, and S3. If your source file is larger than
the limits defined in the table, your encoding job fails. If you encode 4K resolution sources of long duration, you're
required to use S3 media reserved units to achieve the performance needed. If you have 4K content that's larger
than the 260-GB limit on the S3 media reserved units, open a support ticket.
| MEDIA RESERVED UNIT TYPE | MAXIMUM INPUT SIZE (GB) |
| --- | --- |
| S1 | 26 |
| S2 | 60 |
| S3 | 260 |
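Those input-size ceilings can be encoded in a small pre-flight check before submitting an encoding job; the helper below is illustrative only, not part of any Media Services SDK:

```python
# Maximum input size in GB per media reserved unit type, from the table above
MAX_INPUT_GB = {"S1": 26, "S2": 60, "S3": 260}

def pick_reserved_unit(source_gb: float):
    """Return the smallest media reserved unit type that accepts the source
    file, or None if it exceeds even the S3 limit (open a support ticket)."""
    for unit in ("S1", "S2", "S3"):
        if source_gb <= MAX_INPUT_GB[unit]:
            return unit
    return None
```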
| RESOURCE | FREE | BASIC | STANDARD |
| --- | --- | --- | --- |
| API calls | 500,000 | 1.5 million per unit | 15 million per unit |
| Push notifications | Azure Notification Hubs Free tier included, up to 1 million pushes | Notification Hubs Basic tier included, up to 10 million pushes | Notification Hubs Standard tier included, up to 10 million pushes |
For more information on limits and pricing, see Azure Mobile Services pricing.
Networking limits
Networking limits - Azure Resource Manager
The following limits apply only for networking resources managed through Azure Resource Manager per region
per subscription. Learn how to view your current resource usage against your subscription limits.
NOTE
We recently increased all default limits to their maximum limits. If there's no maximum limit column, the resource doesn't have
adjustable limits. If you had these limits increased by support in the past and don't see updated limits in the following tables,
open an online customer support request at no charge.
The following limits apply only for networking resources managed through the classic deployment model per
subscription. Learn how to view your current resource usage against your subscription limits.
| RESOURCE | DEFAULT LIMIT | MAXIMUM LIMIT |
| --- | --- | --- |
| Concurrent TCP or UDP flows per NIC of a virtual machine or role instance | 500,000, up to 1,000,000 for two or more NICs. | 500,000, up to 1,000,000 for two or more NICs. |
ExpressRoute limits
| RESOURCE | LIMIT |
| --- | --- |
| Number of virtual network links allowed per ExpressRoute circuit | See the Number of virtual networks per ExpressRoute circuit table. |
| CIRCUIT SIZE | NUMBER OF VIRTUAL NETWORK LINKS FOR DEFAULT | NUMBER OF VIRTUAL NETWORK LINKS WITH PREMIUM ADD-ON |
| --- | --- | --- |
| 50 Mbps | 10 | 20 |
| 100 Mbps | 10 | 25 |
| 200 Mbps | 10 | 25 |
| 500 Mbps | 10 | 40 |
| 1 Gbps | 10 | 50 |
| 2 Gbps | 10 | 60 |
| 5 Gbps | 10 | 75 |
| 10 Gbps | 10 | 100 |
| 40 Gbps* | 10 | 100 |
NOTE
Global Reach connections count against the limit of virtual network connections per ExpressRoute Circuit. For example, a 10
Gbps Premium Circuit would allow for 5 Global Reach connections and 95 connections to the ExpressRoute Gateways or 95
Global Reach connections and 5 connections to the ExpressRoute Gateways or any other combination up to the limit of 100
connections for the circuit.
Throughput per Virtual WAN VPN connection (2 tunnels) 2 Gbps with 1 Gbps/IPsec tunnel
1 For WAF-enabled SKUs, we recommend that you limit the number of resources to 40 for optimal performance.
Network Watcher limits
| RESOURCE | LIMIT | NOTE |
| --- | --- | --- |
| Packet capture sessions | 10,000 per region | Number of sessions only, not saved captures. |
| RESOURCE | LIMIT |
| --- | --- |
| Number of IP configurations on a private link service | 8 (this number is for the NAT IP addresses used per PLS) |
*May vary due to other on-going RDP sessions or other on-going SSH sessions.
**May vary if there are existing RDP connections or usage from other on-going SSH sessions.
Azure DNS limits
Public DNS zones
| RESOURCE | LIMIT |
| --- | --- |
| Virtual network links per private DNS zone with auto-registration enabled | 100 |
| Number of private DNS zones a virtual network can get linked to | 1,000 |
2 These limits are applied to every individual virtual machine and not at the virtual network level. DNS queries exceeding these limits are dropped.
Azure Firewall limits
| RESOURCE | LIMIT |
| --- | --- |
| Port range in network and application rules | 0-64,000. Work is in progress to relax this limitation. |
| Public IP addresses | 100 maximum (currently, SNAT ports are added only for the first five public IP addresses). |
Timeout values
Client to Front Door
Front Door has an idle TCP connection timeout of 61 seconds.
Front Door to application back-end
After the HTTP request is forwarded to the back end, Front Door waits up to 30 seconds for the first packet from the back end; if none arrives, it returns a 503 error to the client. This value is configurable via the field sendRecvTimeoutSeconds in the API.
For caching scenarios, this timeout is not configurable, so if a request is cached and it takes more than 30 seconds for the first packet to arrive from Front Door or from the back end, a 504 error is returned to the client.
After the first packet is received from the back end, Front Door waits up to 30 seconds in an idle timeout; if no further data arrives, it returns a 503 error to the client. This timeout value is not configurable.
If the response is a chunked response, a 200 is returned if or when the first chunk is received.
The Front Door to back-end TCP session timeout is 90 seconds.
Upload and download data limit
| | WITH CHUNKED TRANSFER ENCODING (CTE) | WITHOUT HTTP CHUNKING |
| --- | --- | --- |
| Download | There's no limit on the download size. | There's no limit on the download size. |
| Upload | There's no limit as long as each CTE upload is less than 2 GB. | The size can't be larger than 2 GB. |
Other limits
Maximum URL size - 8,192 bytes - Specifies the maximum length of the raw URL (scheme + hostname + port + path + query string of the URL).
Maximum query string size - 4,096 bytes - Specifies the maximum length of the query string, in bytes.
Maximum HTTP response header size from health probe URL - 4,096 bytes - Specifies the maximum length of all the response headers of health probes.
For more information on limits and pricing, see Notification Hubs pricing.
| QUOTA NAME | SCOPE | NOTES | VALUE |
| --- | --- | --- | --- |
| Number of topics or queues per namespace | Namespace | Subsequent requests for creation of a new topic or queue on the namespace are rejected. As a result, if configured through the Azure portal, an error message is generated. If called from the management API, an exception is received by the calling code. | 10,000 for the Basic or Standard tier. The total number of topics and queues in a namespace must be less than or equal to 10,000. For the Premium tier, 1,000 per messaging unit (MU). Maximum limit is 4,000. |
| Number of partitioned topics or queues per namespace | Namespace | Subsequent requests for creation of a new partitioned topic or queue on the namespace are rejected. As a result, if configured through the Azure portal, an error message is generated. If called from the management API, the exception QuotaExceededException is received by the calling code. | Basic and Standard tiers: 100. Partitioned entities aren't supported in the Premium tier. Each partitioned queue or topic counts toward the quota of 1,000 entities per namespace. |
| Message size for a queue, topic, or subscription entity | Entity | Incoming messages that exceed these quotas are rejected, and an exception is received by the calling code. | Maximum message size: 256 KB for Standard tier, 1 MB for Premium tier. Due to system overhead, this limit is less than these values. |
| Message property size for a queue, topic, or subscription entity | Entity | The exception SerializationException is generated. | Maximum message property size for each property is 32,000. Cumulative size of all properties can't exceed 64,000. This limit applies to the entire header of the BrokeredMessage, which has both user properties and system properties, such as SequenceNumber, Label, and MessageId. |
| Number of subscriptions per topic | Entity | Subsequent requests for creating additional subscriptions for the topic are rejected. As a result, if configured through the portal, an error message is shown. If called from the management API, an exception is received by the calling code. | 2,000 per topic for the Standard tier. |
| Size of SQL filters or actions | Namespace | Subsequent requests for creation of additional filters are rejected, and an exception is received by the calling code. | Maximum length of filter condition string: 1,024 (1 K). Maximum length of rule action string: 1,024 (1 K). Maximum number of expressions per rule action: 32. |
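The message-size quota above (256 KB on the Standard tier, 1 MB on Premium, with header and property overhead counted against the limit) lends itself to a pre-flight check before sending. A sketch; the accounting here is only an approximation of what the broker enforces server-side:

```python
# Nominal per-message quotas by tier; the broker's effective limit is
# slightly lower because system overhead counts against it.
MAX_MESSAGE_BYTES = {"standard": 256 * 1024, "premium": 1024 * 1024}

def fits_tier(payload: bytes, tier: str, header_bytes: int = 0) -> bool:
    """True if payload plus an estimate of the serialized header stays
    within the tier's quota. header_bytes should cover user and system
    properties, which also count toward the limit."""
    return len(payload) + header_bytes <= MAX_MESSAGE_BYTES[tier]
```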
Storage limits
The following table describes default limits for Azure general-purpose v1, v2, Blob storage, block blob storage, and
Data Lake Storage Gen2 enabled storage accounts. The ingress limit refers to all data that is sent to a storage
account. The egress limit refers to all data that is received from a storage account.
| RESOURCE | LIMIT |
| --- | --- |
| Maximum request rate1 per storage account | 20,000 requests per second |
| Maximum ingress1 per storage account (regions other than US and Europe) | 5 Gbps if RA-GRS/GRS is enabled, 10 Gbps for LRS/ZRS2 |
| Maximum egress for general-purpose v1 storage accounts (US regions) | 20 Gbps if RA-GRS/GRS is enabled, 30 Gbps for LRS/ZRS2 |
| Maximum egress for general-purpose v1 storage accounts (non-US regions) | 10 Gbps if RA-GRS/GRS is enabled, 15 Gbps for LRS/ZRS2 |
1 Azure Storage standard accounts support higher capacity limits and higher limits for ingress by request. To
request an increase in account limits, contact Azure Support.
2 If your storage account has read-access enabled with geo-redundant storage (RA-GRS) or geo-zone-redundant storage (RA-GZRS), then the egress targets for the secondary location are identical to those of the primary location. Azure Storage replication options include:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS)
Read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS)
Read-access geo-zone-redundant storage (RA-GZRS)
3 Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob storage.
Azure Storage and blob storage limitations apply to Data Lake Storage Gen2.
NOTE
Microsoft recommends that you use a general-purpose v2 storage account for most scenarios. You can easily upgrade a
general-purpose v1 or an Azure Blob storage account to a general-purpose v2 account with no downtime and without the
need to copy data. For more information, see Upgrade to a general-purpose v2 storage account.
If the needs of your application exceed the scalability targets of a single storage account, you can build your
application to use multiple storage accounts. You can then partition your data objects across those storage
accounts. For information on volume pricing, see Azure Storage pricing.
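One common way to partition objects across several accounts is stable hashing of the object name, so each blob consistently maps to the same account and load spreads roughly evenly. A sketch with hypothetical account names:

```python
import hashlib

# Hypothetical storage account names; substitute your own.
ACCOUNTS = ["appdata0", "appdata1", "appdata2"]

def account_for(blob_name: str) -> str:
    """Pick a storage account for a blob via a stable hash of its name,
    so the same blob always maps to the same account."""
    digest = hashlib.sha256(blob_name.encode()).digest()
    return ACCOUNTS[int.from_bytes(digest[:4], "big") % len(ACCOUNTS)]
```

Note that adding or removing accounts remaps most names; consistent hashing is the usual refinement if the account set changes over time.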
All storage accounts run on a flat network topology regardless of when they were created. For more information
on the Azure Storage flat network architecture and on scalability, see Microsoft Azure Storage: A Highly Available
Cloud Storage Service with Strong Consistency.
For more information on limits for standard storage accounts, see Scalability targets for standard storage accounts.
Storage resource provider limits
The following limits apply only when you perform management operations by using Azure Resource Manager with
Azure Storage.
| RESOURCE | TARGET |
| --- | --- |
| Maximum size of single blob container | Same as maximum storage account capacity |
| Maximum size of a block blob | 50,000 × 100 MiB (approximately 4.75 TiB) |
| Target request rate for a single blob | Up to 500 requests per second |
| Target throughput for a single block blob | Up to storage account ingress/egress limits1 |
1 Throughput for a single blob depends on several factors, including, but not limited to: concurrency, request size,
performance tier, speed of source for uploads, and destination for downloads. To take advantage of the
performance enhancements of high-throughput block blobs, upload larger blobs or blocks. Specifically, call the Put
Blob or Put Block operation with a blob or block size that is greater than 4 MiB for standard storage accounts. For
premium block blob or for Data Lake Storage Gen2 storage accounts, use a block or blob size that is greater than
256 KiB.
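Both constraints, at most 50,000 blocks per block blob and a greater-than-4-MiB block size for the high-throughput path on standard accounts, can be satisfied by computing the block size up front. A sketch; the helper name is ours:

```python
MAX_BLOCKS = 50_000                           # blocks per block blob
MIN_HIGH_THROUGHPUT_BLOCK = 4 * 1024 * 1024   # must exceed 4 MiB on standard

def plan_block_size(blob_bytes: int) -> int:
    """Smallest block size (bytes) that fits blob_bytes into at most
    50,000 blocks while staying above the 4 MiB high-throughput threshold."""
    needed = -(-blob_bytes // MAX_BLOCKS)  # ceiling division
    return max(needed, MIN_HIGH_THROUGHPUT_BLOCK + 1)
```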
Azure Files limits
For more information on Azure Files limits, see Azure Files scalability and performance targets.
| RESOURCE | STANDARD FILE SHARES | PREMIUM FILE SHARES |
| --- | --- | --- |
| Minimum size of a file share | No minimum; pay as you go | 100 GiB; provisioned |
| Maximum IOPS per share | 10,000 IOPS*, 1,000 IOPS | 100,000 IOPS |
| Target throughput for a single file share | Up to 300 MiB/sec*, up to 60 MiB/sec | See premium file share ingress and egress values |
| Maximum egress for a single file share | See standard file share target throughput | Up to 6,204 MiB/s |
| Maximum ingress for a single file share | See standard file share target throughput | Up to 4,136 MiB/s |
| Maximum open handles per file | 2,000 open handles | 2,000 open handles |
| Maximum number of share snapshots | 200 share snapshots | 200 share snapshots |
* Available in most regions; see Regional availability for details on available regions.
Azure File Sync limits
| RESOURCE | TARGET | HARD LIMIT |
| --- | --- | --- |
| Sync groups per Storage Sync Service | 100 sync groups | Yes |
| Minimum file size for a file to be tiered | V9: based on file system cluster size (double the file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. V8 and older: 64 KiB | Yes |
NOTE
An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will
not be able to operate.
| RESOURCE | TARGET |
| --- | --- |
| Maximum request rate per storage account | 20,000 messages per second, which assumes a 1-KiB message size |
| Target throughput for a single queue (1-KiB messages) | Up to 2,000 messages per second |
| RESOURCE | TARGET |
| --- | --- |
| Number of tables in an Azure storage account | Limited only by the capacity of the storage account |
| Number of partitions in a table | Limited only by the capacity of the storage account |
| Number of entities in a partition | Limited only by the capacity of the storage account |
| Maximum number of properties in a table entity | 255 (including the three system properties, PartitionKey, RowKey, and Timestamp) |
| Maximum total size of an individual property in an entity | Varies by property type. For more information, see Property Types in Understanding the Table Service Data Model. |
| Size of an entity group transaction | A transaction can include at most 100 entities and the payload must be less than 4 MiB in size. An entity group transaction can include an update to an entity only once. |
| Maximum request rate per storage account | 20,000 transactions per second, which assumes a 1-KiB entity size |
| Target throughput for a single table partition (1-KiB entities) | Up to 2,000 entities per second |
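The entity group transaction quota above (at most 100 entities, payload under 4 MiB) suggests batching writes client-side. A sketch of the splitting logic; note it doesn't deduplicate entities, and the service additionally requires every entity in a transaction to share one partition key and appear at most once:

```python
MAX_ENTITIES = 100             # entities per entity group transaction
MAX_PAYLOAD = 4 * 1024 * 1024  # payload must stay under 4 MiB

def split_for_egt(entities, size_of):
    """Split entities into batches that respect both entity group
    transaction quotas. size_of estimates an entity's serialized size."""
    batch, size = [], 0
    for e in entities:
        s = size_of(e)
        if batch and (len(batch) == MAX_ENTITIES or size + s >= MAX_PAYLOAD):
            yield batch
            batch, size = [], 0
        batch.append(e)
        size += s
    if batch:
        yield batch
```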
For Standard storage accounts: A Standard storage account has a maximum total request rate of 20,000
IOPS. The total IOPS across all of your virtual machine disks in a Standard storage account should not
exceed this limit.
You can roughly calculate the number of highly utilized disks supported by a single Standard storage
account based on the request rate limit. For example, for a Basic tier VM, the maximum number of highly
utilized disks is about 66, which is 20,000/300 IOPS per disk. The maximum number of highly utilized disks
for a Standard tier VM is about 40, which is 20,000/500 IOPS per disk.
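The disk-count arithmetic above is just integer division against the account's request-rate cap; a one-liner makes the worked numbers reproducible (function name is ours):

```python
ACCOUNT_IOPS_LIMIT = 20_000  # Standard storage account request-rate cap

def max_highly_utilized_disks(iops_per_disk: int) -> int:
    """How many fully utilized disks a single Standard storage account can
    serve: 300 IOPS/disk for Basic tier VMs, 500 for Standard tier VMs."""
    return ACCOUNT_IOPS_LIMIT // iops_per_disk
```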
For Premium storage accounts: A Premium storage account has a maximum total throughput rate of 50
Gbps. The total throughput across all of your VM disks should not exceed this limit.
For more information, see Virtual machine sizes.
Managed virtual machine disks
Standard HDD managed disks
| STANDARD DISK TYPE | S4 | S6 | S10 | S15 | S20 | S30 | S40 | S50 | S60 | S70 | S80 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Disk size in GiB | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
| IOPS per disk | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 1,300 | Up to 2,000 | Up to 2,000 |
| Throughput per disk | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 300 MiB/sec | Up to 500 MiB/sec | Up to 500 MiB/sec |
Standard SSD managed disks

| STANDARD SSD SIZES | E1 | E2 | E3 | E4 | E6 | E10 | E15 | E20 | E30 | E40 | E50 | E60 | E70 | E80 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
| IOPS per disk | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 2,000 | Up to 4,000 | Up to 6,000 |
| Throughput per disk | Up to 25 MiB/sec | Up to 25 MiB/sec | Up to 25 MiB/sec | Up to 25 MiB/sec | Up to 50 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 60 MiB/sec | Up to 400 MiB/sec | Up to 600 MiB/sec | Up to 750 MiB/sec |
Premium SSD managed disks

| PREMIUM SSD SIZES | P1 | P2 | P3 | P4 | P6 | P10 | P15 | P20 | P30 | P40 | P50 | P60 | P70 | P80 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
| Provisioned IOPS per disk | 120 | 120 | 120 | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
| Provisioned throughput per disk | 25 MiB/sec | 25 MiB/sec | 25 MiB/sec | 25 MiB/sec | 50 MiB/sec | 100 MiB/sec | 125 MiB/sec | 150 MiB/sec | 200 MiB/sec | 250 MiB/sec | 250 MiB/sec | 500 MiB/sec | 750 MiB/sec | 900 MiB/sec |
| Max burst duration | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | | | | | | |
| PREMIUM STORAGE DISK TYPE | P10 | P20 | P30 | P40 | P50 |
| --- | --- | --- | --- | --- | --- |
| Disk size | 128 GiB | 512 GiB | 1,024 GiB (1 TB) | 2,048 GiB (2 TB) | 4,095 GiB (4 TB) |
| Maximum throughput per disk | 100 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec |
| Maximum number of disks per storage account | 280 | 70 | 35 | 17 | 8 |
| LIMIT IDENTIFIER | LIMIT | COMMENTS |
| --- | --- | --- |
| Maximum number of schedules per bandwidth template | 168 | A schedule for every hour, every day of the week. |
| Maximum size of a tiered volume on physical devices | 64 TB for StorSimple 8100 and StorSimple 8600 | StorSimple 8100 and StorSimple 8600 are physical devices. |
| Maximum size of a tiered volume on virtual devices in Azure | 30 TB for StorSimple 8010; 64 TB for StorSimple 8020 | StorSimple 8010 and StorSimple 8020 are virtual devices in Azure that use Standard storage and Premium storage, respectively. |
| Maximum size of a locally pinned volume on physical devices | 9 TB for StorSimple 8100; 24 TB for StorSimple 8600 | StorSimple 8100 and StorSimple 8600 are physical devices. |
| Maximum number of snapshots of any type that can be retained per volume | 256 | This amount includes local snapshots and cloud snapshots. |
| Restore and clone recover time for tiered volumes | <2 minutes | The volume is made available within 2 minutes of a restore or clone operation, regardless of the volume size. The volume performance might initially be slower than normal as most of the data and metadata still resides in the cloud. Performance might increase as data flows from the cloud to the StorSimple device. The total time to download metadata depends on the allocated volume size. Metadata is automatically brought into the device in the background at the rate of 5 minutes per TB of allocated volume data. This rate might be affected by Internet bandwidth to the cloud. The restore or clone operation is complete when all the metadata is on the device. Backup operations can't be performed until the restore or clone operation is fully complete. |
| Restore recover time for locally pinned volumes | <2 minutes | The volume is made available within 2 minutes of the restore operation, regardless of the volume size. The volume performance might initially be slower than normal as most of the data and metadata still resides in the cloud. Performance might increase as data flows from the cloud to the StorSimple device. The total time to download metadata depends on the allocated volume size. Metadata is automatically brought into the device in the background at the rate of 5 minutes per TB of allocated volume data. This rate might be affected by Internet bandwidth to the cloud. Unlike tiered volumes, if there are locally pinned volumes, the volume data is also downloaded locally on the device. The restore operation is complete when all the volume data has been brought to the device. The restore operations might be long, and the total time to complete the restore will depend on the size of the provisioned local volume, your Internet bandwidth, and the existing data on the device. Backup operations on the locally pinned volume are allowed while the restore operation is in progress. |
| Maximum client read/write throughput, when served from the SSD tier* | 920/720 MB/sec with a single 10-gigabit Ethernet network interface | Up to two times with MPIO and two network interfaces. |
| Maximum client read/write throughput, when served from the cloud tier* | 11/41 MB/sec | Read throughput depends on clients generating and maintaining sufficient I/O queue depth. |
*Maximum throughput per I/O type was measured with 100 percent read and 100 percent write scenarios. Actual
throughput might be lower and depends on I/O mix and network conditions.
| LIMIT IDENTIFIER | LIMIT | COMMENTS |
| --- | --- | --- |
| Maximum number of inputs per job | 60 | There's a hard limit of 60 inputs per Azure Stream Analytics job. |
| Maximum number of outputs per job | 60 | There's a hard limit of 60 outputs per Stream Analytics job. |
| Maximum number of functions per job | 60 | There's a hard limit of 60 functions per Stream Analytics job. |
| Maximum number of streaming units per job | 192 | There's a hard limit of 192 streaming units per Stream Analytics job. |
| Maximum number of jobs per region | 1,500 | Each subscription can have up to 1,500 jobs per geographical region. |
1 Virtual machines created by using the classic deployment model instead of Azure Resource Manager are automatically stored in a cloud service. You can add more virtual machines to that cloud service for load balancing and availability.
2 Input endpoints allow communications to a virtual machine from outside the virtual machine's cloud service. Virtual machines in the same cloud service or virtual network can automatically communicate with each other. For more information, see How to set up endpoints to a virtual machine.
Virtual Machines limits - Azure Resource Manager
The following limits apply when you use Azure Resource Manager and Azure resource groups.
| RESOURCE | LIMIT |
| --- | --- |
| VM total cores per subscription | 20¹ per region. Contact support to increase limit. |
| Azure Spot VM total cores per subscription | 20¹ per region. Contact support to increase limit. |
| VM per series, such as Dv2 and F, cores per subscription | 20¹ per region. Contact support to increase limit. |
1 Default limits vary by offer category type, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350.
2 With Azure Resource Manager, certificates are stored in the Azure Key Vault. The number of certificates is unlimited for a subscription. There's a 1-MB limit of certificates per deployment, which consists of either a single VM or an availability set.
NOTE
Virtual machine cores have a regional total limit. They also have a limit for regional per-size series, such as Dv2 and F. These
limits are separately enforced. For example, consider a subscription with a US East total VM core limit of 30, an A series core
limit of 30, and a D series core limit of 30. This subscription can deploy 30 A1 VMs, or 30 D1 VMs, or a combination of the
two not to exceed a total of 30 cores. An example of a combination is 10 A1 VMs and 20 D1 VMs.
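A planned deployment can be validated against both the regional total and the per-series core limits described in this note. A sketch; the shape of the inputs (a series-to-cores mapping) is ours:

```python
def within_core_limits(deployment, total_limit, series_limits):
    """Check a planned deployment ({series: cores}) against the regional
    total core limit and each per-series core limit, which are enforced
    separately."""
    total = sum(deployment.values())
    if total > total_limit:
        return False
    return all(cores <= series_limits.get(series, 0)
               for series, cores in deployment.items())
```

With a 30-core total and 30-core A and D series limits, 10 A1 VMs plus 20 D1 VMs passes, while 30 A1 VMs plus 10 D1 VMs exceeds the total and fails.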
See also
Understand Azure limits and increases
Virtual machine and cloud service sizes for Azure
Sizes for Azure Cloud Services
Naming rules and restrictions for Azure resources