LenelS2 OnGuard Deployment Guide - tcm841 146850
Contents
1 Overview
2 The Cloud Environment
3 The OnGuard® Platform & The Cloud
4 OnGuard using Infrastructure as a Service
4.1 OnGuard IaaS Reference Designs
4.1.1 OnGuard 32ES/ADV Architecture for the Cloud
4.1.2 OnGuard Pro Architecture for the Cloud
4.1.3 OnGuard Enterprise Architecture for the Cloud
5 Additional Deployment Considerations
5.1 On Premise Hardware
5.2 OnGuard Database Server
5.3 Network
5.4 Video Considerations
5.5 Licensing Options
5.6 Support Statement
5.7 Conclusion
6 Reference Information
6.1 Network Latencies by Connection Type
6.2 Panel Download Times
1 Overview
This document is a high-level design and reference guide for deploying the LenelS2
OnGuard Security Solution within a cloud infrastructure environment. Attention is given
to general principles applicable to any cloud infrastructure. Specific detail is provided on
two of the most popular public cloud environments, Microsoft® Azure® and Amazon®
Web Services® (AWS®).
Because there is no need to purchase the computing equipment, cloud resources are
delivered and purchased using a services model. Users consume and pay for
computing, networking, and storage on an ongoing fractional basis, for only what they
need, when they need it.
The delivery of computing services can take a number of different forms. For our
purposes, three are relevant: Infrastructure as a Service (IaaS), Platform as a Service
(PaaS), and Software as a Service (SaaS).
IaaS Model
In an IaaS model, a cloud provider hosts the infrastructure components including
servers, storage and networking hardware and software. An end user leases these
components, typically on a Virtual Machine (VM) basis, and then installs and runs his or
her applications within this environment. IaaS provides the tools and services necessary
to assist the end user in maintaining and managing infrastructure, including the ability to
adjust the amount and type of the component resources used to best match the
applications’ needs.
PaaS Model
PaaS extends the IaaS model by offering end users access to supporting software
components, such as middleware and application runtime environments. This reduces
the amount of software that the end users would otherwise need to develop or provide
in order to run their application suites.
Cloud service providers can offer advanced capabilities for security and performance,
combined with network services such as load balancing and WAN optimization services.
Beyond the choice of cloud computing model, consideration should be given to a
provider’s overall capabilities. Specific evaluation criteria may include assessing a
provider’s number and location of data centers, their ability to support various WAN
traffic architectures and bandwidths, their data security and compliance models, and
their support of legacy application environments.
Deploying OnGuard in a cloud environment can lower capital costs through the efficient
lease of virtualized resources in place of the purchase of dedicated hardware. Operating
costs can be lowered by leveraging cloud data center personnel to manage the
virtualized and shared hardware infrastructure. Highly distributed and global
deployments can be simplified by taking advantage of the fact that large cloud providers
have already invested in global footprint data centers. Cloud environments also offer
high levels of built-in resiliency and redundancy. They can simplify IT compliance
requirements by offering such compliance as a core competency of the provider.
With these benefits in mind, careful attention should be paid to the tradeoffs and
potential risks that the use of cloud hosting can impose. Principal among these is the
reliability and overall throughput of the network connections between the premise(s) to
be secured and the cloud infrastructure. Panels, readers, cameras, and other security
sensors and devices must necessarily remain on the secured site. And while these
devices, when properly deployed, can function autonomously in the event of a Wide
Area Network (WAN) loss, the ability to properly monitor and manage them can be
seriously curtailed. Additionally, traditional, full featured ‘thick’ client applications may
not perform acceptably across the higher latency, lower bandwidth links that often
connect a secured site to the cloud.
3 The OnGuard® Platform & The Cloud
In the simplest sense, deploying OnGuard® into a cloud environment is very similar to
an on-premise deployment using Virtual Machines (VMs). OnGuard is well suited to
leverage the IaaS and selected PaaS capabilities of large cloud providers as long as its
client-server roots are kept in mind when designing such a deployment. The following
sections outline the best practices for creating such a deployment. These best practices
should apply equally to any cloud environment, whether public or private in nature.
Some specific consideration is also given to two of the most popular public cloud
providers, Amazon AWS and Microsoft Azure.
4 OnGuard using Infrastructure as a Service
Independent of the OnGuard tier being deployed, there are some general guidelines that
can be applied.
Utilize a strong Virtual Machine (VM) for the OnGuard Application Server. Many
cloud providers offer different processor types and memory configurations for their
Virtual Machines. These configurations can be specialized for bursty workloads,
computationally intensive tasks, high throughput, etc. For OnGuard, a general purpose
processor class is recommended. The minimum hardware requirements found in the
OnGuard Specifications Sheets provide a good starting point.
For small to medium installations, combine the Application, Database and Comm
Servers in one VM. This approach is a common practice in on-premise OnGuard
installations and is certainly acceptable in a cloud setting where virtual machines are
engineered to be highly available, data redundant and resilient to outages.
When using separate VMs for the Application, Database, and Comm Servers,
consider colocation. Low latency connections between these services are critical to
assuring a high performing OnGuard® installation. (See the section on Performance
Profiling for acceptable system latencies.) Placing these services in different VMs may
help with resilience and performance tuning so long as the connectivity between them is
robust. The VMs supporting these services should be in the same data center and the
same virtual subnet.
Use OnGuard Browser-based Clients wherever and whenever possible. The new
OnGuard browser-based clients are built for Internet connectivity and are much less
sensitive to network delays than the older, Windows® based thick clients. If, for
whatever reason, you must use one or more thick clients then there are two choices for
assuring performance. One is to run the thick client in the Cloud in a VM and use a
remote desktop console on-site, communicating with the cloud client via a Remote
Desktop Protocol. The other approach is to use a high performance WAN link. The
latency target should be in the 50ms range. Amazon AWS provides this option via their
Direct Connect® service. Microsoft Azure has a similar offering called ExpressRoute®.
4.1 OnGuard IaaS Reference Designs
Using the principles of the previous section, each design builds on the preceding in
terms of the complexity and the potential scale of the deployment.
4.1.1 OnGuard 32ES/ADV Architecture for the Cloud
[Figure: OnGuard 32ES/ADV cloud architecture — a single cloud VM hosting all OnGuard server components, including the Comm Server, connected to the campus via a secure VPN]
In this scenario, a Virtual Private Cloud is created that extends the campus network to
the cloud via secure VPN connections. In the cloud environment, a single VM is
instantiated that hosts all of the OnGuard server components. Most cloud providers
design redundancy into the physical hosting of their VMs, so this configuration in the
cloud is more resilient than a single physical server operating on the campus.
Client Applications: In the simplest deployment scenario it is envisioned that all on-site
access to the system be performed using the OnGuard® web clients. This approach
provides the best user experience and mitigates the need to invest in more costly site-
to-cloud network capacity. If thick clients are required, then the most cost effective
option is often to run the thick clients in the cloud environment, in a client suitable VM,
and create an RDP connection from the cloud to an RDP console on-site. Depending on
the number of open applications, and the degree of graphics processing being
performed on the thick client, the cloud client VM can be sized from an Intel i3 with
4GB RAM to an Intel Xeon E5 with 8GB RAM.
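The sizing guidance above can be expressed as a simple selection rule. The following Python sketch is purely illustrative; the thresholds (more than three open applications, or heavy graphics processing, pushes the spec to the high end of the range) are our own assumptions, not LenelS2 guidance:

```python
from dataclasses import dataclass

@dataclass
class ClientVmSpec:
    processor: str
    ram_gb: int

def size_thick_client_vm(open_apps: int, heavy_graphics: bool) -> ClientVmSpec:
    """Pick a cloud VM spec for an RDP-hosted thick client.

    Hypothetical thresholds spanning the guide's range of an
    Intel i3 with 4 GB RAM up to a Xeon E5 class with 8 GB RAM.
    """
    if heavy_graphics or open_apps > 3:
        return ClientVmSpec("Intel Xeon E5 class", 8)
    return ClientVmSpec("Intel i3 class", 4)
```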
4.1.2 OnGuard Pro Architecture for the Cloud
[Figure: OnGuard Pro cloud architecture — a Virtual Private Cloud (10.0.0.0/16) with Secure Gateway, App Server, DB Server, and multiple Comm Servers (≤100 access panels each); the campus connects through hardware VPN with <300ms latency for browser, RDP, and thick clients]
Virtual Machines: General performance processors such as those in the Xeon E5
class (“Ivy Bridge” or “Sandy Bridge”) or a later generation are acceptable, but higher
cache and RAM values should be considered (up to 15MB and 16GB respectively).
Comm Servers should be initially sized based on access control panel counts. 100
access panels per Comm Server is a good rule of thumb, keeping in mind that traffic
volumes vary from panel group to panel group. Microsoft® Windows Server® 2012 or
2016 should be installed. For other compatible versions of Windows, see the Operating
System Compatibility Chart on the LenelS2 Partner Center portal.
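The 100-panels-per-Comm-Server rule of thumb translates directly into an initial server count. A minimal sketch (the function name and rounding behavior are our own; actual sizing should still account for per-panel-group traffic variation):

```python
import math

def comm_servers_needed(panel_count: int, panels_per_server: int = 100) -> int:
    """Initial Comm Server count using the ~100-panels-per-server rule of thumb."""
    if panel_count <= 0:
        return 0
    return math.ceil(panel_count / panels_per_server)
```

For example, a site with 250 access panels would start with three Comm Servers.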
VM to VM Connectivity: When placing the OnGuard® server components into separate
VMs, special attention needs to be paid to the VM to VM connectivity and virtual
network latency. Application Server to DB Server, Application Server to Comm Server,
and Comm Server to DB Server should maintain connection latencies of between 5ms
and 40ms, with the low end of that range highly preferred. This implies that these server
VMs should be colocated within the same cloud datacenter, possibly within the same
physical rack space if the provider supports such an option.
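The VM-to-VM latency targets above can be spot-checked from any of the server VMs. This illustrative Python sketch (not part of the OnGuard product) uses TCP connect time as a rough proxy for network latency; the host and port are whatever reachable service endpoint you choose, such as the database listener:

```python
import socket
import statistics
import time

def tcp_connect_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds, a rough proxy for VM-to-VM latency."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # Each connection is opened, timed, and immediately closed.
        with socket.create_connection((host, port), timeout=2.0):
            pass
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)
```

A result comfortably inside the 5ms-40ms window suggests the VMs are colocated appropriately; anything higher warrants reviewing datacenter and subnet placement.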
Cloud to Site Connectivity: Most cloud providers offer a Secure Gateway/Virtual VPN
service to secure the connection on the cloud side of the WAN connection. On the user
premise a hardware VPN device is recommended. Bandwidth requirements are difficult
to forecast, but network latency tolerances can be bounded. For Comm Servers, thin
clients running OnGuard web clients, and remote desktop clients network latencies
should be below 300ms in order to assure reasonable client response and acceptable
performance for the communication with the on-site access control panels. If thick
clients must be deployed on-site, then higher speed network connections should be
employed that assure latencies at or below 60ms. Examples of higher speed links
include ExpressRoute (Microsoft Azure) and Direct Connect (Amazon AWS).
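As a planning aid, the latency tolerances above can be captured in a small lookup. The thresholds are the guide's stated targets; the function and client-type names are hypothetical:

```python
# Site-to-cloud latency limits from the guidance above: browser-based and
# remote desktop clients tolerate up to ~300 ms; on-site thick clients
# need a high speed link at or below ~60 ms.
LATENCY_LIMITS_MS = {"browser": 300.0, "rdp": 300.0, "thick": 60.0}

def link_is_acceptable(client_type: str, measured_ms: float) -> bool:
    """True if the measured site-to-cloud latency meets the client type's limit."""
    return measured_ms <= LATENCY_LIMITS_MS[client_type]
```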
Client Applications: In the scenario pictured, a mix of thin, remote desktop, and thick
clients are deployed. Where thick clients are considered necessary, then high speed
site-to-cloud links should be employed as discussed under the Connectivity subsection,
above. For cloud-based thick clients utilizing RDP connections, the cloud client VM can
be sized from an Intel i3 with 4GB RAM to an Intel Xeon E5 with 8GB RAM depending on the
number of running applications and the level of graphics processing being performed.
4.1.3 OnGuard Enterprise Architecture for the Cloud
[Figure: OnGuard Enterprise cloud architecture — multiple cloud datacenters joined by a datacenter interconnect, each hosting an App Server and a Regional Database Server (subnets 10.0.0.0/16 and 10.0.0.0/20); campuses connect through Secure Gateways over hardware VPN or high speed direct connections, with Comm Servers sized at ≤100 access panels each]
Cloud to Site Connectivity: Most cloud providers offer a Secure Gateway/Virtual VPN
service to secure the connection on the cloud side of the WAN connection. On the user
premise, a hardware VPN device is recommended. Bandwidth requirements are difficult
to forecast, but network latency tolerances can be bounded. For Comm Servers, thin
clients running OnGuard browser-based web clients and remote desktop clients network
latencies should be below 300ms in order to assure reasonable client response and
acceptable performance for the communication with on-site access control panels. If
thick clients must be deployed on-site, then higher speed network connections should
be employed that assure latencies below 60ms. Examples of higher speed links include
ExpressRoute® (Microsoft Azure®) and Direct Connect® (Amazon AWS®).
Client Applications: In the scenario pictured above, a mix of thin, remote desktop, and
thick clients are deployed. Where thick clients are considered necessary, then high
speed site-to-cloud links should be employed as discussed under the Connectivity
subsection, above. For cloud-based thick clients utilizing RDP connections, the cloud
client VM can be sized from an Intel i3 with 4GB RAM to an Intel Xeon E5 with 8GB RAM
depending on the number of running applications and the level of graphics processing
being performed.
5 Additional Deployment Considerations
5.1 On Premise Hardware
Comm Servers deserve special mention as they are the most common scale-out
component of an OnGuard® system and might seem a candidate for on-premise
placement. The critical design point for a Comm Server is not its connectivity to
the access control panels but rather to the Application and Database Servers.
Colocation of these components is strongly encouraged for a high performing
installation.
Client workstations should leverage the new OnGuard browser-based clients whenever
possible. These applications are designed for internet connectivity and are far more
tolerant of varying network latencies. As described in the preceding sections, if thick
client apps are considered necessary, there are two options to consider. The first is to
run those client applications in the cloud and use remote desktop protocols and
applications to display them on-premise. The second is to assure the necessary
network bandwidth and quality of service attributes through the use of higher quality
internet connections. Many cloud providers offer dedicated cloud connections for these
types of situations.
5.2 OnGuard Database Server
Microsoft® SQL Server® and SQL Server Express® are supported for cloud-based
deployments of OnGuard. A full list of compatible versions is available on the Lenel
Partner Center portal. Oracle® Database Server has not been tested for cloud
deployment and is not supported at this time.
With the release of OnGuard system v7.5, cloud deployments in the Microsoft Azure
environment now support the Azure SQL® database. For some organizations this
service-based offering may represent an attractive alternative to purchasing a full
Microsoft SQL Server license. Refer to the Microsoft Azure home page for additional
information and pricing options.
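For teams evaluating Azure SQL, connectivity is typically established through ODBC. The sketch below only assembles a standard-form ODBC connection string; the server, database, and credential values are hypothetical placeholders, and the installed ODBC driver name may differ on your system:

```python
def azure_sql_conn_str(server: str, database: str, user: str, password: str) -> str:
    """Assemble an ODBC connection string for an Azure SQL database.

    The driver name below is a common choice but should match what is
    actually installed; all argument values here are illustrative.
    """
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
    )
```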
5.3 Network
A site-to-site connection (S2S) joins a cloud virtual network to an on-premise network
via an IPSEC tunnel established through the use of a (preferably) hardware-based VPN
device. Organizations can domain-join VMs in the cloud network and leverage their on-
premises domain name servers (DNS) for address resolution between on-premise and
cloud assets. The persistence, cost, and security profile of a S2S solution make it the
most appropriate option for an OnGuard system IaaS implementation.
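A quick sanity check when relying on shared DNS between on-premise and cloud assets is to confirm that hostnames actually resolve from both sides. This standard-library Python sketch is illustrative only:

```python
import socket

def resolves(hostname: str) -> bool:
    """True if the hostname resolves through the currently configured DNS."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```

Running this from a cloud VM against on-premise server names (and vice versa) verifies that domain-joined address resolution is working across the S2S tunnel.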
ExpressRoute and AWS Direct Connect are two offerings, from Microsoft and Amazon
respectively, which offer a direct connection to the cloud, bypassing the public internet.
The reliability and speed of such connections make these services a good fit for
scenarios involving thick client applications, data migration and disaster recovery.
5.4 Video Considerations
[Figure 6: Video deployment — video servers/recorders remain on-premise; local video clients retrieve video over the campus connection, while higher or lower rate/resolution video streams to cloud-hosted video clients and only RDP display data flows back down to on-site RDP clients; the Virtual Private Cloud (10.0.0.0/16) with Secure Gateway, App Server, DB Server, and Comm Servers (≤100 access panels each) is reached via hardware VPN (<300ms latency) or a high speed direct connection (<60ms latency)]
This scenario follows the same Virtual Private Cloud model as the previous ones, but
unlike the access control server components, the video server/recorder units remain on-
premise. The video server/recorder units may be accessed and managed via the cloud
infrastructure but the video content itself is not stored in the cloud.
Video monitoring and retrieval may be performed in one of several ways. The video may
be retrieved directly from the recorder when the client is within the same campus or LAN
environment. A similar approach may be used across campuses when those campuses
share network connections independent of the cloud connection and the network speed
is sufficient to support such retrieval. When the campus interconnect is supported only
via the Virtual Private Cloud connection, the best approach may be to run the video
client in a VM within the cloud infrastructure and use an RDP connected client on the
receiving campus to view the video. This type of approach makes the best sense when,
as mentioned above, the cloud provider charges for the transmission of the video from
cloud to campus (but not for receiving same). This approach also makes sense when
bandwidth is limited or there are multiple monitoring clients on a campus since only the
video display data must be transmitted and not the full video stream.
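The bandwidth argument can be made concrete with a back-of-envelope calculation. The rates below (4 Mbps per full video stream, 1 Mbps per RDP session) are illustrative assumptions, not measured values:

```python
def site_link_mbps(clients: int, stream_mbps: float = 4.0,
                   rdp_mbps: float = 1.0, via_cloud_rdp: bool = False) -> float:
    """Approximate site-link load for video monitoring (illustrative rates).

    Direct retrieval pulls the full video stream once per monitoring client;
    the cloud-hosted-client approach sends only RDP display data down to
    each on-site console.
    """
    per_client = rdp_mbps if via_cloud_rdp else stream_mbps
    return clients * per_client
```

Under these assumptions, five monitoring clients retrieving video directly would load the link at 20 Mbps, while five RDP consoles viewing cloud-hosted video clients would need only 5 Mbps.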
Client Applications: In Figure 6, a mix of thin, remote desktop, and thick clients are
deployed. Web browser-based or thick video clients are depicted using local and
campus-to-campus connectivity to view video. These connections avoid the
transmission of video to and from the cloud, minimizing the potential for additional
network latency and cloud expense. It is assumed that management traffic flows
through the cloud in order to set up the video connections to the edge-based servers.
As a result, cloud connectivity is still required for proper traffic routing.
5.7 Conclusion
Deploying an OnGuard system in an IaaS cloud environment can be an effective way of
managing the capital and the operational costs of the system. In designing a cloud
deployment, the scale of the system, in terms of capacity as well as geography, must be
considered. Special attention should be paid to the placement of the server components
of the solution. Specific guidelines to minimize variables such as connection type and
LenelS2.com
Specifications subject to change without notice.
©2019, 2021 Carrier. All Rights Reserved. All trademarks are the property of their respective owners.
LenelS2 is a part of Carrier. 2021/08