ServiceNow Istanbul IT Operations Management
Contents
IT Operations Management......................................................................................................................... 4
Discovery............................................................................................................................................. 4
Discovery basics....................................................................................................................... 4
Activate Discovery.................................................................................................................. 12
Discovery resource utilization.................................................................................................12
Discovery setup...................................................................................................................... 14
Running discoveries in your network..................................................................................... 22
Discovery configuration...........................................................................................................54
Advanced Discovery configuration....................................................................................... 169
Patterns and horizontal discovery........................................................................................ 209
Discovery probes and sensors............................................................................................. 211
Data collected by Discovery.................................................................................................278
Discovery troubleshooting.....................................................................................................479
Content packs for Discovery................................................................................................ 486
Application Profile Discovery - Legacy.................................................................................486
Service Mapping..............................................................................................................................514
Get started with Service Mapping........................................................................................ 515
Service Mapping setup......................................................................................................... 537
Plan business services......................................................................................................... 614
Business service definition................................................................................................... 628
Business service analysis and maintenance using maps.................................................... 666
Advanced Service Mapping configuration............................................................................ 702
Pattern customization........................................................................................................... 730
Event Management......................................................................................................................... 813
Understanding Event Management...................................................................................... 813
Event Management setup.....................................................................................................819
Administer events................................................................................................................. 854
Manage and monitor alerts.................................................................................................. 944
Service Analytics................................................................................................................ 1012
Operational Metrics.............................................................................................................1030
Cloud Management....................................................................................................................... 1054
Cloud Management product overview................................................................................1055
How Cloud Management works......................................................................................... 1106
Cloud administrator actions................................................................................................ 1111
Cloud operator actions....................................................................................................... 1190
Cloud user actions..............................................................................................................1204
Cloud Management reports................................................................................................ 1244
Advanced topics................................................................................................................. 1252
Errors and Troubleshooting................................................................................................ 1282
Content packs for Cloud Management...............................................................................1289
Index........................................................................................................................................................ 1290
IT Operations Management
Track and manage IT resources and systems.
Discovery
ServiceNow® Discovery finds applications and devices on your network, and then updates the CMDB with
the information it finds.
Discovery is available as a separate subscription from the rest of the ServiceNow platform. See Activate
Discovery on page 12 for details.
Discovery basics
Discovery finds computers, servers, printers, and a variety of IP-enabled devices, and the applications that
run on them. It can then update the CIs in your CMDB with the data it collects.
Discovery uses these components to explore computers and devices (which are also known as hosts):
Probes and sensors: Probes and sensors are scripts that collect data on the host, process it, and update the CMDB. Several probes and sensors are provided out of the box, but you can also customize them and create custom ones. You can also configure parameters to control the behavior of a particular probe every time it runs.
Discovery phases
The four phases of discovery are outlined here. For a detailed, step-by-step breakdown of each phase, see Horizontal discovery process flow on page 8.
Discovery follows these phases:
• Scanning: Discovery sends the Shazzam probe to the network to see if specified ports are open on the network and if they can respond to queries. For example, if Shazzam finds a device that responds on port 135, Discovery knows that it is a Windows server.
• Classification: If Discovery finds devices, it continues to send probes to determine the type of device at each IP address. For example, Discovery would send the WMI probe, which is used for Windows devices, and discover that the server is running Windows 2012. Classifiers specify which trigger probes to run for identification and exploration.
• Identification: Discovery tries to gather more information about the device, looks at those attributes, determines whether the device has a CI in the CMDB, and reconciles that information by either updating the existing CI or creating a new one. Discovery uses additional probes, sensors, and identifiers to do this. Identifiers, also known as identification rules, specify the attributes that the probes look at when reconciling data with the CIs in the CMDB. If you are using patterns, Discovery uses the appropriate identification rule for the CI type specified in the pattern.
• Exploration: The identifier in the previous step (Identification) launches the exploration probes configured in the classification record to gather additional information about the device, such as the applications running on it and additional device attributes.
Note: Both Discovery and Service Mapping can use the same pattern; however, you define steps
in the pattern differently for the two applications.
Discovery uses special server processes called MID Servers. Each MID Server is a lightweight Java process that can run on a Linux, UNIX, or Windows server. The job of the MID Server during Discovery is to execute probes and patterns, and then return the results to the instance for processing. It does not retain any information.
MID Servers communicate with the instance they are associated with using a simple model: they query the instance for the initial probes to run, and they post the results back to the instance. There, the data collected by the probes is processed by sensors, which decide how to proceed. Optionally, if you use patterns, the operations in the patterns decide how to proceed. The MID Server initiates all communications, using SOAP on HTTPS, which means that all communications are secure and are initiated inside the enterprise's firewall. No special firewall rules or VPNs are required.
Discovery is agentless, meaning that it does not require any permanent software to be installed on any computer or device to be discovered. The MID Server uses several techniques to probe devices without using agents. For example, the MID Server uses SSH to connect to a UNIX or Linux computer, and then runs a standard command (such as uname or df) to gather information. Similarly, it uses the Simple Network Management Protocol (SNMP) to gather information from a network switch or a printer.
IP service affinity
IP service affinity saves the IP service information that was used to successfully find a device and associates it with the IP address of the device. Using this information, Discovery can target the device with the correct protocol in subsequent runs. Discovery stores the successful IP service, along with the IP address, in the IP Service Affinity table [ip_service_affinity].
For example, a network device has both an SSH port and an SNMP port open. Because of its agentless design, Discovery tries SSH first, but network devices should be discovered through SNMP, so the SSH probe fails. The failure triggers the SNMP probe, which succeeds. With the association between the IP address and the IP service recorded, subsequent discovery runs that target this IP address use SNMP first, because that is the probe that succeeded.
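A minimal sketch of how the stored affinity could be consulted before choosing a protocol. Only the table name is documented here; the field names (ip_address, service) and the GlideRecord usage are illustrative assumptions.

    // Sketch: look up the last successful protocol for an IP address.
    // Assumes hypothetical 'ip_address' and 'service' fields on the table.
    function getPreferredProtocol(ip) {
        var gr = new GlideRecord('ip_service_affinity');
        gr.addQuery('ip_address', ip);
        gr.query();
        if (gr.next())
            return gr.getValue('service'); // e.g. 'SNMP' after an earlier SSH failure
        return null; // no affinity yet: Discovery falls back to its default order
    }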
Discovery communications
Discovery communications cover how your instance talks to the MID Servers and how the MID Servers
talk to your devices. The MID Server is installed on the local internal network. All communications between
the MID Server and the instance are done via SOAP over HTTPS. Since we use the highly secure and
common protocol HTTPS, the MID Server can connect to the instance directly without having to open any
additional ports on the firewall. The MID Server can also be configured to communicate through a proxy
server if certain restrictions apply.
The MID Server is deployed in the internal network, so it can, with proper login credentials, connect directly
to discoverable devices.
Help the Help Desk is a standard feature available through the self-service Help the Help Desk application.
It gathers information, much as Discovery does, about a single Windows computer by running a script on
that computer. Discovery does many things that Help the Help Desk cannot do.
Discovery and Help the Help Desk differ in capabilities such as the following:
• Automatic discovery by schedule
• Automatic discovery on user login
• Manually initiated discovery
• Windows workstations
• Windows servers *
• Linux systems
• Automatic discovery of computers and devices
• Automatic discovery of relationships between processes running on servers
* Returns information about Windows server machines when Discovery is installed.
e. Discovery checks the ECC queue to find out which ports responded, which identifies the type of machine. For example, if Shazzam detects that the machine is listening on port 22, Discovery treats the machine as a UNIX or Linux machine.
Note: The trigger probe could also be the Horizontal Pattern probe, which tells Discovery to follow the operations in the specified pattern, rather than sending out additional probes. The operations in the pattern cover both the identification and exploration phases.
c. The MID Server checks the ECC queue, retrieves the discovery request, and runs the identification trigger probe.
d. The identification probe accumulates identification data for each device and sends that data back to the instance via the MID Server.
e. Discovery uses the sensors for the identification probe to process the information.
f. The sensors perform the analysis on the CMDB using CI identifiers. The sensors can update existing CIs in the CMDB or create new ones.
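The port examples in this section amount to a small set of classification hints; this sketch simply collects them for illustration (the real mapping lives in Discovery's port probe and classifier configuration):

    // Illustrative port-to-classification hints gathered from this section.
    var portHints = {
        22:  'SSH responds: treat as a UNIX or Linux machine',
        135: 'WMI/RPC responds: treat as a Windows server',
        161: 'SNMP responds: network device or printer'
    };
    console.log(portHints[135]); // 'WMI/RPC responds: treat as a Windows server'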
• UNIX classifications:
• AIX
• ESX
• HP-UX
• Mac OS X
• Solaris
• Linux, including:
• Red Hat
• Fedora
• Debian
• SUSE
• CentOS
• Ubuntu
• Application classifications:
• Web servers:
• Apache Tomcat
• Apache mod_jk module and Apache mod_proxy module
• Microsoft IIS
• Oracle iPlanet
• JBoss
• Cloud-based technology:
• Amazon Web Services
• Microsoft Azure
• Databases:
• MySQL
• DB2
• Microsoft SQL
• MongoDB
• HBase
• Oracle
• PostgreSQL
• SAP HANA*
• Sybase*
• Storage servers:
• SMI-S Storage Server
• SMI-S Storage Switch
• Virtualization:
• Docker*
• Kernel-based Virtual Machine
• Solaris Zones
• vCenter
• Others:
• Cisco Unified Computing System (UCS)*
• HP Service Manager application server*
• HP Operations Manager*
• Oracle JavaSpaces
• GlassFish
• SMI-S WBEM
• JRun*
• LDAP service
• MongoDB Shard (MongoS)*
• NGINX
• Puppet
• SAP ASCS*, SAP Business Objects CMS Server*, SAP CI*, SAP DI*, SAP ERS*, SAP SCS*
• Microsoft SharePoint*
• Microsoft SQL Server Analysis Services*
• Oracle Tuxedo*
• Oracle WebLogic
• IBM WebSphere*, WebSphere Message Broker*, IBM WebSphere MQ*
• Nagios
Items marked with an asterisk (*) have a pattern that Discovery can use to perform horizontal discovery.
Activate Discovery
Discovery is available as a separate subscription from the rest of the platform and requires the Discovery
(com.snc.discovery) plugin.
Role required: None
To purchase a Discovery subscription, contact your ServiceNow account manager or sales representative.
1. In the HI Service Portal, click Service Catalog > Activate Plugin.
2. Fill out the form.
3. Click Submit.
Discovery setup
You can set up Discovery, including a MID Server, by using Guided Setup in your instance. If you do not
want to use Guided Setup, you can set up MID Servers and credentials, and then run discovery from the
Discovery Schedule.
Important: You must complete MID Server configuration before you can launch the Discovery
guided setup.
4. Click Configure to create the MID Server user and follow the instructions in the help pane that
appears on the right edge of the screen.
5. When you have provided the requested information for the MID Server user, click Submit, and then
click Mark as Complete at the bottom of the help pane.
The view returns to the task list. Notice that the circular progress indicator for the category shows 33%
of the MID Server configuration complete. The progress indicator on the left edge of the screen shows
the completion percentage for all the MID Server and Discovery tasks, with each category contributing
50% of the total. In the case of the MID Server, the completed task represents 16% of that category's
50% contribution to the whole.
6. Click Configure to begin the next task: downloading and installing the MID Server.
7. Continue working on the configuration tasks until your MID Server is running and has been successfully validated.
8. Return to the list of ITOM guided setup categories.
Notice that the Discovery setup procedure is now available, and the progress indicators have been
updated appropriately.
9. Click Get Started in the Discovery pane and complete the configuration tasks in the order prescribed,
using the same general process for completing the procedure as you did for the MID Server.
The progress indicator updates as you complete the Discovery setup tasks.
• The MID Server might not have outbound access on port 443 (SSL) or a proxy server might be
preventing TCP communication to the instance.
• Make sure that no firewalls are blocking communication between the MID Server and the instance.
5. Add credentials. Set the Discovery credentials on the instance for all the devices in the network: Windows and UNIX computers, printers, and network gear.
Credentials for Windows devices (using the WMI protocol) are provided by the logon configured for the MID Server service on the Windows server host. Credentials for UNIX, vCenter, and SNMP must be configured on the instance. Discovery automatically determines which credentials work for a particular computer or device.
6. Define and run Discovery schedules.
The Discovery Schedule is the control point for running discoveries. The schedule controls when Discovery runs and defines the MID Server to use, the type of discovery to run, and the IP addresses to query. Create as many schedules as necessary, using different types of discoveries, and configure them to run at any time. Let Discovery run on its configured schedule or manually execute Discovery at any time. These schedules can be set up in a variety of ways, including a single schedule for the entire network or separate schedules for each location or VLAN.
If you do not know the IP address to scan in your network, run a Network Discovery first to discover
the IP networks. Once discovered, you can convert these networks into IP address range sets that you
use in a Discovery Schedule.
Note: For advanced discoveries, such as those requiring load balancing or scanning across
multiple domains, use Discovery Behaviors.
Ensure that your MID Servers are properly configured prior to creating a Discovery schedule.
• Supported applications: The applications that are allowed to use the MID Server. You can use the
ALL application to allow any application to use the MID Server.
• IP ranges: The ranges of IP addresses the MID Server can scan. To find a MID Server match, the IP
range you configure on the Discovery schedule must fall into the ranges that one or more MID Servers
can support. The maximum number of IP addresses that a Discovery schedule can scan is set at
100,000 in the Max range size (glide.discovery.max_range_size) Discovery property. If the number
of IP addresses in a schedule exceeds the value set in this property, the schedule does not run.
• Capabilities: The capabilities that the MID Server supports. You can use the ALL capability to indicate that the MID Server supports every capability.
Ensure that your MID Servers can authenticate on the devices they find and classify configuration items
(CI) properly.
• Credentials: MID Servers use the login credentials you configure to query the devices in the network.
• Classifications: The device and process classifications provided in the base platform are sufficient for most environments. Create additional classifications as needed for the devices, processes, and applications in the network.
1. Use the Discovery Configuration Console to get started with Discovery. The console provides
configuration options which let you choose the types of devices, applications, and software CIs you
want Discovery to find. If you select a CI to exclude from scanning, the instance disables the related
probe or classifier that Discovery uses to identify the CI. See Discovery configuration console on page
23 to get started.
2. Determine what type of discovery to run:
• Run a Configuration item (CI) discovery to find the devices, computers, and applications on your network. This is the most common type of discovery. Run CI discovery from the Discovery Schedule, where you can set up a recurring schedule or run a discovery on demand. The Discovery Schedule also provides configuration options for MID Servers and the Shazzam port probe.
• Run a Network discovery to find the internal IP networks within your organization. If you already
know the IP address ranges in your network, it is not necessary to run Network Discovery. It is
intended for organizations that do not have complete knowledge of the IP addresses available for
Discovery in their networks.
• Run Discovery to find AWS and Azure resources in your organization's cloud.
3. After you run a discovery, monitor the results of the discovery and resolve errors if they occurred:
• Use the Discovery status to see a summary of a Discovery and to access the ECC queue, which
shows probe and sensor activity, as well as the actual XML payload that is sent to or from an
instance.
• Use the Discovery dashboard to monitor ongoing Discovery operations.
If you use Internet Explorer, you must use version 8 or later. The configuration console supports keyboard
navigation and screen readers.
Console overview
Use the switches in the console to determine if a configuration item (CI) is discovered. The switch turns
gray when an item is excluded. When you configure Discovery to ignore a CI, the instance disables the
probe or classifier that explores that CI.
Note: This action does not deactivate the probe or classifier for general use across the system.
The item remains available for other processes.
Device Discovery
You can prevent the discovery of network devices, storage devices, and computers. Blocking device data
at the top level disables the related port probes. For example, blocking Network Devices disables the
SNMP port probe. Blocking devices at the second level of the hierarchy has the following effects:
• Device class categories disable classifiers.
• Device info categories disable probes executed by classifiers.
Data blocked at the second level of the hierarchy affects the nested subclasses. For example, if you block Windows Servers & Computers data, the system also blocks data from nested subclasses, because those classes are all dependent on the port probes that are disabled.
Application Discovery
Disabling the discovery of application data affects all host devices on which the application runs. For
example, if you configure Discovery to ignore databases, no information is gathered for either Linux or
Windows databases. Conversely, if you configure the system to ignore a device type, such as a Windows
server, no databases running on that server are explored, even if they are configured to be discovered.
The instance cannot identify the applications running on a server until it first discovers that server.
When you exclude an item from the Applications section, the system disables the relevant process
classifier.
Software Discovery
You can configure which Windows and UNIX software packages identified by Discovery are stored in the
CMDB.
Discovery provides configurable lists of keywords it can use to filter software packages for Windows and
UNIX operating systems. A filtering keyword can be any term that is present in a software package name
that you want to use to filter the results of a Discovery. You can choose to include or exclude packages
with names that contain the keywords defined in these lists.
Keywords for filtering software are stored in the Software Filter Keys [discovery_spkg_keys] table. The
following default keywords are provided for Windows:
• Hotfix
• Language Pack
• Security Update
Note: The system does not provide default filtering keywords for UNIX software.
The instance creates an update set record for any change you make to the console.
1. Navigate to Discovery > Discovery Definition > Configuration Console.
2. In the Software Filter panel, select the tab for the operating system you want, either Unix or Windows.
3. Add or delete keywords for filtering software packages.
a) To add a keyword, enter it in the field provided and click New Key.
b) To delete a keyword, click the red + icon to the right of the keyword.
Discovery looks for software packages containing these terms in the selected operating system and
either includes or excludes them from the CMDB. The system does not provide default keywords for
filtering UNIX software packages, but provides these default keywords for Windows:
• Hotfix
• Language Pack
• Security Update
4. Select whether to Include or Exclude software packages with the keywords in their names.
• Include: Only add software packages that match the keywords.
• Exclude: Add all software packages except those matching the keywords.
5. You can switch off the software filter for either UNIX or Windows.
When the filter is disabled for an operating system, all discovered software packages for that operating
system are added to the CMDB, with no filtering applied.
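The include/exclude behavior described above amounts to a simple keyword test. The sketch below is an illustration only; it assumes matching is a case-insensitive substring test, which the platform may implement differently.

    // Sketch of the software filter: keep or drop a package name based on
    // keyword matches and the Include/Exclude mode (assumed case-insensitive).
    function keepPackage(name, keywords, mode) {
        var matched = keywords.some(function (k) {
            return name.toLowerCase().indexOf(k.toLowerCase()) !== -1;
        });
        return mode === 'include' ? matched : !matched;
    }

    // Default Windows keywords from the Software Filter Keys table:
    var keys = ['Hotfix', 'Language Pack', 'Security Update'];
    keepPackage('Security Update for Windows (KB123456)', keys, 'exclude'); // false: filtered out
    keepPackage('Mozilla Firefox 45.0', keys, 'exclude');                   // true: stored in the CMDB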
Protocol Category [discovery_category_protocol]: Applies to top-level Device categories, such as:
• Network Devices
• Windows Servers & Computers
• Unix Servers & Computers
Disables: Port probes [discovery_port_probe], via the active_discover_cis field. Turning off this field prevents Discovery from attempting to use that protocol to discover CIs, but allows the port probe to be used elsewhere for scanning.
Disabling classifiers
Disabling Discovery for a specific configuration item does not prevent Discovery from finding the CI, but
prevents the system from classifying and entering the CI into the CMDB. The system stops Discovery
when the disabled classifier is detected. This prevents the system from skipping a higher order of
classification and identifying the CI at a lower level of classification. For example, if you disable Discovery
of Windows 2008 Servers in the console, the system stops Discovery when it detects the disabled server
classifier and does not run the Windows Computer classifier.
Schedule discovery
A schedule determines what horizontal discovery searches for, when it runs, and which MID Servers are
used.
Roles required: admin, discovery_admin
You can use a Discovery schedule to launch horizontal discovery, which uses probes, sensors, and
pattern operations to scan your network for CIs. Use this procedure to create a schedule manually from the
Discovery Schedule form.
Service Mapping also provides a Discovery schedule for top-down discovery. See Create a discovery
schedule for an application CI on page 638 for more information.
Use the Discovery Schedule module in the Discovery application to:
• Configure device identification by IP address or other identifiers.
• Determine if credentials will be used in device probes.
• Name the MID Server to use for a particular type of discovery.
• Create or disable a schedule that controls when the discovery runs in your network.
• Configure the use of multiple Shazzam probes for load balancing.
• Configure the use of multiple MID Servers for load balancing.
• Run a discovery manually.
Field Description
MID Server selection method: Select the method that Discovery uses to select a MID Server:
• Auto-Select MID Server: Allow Discovery to select the MID Server automatically based on the Discovery IP Ranges you configure. To find a matching MID Server, you must configure MID Servers to use:
• The Discovery application, or ALL applications. This authorizes Discovery to access the MID Server.
• The IP Range that includes the ranges you configure on the Discovery Schedule.
MID Server: Select the MID Server to use for this schedule. This field is available if MID Server selection method is set to Specific MID Server, or if you discover IP addresses, networks, or web services.
MID Server Cluster: Select the MID Server cluster to use for this schedule. This field is available if MID Server selection method is set to Specific MID Cluster.
Behavior: Select a behavior configured for the MID Servers in your network. This field is available only if MID Server selection method is set to Use Behavior.
Active: Select the check box to enable this schedule. If you clear the check box, the schedule is disabled, but you can still run a discovery manually from this form, using the configured values.
Location: Choose a location to assign to the CIs that are discovered by this schedule. If this field is blank, then no location is assigned.
Max run time: Set a time limit for running this schedule. When the configured time elapses, the remaining tasks for the discovery are canceled, even if the scan is not complete. Use this field to limit system load to a desirable time window. If no value is entered in this field, the schedule runs until complete.
Run and related fields: Determine the run schedule of the discovery. Configure the frequency in the Run field and the other fields that appear to specify an exact time.
Note: The run time always uses the system time zone. If you add the optional Run as timezone field, it has no effect on the actual run time.
Include alive: Select this check box to include alive devices, which are devices that have at least one port that responds to the scan, but no open ports. Discovery knows that there is a device there, but has no information about it. If this check box is cleared, Discovery returns all active devices, which are devices that have at least one open port.
Log state changes: Select this check box to create a log entry every time the state changes during a discovery, such as a device going from Active to Classifying. View the discovery states from the Discovery Devices related list on the Discovery Status form. The Completed activity and Current activity fields display the states.
Shazzam batch size: Enter the number of IP addresses that each Shazzam probe can scan. Dividing the IP addresses into batches improves performance by allowing classification for each batch to begin after the batch completes, rather than after all IP addresses have been scanned. The probes run sequentially. For example, if this value is set to 1000 and a discovery must scan 10,000 IP addresses using a single MID Server, it creates 10 Shazzam probes, with each probe scanning 1000 IP addresses. By default, the batch size is 5000. A UI policy enforces a minimum batch size of 256, because batch sizes below 256 IP addresses do not benefit from clustering; the policy converts any value below 256 to a value of zero. The value for this field cannot exceed the value defined in the maximum range size property for the Shazzam probe.
Shazzam cluster support: Select the check box to distribute Shazzam processing among multiple MID Servers in a cluster and improve performance. Works with the Shazzam batch size. For example, if the schedule covers 100,000 IP addresses and the cluster has 10 MID Servers, each MID Server processes 10,000 addresses (100,000 IP addresses / 10 MID Servers). If the Shazzam batch size is the default 5,000 IP addresses per probe, the schedule runs two Shazzam probes per MID Server (10,000 IP addresses / 5,000 per batch). This field is available starting with the Eureka release.
Use SNMP Version: Designate the SNMP version to use for this discovery. Valid options are v1/v2c, v3, or All.
Quick ranges: Define IP addresses and address ranges to scan by entering IP addresses in multiple formats (network, range, or list) in a single, comma-delimited string. For more information, see Create a Quick IP range for a Discovery schedule on page 67.
Discover now: Use this link to immediately start this discovery.
Discovery IP Ranges: This related list defines the ranges of IP addresses to scan with this schedule. If you are using a simple CI scan (no behaviors), use this related list to define the IP addresses to discover.
Discovery Range Sets: This related list defines each range set in a schedule to be scanned by one or more Shazzam probes.
Discovery Status: This related list displays the results of current and past Discovery schedule runs.
Daily: Runs every day. Use the Time field to select the time of day this discovery runs.
Weekly: Runs on one designated day of each week. Use the Time field to select the time of day this discovery runs.
Monthly: Runs on one designated day of each month. Use the Day field to select the day of the month. Use the Time field to select the time of day this discovery runs. If the designated day does not occur in the month, the schedule does not run that month. For example, if you designate day 30, the schedule does not run in February.
Periodically: Runs every designated period of time. Use the Repeat Interval field to define the period of time in days, hours, minutes, and seconds. The first discovery runs at the point in time defined in the Starting field. The subsequent discoveries run after each Repeat Interval period passes.
Once: Runs one time, as designated by the date and time defined in the Starting field.
On Demand: Does not run on a schedule. Click the Discover now link to run this discovery.
If you selected a specific MID Server, go to the MID Server dashboard to verify that the MID Server you
select is up and validated.
Note: DiscoverNow does not currently support IP network discovery. Make sure that you enter
a single IP address only and not an entire network, such as 10.105.37.0/24.
When a MID Server is assigned to the subnet containing the target IP address and is currently in an operational status of Up, its name appears automatically in the MID Server field. If multiple MID Servers are found, the system selects one for you. You can overwrite the value in the MID Server field if you want to select a different MID Server.
Attention: If the selected MID Server is part of a load balanced cluster and becomes
unavailable for any reason, the instance does not assign another MID Server from that cluster
to the quick Discovery. You must select another MID Server from the list of appropriate MID
Servers.
4. If no MID Server is defined for that network, select one from the list of available MID Servers.
The discoveryFromIP method takes two arguments, IP and MID Server. The IP argument is
mandatory, but the MID Server argument is optional. To choose the MID Server, supply either the
sys_id or name of the MID Server as the argument. If you do not name a MID Server, the system
attempts to find a valid one automatically. A valid MID Server has a status of Up and is capable of
discovering the given IP address. If the system finds a valid MID Server and runs a discovery, the
discoveryFromIP method returns the sys_id of the discovery status record. If no MID Server is capable
of discovering this IP address, the method returns the value undefined.
If you manually specify the TARGET_MIDSERVER, the system validates the given value and ensures that the MID Server table contains the specified MID Server record. If the validation passes, the discoveryFromIP method returns the sys_id of the discovery status record. If the validation fails, the method returns the value undefined.
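A minimal usage sketch based on the behavior described above. The arguments and return values come from this section; the assumption that discoveryFromIP() is exposed on the global Discovery script include, and the example MID Server name, are illustrative.

    // Quick-discover a single IP address from a server-side script.
    // 'mid.server.dc1' is a hypothetical MID Server name.
    var statusSysId = new Discovery().discoveryFromIP('10.105.37.15', 'mid.server.dc1');
    if (statusSysId)
        gs.info('Discovery started; status record: ' + statusSysId);
    else
        gs.info('No MID Server is Up and capable of discovering this IP');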
2. To view the actual payload of a probe, click the XML icon in a record in the ECC Queue.
3. Use the Discovery Log form for a quick look at how the probes are doing. To display the Discovery
Log, navigate to Discovery > Discovery Log.
Note:
If you cancel an active discovery, note the following:
• Existing sensor jobs that have started processing are immediately terminated.
• The existing sensor jobs that are in a Ready state, but have not started processing, are
deleted from the system.
These steps are followed when you select Auto-Select MID Server for the MID Server selection method on the Discovery form.
1. Discovery looks for a MID Server with the Discovery application that also has an appropriate IP range configured.
2. If no MID Servers meet these criteria, it looks for a MID Server that has the ALL application and an appropriate IP range configured.
3. If more than one MID Server meets the above criteria, Discovery chooses the first MID Server with the status of Up. If more than one MID Server is Up, it randomly picks one.
4. If none are Up, it uses the default MID Server specified for the Discovery application, assuming it is Up.
5. If no default MID Server is specified for the Discovery application, it uses the default MID Server specified for the ALL application, assuming it is Up.
6. If no default MID Server is specified for either application, Discovery cycles through the same steps above, but looks for MID Servers with the status of Paused or Upgrading.
Note: When a MID Server is paused or upgrading, it does not actually process commands until it returns to the status of Up.
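The precedence above can be condensed into a small sketch over in-memory candidates; this mirrors the documented order only and is not the platform's selection code (the candidate object shape is an assumption).

    // Auto-select sketch. Candidates look like:
    // { app: 'Discovery'|'ALL', inRange: true|false,
    //   status: 'Up'|'Paused'|'Upgrading'|..., isDefaultFor: 'Discovery'|'ALL'|null }
    function autoSelectMidServer(candidates) {
        function inRangeWith(app, statuses) {
            var c = candidates.filter(function (m) {
                return m.app === app && m.inRange && statuses.indexOf(m.status) !== -1;
            });
            return c.length ? c[Math.floor(Math.random() * c.length)] : null; // random among ties
        }
        function defaultFor(app, statuses) {
            var c = candidates.filter(function (m) {
                return m.isDefaultFor === app && statuses.indexOf(m.status) !== -1;
            });
            return c.length ? c[0] : null;
        }
        var passes = [['Up'], ['Paused', 'Upgrading']]; // the second pass mirrors step 6
        for (var i = 0; i < passes.length; i++) {
            var m = inRangeWith('Discovery', passes[i]) || inRangeWith('ALL', passes[i]) ||
                    defaultFor('Discovery', passes[i]) || defaultFor('ALL', passes[i]);
            if (m) return m;
        }
        return null; // no suitable MID Server
    }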
These steps are followed when you select Specific MID Cluster for the MID Server selection method on
the Discovery form, and the cluster is a load balancing cluster:
1. Discovery uses the first MID Server in the cluster that it finds with the status of Up. If more than one MID Server is Up, Discovery randomly picks one.
2. If it cannot find any MID Servers that are Up, Discovery looks for MID Servers in the cluster with the status of Paused or Upgrading.
Note: Discovery ignores the default MID Server for the Discovery and ALL applications when
selecting a MID Server from the cluster.
During the port scan phase, Discovery collects all of the target IP addresses and splits them equally
between MID Servers matching the criteria mentioned above (meaning the MID Servers are qualified
to do the port scan work). The Shazzam batch size, which you configured on the Discovery schedule,
determines the number of IP addresses that each Shazzam probe can scan. This helps determine how
much work each MID Server does during the port scan phase.
For example, if there are 16,000 IP addresses to scan among three qualified MID Servers, and you use
the default Shazzam batch size of 5000, two MID Servers each handle 5000 IP address scans (1 Shazzam
probe each), and the other MID Server handles 6000 IP address scans by launching two Shazzam probes.
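The arithmetic in this example is consistent with cutting the IP list into Shazzam batches first and then dealing the batches out across the qualified MID Servers. The sketch below reproduces that math; it is an illustration, not the platform's distribution code.

    // Plan Shazzam work: batch the IPs, then deal batches across MID Servers.
    function shazzamPlan(totalIps, midServers, batchSize) {
        if (batchSize > 0 && batchSize < 256) batchSize = 0; // UI policy: values below 256 become 0
        var batches = [];
        for (var left = totalIps; left > 0; left -= batchSize || left)
            batches.push(Math.min(batchSize || left, left));
        var perMid = [];
        for (var i = 0; i < midServers; i++) perMid.push(0);
        batches.forEach(function (b, j) { perMid[j % midServers] += b; });
        return perMid; // IP count handled by each MID Server
    }

    shazzamPlan(16000, 3, 5000); // [6000, 5000, 5000]: one MID Server runs two probes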
Network Discovery
Network Discovery discovers the internal IP networks within your organization.
Note: If you already know the IP address ranges in your network, it is not necessary to run
Network Discovery. This procedure is intended for organizations that do not have complete
knowledge of the IP addresses available for Discovery in their networks.
Discovery uses the information it gathers to update routers and Layer 3 switches in the CMDB. Network
Discovery is performed by a single MID Server that begins its scan on a configurable list of starting
(or seed) routers. Typically, the starting routers are the default routers used by all the MID Server host
machines in the network, but can be any designated routers. The MID Server uses the router tables on
the starting routers to discover other routers in the network. The MID Server then spreads out through the
network, using router tables it finds to discover other routers, and so on, until all the routers and switches
have been explored.
After running Network Discovery, convert the IP networks it finds into IP address Range Sets that you use
in Discovery schedules to discover configuration items (CI).
Configure SNMP credentials or (optionally) SSH credentials. Port 161 must be open for SNMP access, and port 22 for SSH access.
8. Add starting routers to the schedule in the Discovery Range Sets list.
a) Click the Network Discovery Auto Starting Routers link to populate the list with the starting
router for each MID Server in your network.
The Discovery Status page appears, displaying the progress of the conversion. The system
increments the Started and Completed count of IP networks, until all the networks are converted.
Note: The Amazon Web Services (com.snc.aws) plugin does not need to be active for Discovery
to run on AWS accounts.
Configure the AWS Admin account for your Amazon Web Service
Create an account record in the instance that matches your admin account in AWS. This account record is
necessary for the instance to reach your cloud resources in AWS.
Role required: admin
1. Navigate to AWS Discovery > Accounts.
2. Click New.
3. Use the following information to fill out the AWS Account form:
• Name: Descriptive name for this account.
• Account ID: The 12-digit administrator account ID that you receive from Amazon. The account ID can be found in your Billing Management Console in AWS, under the Account field.
4. Click Submit.
Note: Do not fill in the MID Server field, as MID servers are not used for AWS discovery.
3. Click Submit.
4. Navigate to AWS Discovery > Discovery Schedules.
5. Click the schedule that was just created for this discovery.
6. Under Related Links, in the Web Service Accounts tab, click Edit.
7. Move the AWS account that is to be discovered from the Collection pane to the Web Service
Accounts List pane.
8. Click Save.
Azure discovery
After you run discovery on Azure resources, check the Azure Resource Type [azure_resource_type] table,
which contains the information on all the Azure Resources. Check the Azure Datasource Type Mapping
[azure_datasource_type_mapping] table to see more attributes of discovered Azure CIs.
4. Click Submit.
Field Value
Name: Enter the name of the service principal to register with the instance.
Tenant ID and Client ID: Paste the values that you obtained from the Azure portal.
Authentication Method: Select Client secret.
Secret key: Paste the secret key that was generated while creating the Azure Service Principal. This field appears when Authentication method is Client secret.
The instance discovers your organization's Azure subscriptions. This discovery process discovers
only which subscriptions the Service Principal has access to. The process does not discover other
resources.
Field Value
Name: Name of the Azure subscription that uses the Discovery schedule that is defined on this page.
Discover: Type of resources to discover.
MID Server: MID Server that manages the Discovery process.
Active: Check box to activate this schedule.
Max run time: Overall time period over which the schedule applies. For example, enter 365 days if you want this schedule to be used for one year. Specify a period less than a day using hh:mm:ss format.
Run: How often the schedule runs.
5. Click Submit.
Discovery runs and populates your CMDB (configuration management database), including images that you can later approve to offer to cloud users in the catalog. The current state of each resource, if applicable, is updated accordingly (for example, for Azure virtual machine instances, Azure virtual networks, and Azure subnets).
Discovery configuration
Configure the elements that Discovery needs to investigate your network, such as credentials, schedules,
and IP addresses.
Discovery can run on a regular, configurable schedule, or it can be launched manually. A discovery runs over a specified IP address range, which tells the Discovery application which specific devices to investigate.
To retrieve useful information, Discovery needs credentials (usually a user name and password pair) for
devices within a particular range so that Discovery can connect to and run various probes on the devices it
finds. Discovery compares the devices it finds with configuration items (CI) in the CMDB and updates any
matching devices. If Discovery does not find a matching CI in the CMDB, it creates a CI.
Use the following links to configure Discovery for your environment. You do not need to perform all these
procedures to run a Discovery. The platform provides many defaults you can use to explore your network
that are suitable for most discoveries. To get started quickly with Discovery, you can use Guided Setup,
which expedites the setup of a basic Discovery.
• Discovery properties on page 55
• Discovery IP address configuration on page 66
• Discovery domain separation on page 76
• SNMP support for Discovery on page 78
• Discovery and SCCM together on page 81
• Discovery classifiers on page 84
• Discovery identifiers on page 114
• Discovery status on page 138
• Discovery Dashboard on page 156
• Discovery customization on page 168
Discovery properties
Discovery properties allow you to control several aspects of the horizontal discovery process.
Field Description
glide.discovery.discoverable.network.max.netmask.bits
Maximum netmask size for discoverable
networks (bits). The maximum number of bits
in a regular netmask for networks that will be
discovered by Network Discovery. By "regular
netmask" we mean a netmask that can be
expressed in binary as a string of ones followed
by a string of zeroes (255.255.255.0 is regular,
255.255.255.64 is irregular). Regular networks
are commonly expressed like this: 10.0.0.0/24,
which means a network address of 10.0.0.0
with a netmask of 255.255.255.0. Larger bit
numbers mean networks with smaller numbers
of addresses in them. For example, the
network 10.128.0.128/30 has four addresses
in it: one network address (10.128.0.128),
one broadcast address (10.128.0.131), and
two usable addresses (10.128.0.129 and
10.128.0.130). Small networks like this are
commonly configured in network gear to
provide loopback addresses or networks used
strictly by point-to-point connections. Since
these sorts of networks generally don't need
to be discovered by Network Discovery, it
would be useful to filter them out. By setting
this property to a value of 1 through 32, you
can limit the sizes of regular networks that
are discovered. Setting it to any other value
will cause all networks to be discovered.
Irregular networks are always discovered. The
default value is 28, which means that regular
networks with 8 or fewer addresses will not be
discovered.
• Type : integer
• Default value: 28
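As a rough illustration of the arithmetic this property relies on, a /29 network (8 addresses) is filtered out at the default limit of 28, while a /24 (256 addresses) is discovered. This sketch is illustrative, not the property's implementation:

    // Address count and filter check for a 'regular' netmask of N bits.
    function addressCount(maskBits) {
        return Math.pow(2, 32 - maskBits); // includes network + broadcast addresses
    }
    function isDiscoverable(maskBits, maxBits) {
        if (maxBits < 1 || maxBits > 32) return true; // any other value: discover everything
        return maskBits <= maxBits;
    }

    addressCount(30);       // 4, as in the 10.128.0.128/30 example
    isDiscoverable(24, 28); // true:  256 addresses
    isDiscoverable(29, 28); // false: 8 addresses, filtered out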
glide.discovery.max_concurrent_invocations_per_schedule
Maximum concurrent invocations per
schedule: Prevents an unbounded number of
invocations from inundating the system when
a schedule takes longer than the time between
invocations. The value is an integer defining
the maximum number of automated invocations
of the same schedule that may proceed at one
time. If the limit has been reached, subsequent scheduled invocations are cancelled. The
default value is 3. A value of 0 or any negative
number will disable this restriction.
• Type : integer
• Default value: 3
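An illustrative version of the guard this property describes (not the platform's code):

    // Cancel a new automated run once the in-flight limit for the schedule is hit.
    function shouldStartInvocation(activeRuns, maxConcurrent) {
        if (maxConcurrent <= 0) return true; // 0 or a negative value disables the restriction
        return activeRuns < maxConcurrent;
    }

    shouldStartInvocation(3, 3); // false: three runs are still active, so this one is cancelled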
Note: If you do not know the IP addresses in the network, run Network Discovery on page 41
first to determine the IP networks. Then, convert the IP networks into IP address range sets.
IP address list
Use IP address lists to add individual addresses for Discovery to query. These addresses should not be
included in any existing IP range or IP network. You can enter the IP address of the device or a host name
(DNS name). If you enter a host name, it must be resolvable from the MID Server system.
IP address range
You can define arbitrary ranges of IP addresses for Discovery to query. This is a good way to include
selected segments of a network or subnet. However, Discovery has no way of knowing if the IP range
includes addresses for private networks or broadcast addresses and scans all the addresses in the range.
If the network and broadcast addresses are included, then the results are inaccurate. For this reason,
discoveries configured to detect IP networks are generally more accurate than those configured for IP
address ranges. Include only those IP addresses in your range that are reserved for manageable devices
on the public network.
IP network
An IP network includes the range of available IP addresses in that network, including the network address
(the lowest address in the range) and the broadcast address (the highest address in the range). An
example of a class C network range is 192.168.0.0 to 192.168.0.255. In the Range Set form, this network
can be entered with either of the following notations:
• 192.168.0.0/24
• 192.168.0.1/255.255.255.0
This notation indicates that Discovery is scanning an IP network, and Discovery does not scan the highest
and lowest numbers in the range. This prevents significant errors from being introduced into the Discovery
data by the broadcast address, which returns all the devices in the network, and the network address,
which can add an arbitrary number of redundant devices. This built-in control makes IP networks the best
method of defining which IP address ranges to query.
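The scannable span for an IP network follows directly from its CIDR notation: drop the lowest (network) and highest (broadcast) addresses. The sketch below illustrates that arithmetic; it is not Discovery's code.

    // Convert dotted-quad <-> integer and compute the range actually scanned.
    function ipToInt(ip) {
        return ip.split('.').reduce(function (n, octet) { return n * 256 + Number(octet); }, 0);
    }
    function intToIp(n) {
        return [n >>> 24, (n >>> 16) & 255, (n >>> 8) & 255, n & 255].join('.');
    }
    function scanRange(cidr) {
        var parts = cidr.split('/');
        var size = Math.pow(2, 32 - Number(parts[1]));
        var network = Math.floor(ipToInt(parts[0]) / size) * size;
        return { first: intToIp(network + 1), last: intToIp(network + size - 2) };
    }

    scanRange('192.168.0.0/24'); // { first: '192.168.0.1', last: '192.168.0.254' }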
1. Click the Quick Ranges related link on the Discovery Schedule form.
2. Enter the IP ranges and specific IP addresses to scan.
3. Click Make Ranges.
Note: The Quick Range interface is for entering IP addresses only and cannot be used to edit
IP addresses that have already been submitted.
6. Navigate to System Import Sets > Create Transform Map and map the items in the Excel spreadsheet to the CMDB fields in the target Discovery IP Range [discovery_range_item] table.
7. Give the Transform Map a unique and descriptive name.
8. Submit the form, and then click New in the Field Maps Related List.
9. Map the fields from the Excel spreadsheet to the fields in the Discovery IP Range
(discovery_range_item) table using the following field names:
• RangeName to Discovery Range
• IP to Network IP
• mask to Network Mask (or Bits)
• type to Type. Note that the type field is case sensitive.
The display names are different from the actual field names that appear in the Field Maps list.
10. Click the Mapping Assist Related Link and use the lists that appear to resolve the fields between the
table and the data source (the Excel spreadsheet in this example).
This produces the complete map between the data source and the fields to populate in the CMDB.
11. Click Save.
The view returns to the Table Transform Map form.
12. Click Transform in the Related Links to move the data into the proper fields in the Discovery IP Range
(discovery_range_item) table.
The imported IP ranges are available now for use in any advanced Discovery schedule.
Exclude IP addresses
Administrators can exclude specific IP addresses in a range or network from a Discovery Schedule.
For example, you might exclude a subnet containing devices restricted from interacting with other devices
or exclude a device with an intentionally unorthodox configuration that causes an authentication issue each
time it is discovered.
To exclude an IP address:
1. In the Discovery Schedule form, click the link for the Type of IP address range that contains the
address to exclude.
For example, to exclude 10.10.10.28, select the IP Network for 10.10.10.0/24, which is the range of IP
addresses that contains the target address.
The excluded IP address appears in the Discovery Range Item IPs related list for that IP address
Type.
7. Click Update to save the excluded address and return to the Discovery Schedule.
Discovery implements data domain separation through the MID Server by impersonating the MID Server user during sensor processing. Discovery uses the domain that the MID Server user is in to determine which domain the discovered data should be put into. Discovery configuration information, including classifiers, identifiers, probes, and sensors, is not domain separated.
Domain separation for Discovery is available starting with the Helsinki release.
Note: Discovery of Amazon Web Services and Microsoft Azure cloud resources does not support
domain separation because MID Servers are not used.
You can create versions of these specific MID Server policy records that only a MID Server from the
same domain can use. This process separation is supported for records in tables that extend MID Server
Synchronized Files [ecc_agent_sync_file]:
• MID Server MIB File [ecc_agent_mib]
• MID Server JAR File [ecc_agent_jar]
• MID Server Script File [ecc_agent_script_files]
By default, all records in these tables are members of the global domain. A user can override the default
global domain and create a version of these policies for use in the user's own domain.
Note: Attachments on MIB or JAR file records might not appear as they did in a non-domain
separated environment. This occurs because the Attachments [sys_attachment] table is data
separated. When data is separated between domains, a record in a child domain cannot access
records in a parent domain.
Records in all tables that extend the Base Configuration Item [cmdb] table can be domain separated. In
addition, records in these tables can also be domain separated:
• Serial Number [cmdb_serial_number]
• TCP Connection [cmdb_tcp]
• Fibre Channel Initiator [cmdb_fc_initiator]
• Fibre Channel Targets [cmdb_fc_target]
• IP Address to DNS Name [cmdb_ip_address_dns_name]
• Service [cmdb_ip_service_ci]
• KVM Virtual Device [cmdb_kvm_device]
• Load Balancer Service VLAN [cmdb_lb_service_vlan]
• Load Balancer VLAN Interface [cmdb_lb_vlan_interface]
• Switch Port [cmdb_switch_port]
When the MID Server connects to the instance, the MID Server record is created in the proper domain.
If you must change the MID Server domain:
1. Stop the MID Server and delete the ecc_agent record.
2. Update the MID Server config.xml with the new user in the new domain and restart the MID Server
service.
If you need to create versions of specific MID Server files that only MID Servers in your domain can use:
1. Open or create a record in one of these MID Server modules:
• SNMP MIBs
• JAR Files
• Script Files
2. Update an existing domain policy or submit a new record. The system overrides the global domain
configuration with your domain and saves a new copy of the record. The MID Servers in your domain
can only access records in the global domain and records overridden by your specific domain.
Note: If records are not removed before the switch, duplicate records may exist. In the event that
AI and non-AI data becomes mixed, clear the Software Installation table.
The ServiceNow SCCM integrations are self-contained and can exist independently. They each use their own import set tables, data sources, and transform maps. However, all SCCM integrations transform data into the same tables within the ServiceNow CMDB. To avoid the data being overwritten by another source:
• Use one SCCM integration and disable all other SCCM scheduled imports.
• Perform a full import to clear the cmdb_software_instance table, the cmdb_sam_sw_install table, and
other tables of old SCCM data.
Note: It is possible to configure each plugin to integrate with SCCM 2007 or 2012 because the mechanism of the integration is the same: both leverage Java Database Connectivity (JDBC) imports. However, the data sources need to be modified if a plugin is used with an SCCM version it was not written for. Starting with Fuji, it is recommended to use the plugin version that corresponds to the SCCM version it is designed to integrate with.
For instructions on disabling the current SCCM import schedule, see Upgrade the SCCM integration version.
Important: To improve the performance of your initial SCCM import, you can prevent the
system from checking against deleted software prior to the import date. Navigate to Integration
- Microsoft SCCM <version> > Data Sources > SCCM <version> Removed Software and
enter the current date in the Last run datetime field. This field is populated automatically for each
subsequent run of the removed software data source.
Important: If SCCM Integration is active before Discovery is enabled and the property is
enabled, Discovery ignores the population of software for any CIs that are also imported
through SCCM. If Discovery is enabled before the SCCM Integration, it is possible for software
installation data from both sources to be mixed.
Discovery classifiers
A classifier tells Discovery which probes to trigger for the identification and exploration phases of
discovery. Classifiers can also trigger the Horizontal Pattern probe, which launches a pattern, rather than
additional probes, for identification and exploration.
The classifier essentially starts the identification stage. Discovery uses it after the classification probe
returns important parameters to the instance that tell Discovery what to do next.
In most cases, you do not need to create a classifier or modify a classifier. But if you are having trouble
with Discovery, you might want to check the conditions that determine when a classifier runs based on the
parameters the classification probe returns to the instance. Or if you want to discover a new type of CI that
Discovery does not already find, you can create your own classifier.
Discovery classification can be broken down into three types: device classification, process classification,
and IP address (or IP scan) classification.
See Classification for IP address discovery on page 102 for more details about the parameters available
to classifiers for this type of discovery.
Classifier criteria
Classifiers also provide criteria that you can use to specify when Discovery should use the classifier,
under conditions that you define. The criteria are based on the parameters that a classify probe returns to
Discovery. Each criterion consists of a parameter, an operator, and a value.
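For example, a criterion might pair a parameter such as name with the operator contains and the value
Linux (an illustrative pairing, not taken from the base system); the parameters available for each classifier
type are listed later in this section.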
Discovery can use patterns, rather than probes, to identify and explore CIs. Discovery triggers patterns
from the Horizontal Discovery probe, which can be specified on a classifier. You can create your own
patterns and add them, via the Horizontal Discovery probe, to a classifier. See Add the Horizontal Pattern
probe to a classifier on page 210 for instructions. You might already be using one of the out-of-box
patterns that are provided with Discovery. You can verify this by looking at the classifier to see if the
Horizontal Pattern Probe is specified.
To log debugging information about classifications, add the following system property. The resulting log
entries list the name of each classifier that runs, along with all the names and values that are available to
the criteria in the classifier.
System property Description
• Create or modify a discovery classifier if you want to classify CIs that Discovery does not already
classify, or trigger other probes that are not already on a classifier. You can modify classifiers that
Discovery uses in standard CI discovery, process classifiers for applications, and classifiers based on
IP address scans.
Before you modify any classifiers, review the parameters that are available for each type of classifier.
• If Windows machines are on your network, you can use the WinRM protocol, rather than WMI, for more
efficient lightweight data transfer and remote command execution. By default, Discovery uses WMI.
For instructions on the classifier modifications you can make to use WinRM, see Use Windows Remote
Management for classification on page 107.
• If you have Windows computers that are acting as servers and you want them to be classified by their
function rather than by the operating system, you can make changes to the criteria of the Windows
classifier. See Reclassify a Windows Workstation machine as a server on page 109 for instructions.
3. Click New in the list of classifications for the type you selected.
4. Fill out the form fields (see table):
Example script (JS):
g_probe_parameters['node_port'] = 16001;
Related lists
• Classification Criteria: Criteria formed from specific parameters and the values that they must contain
to match devices that Discovery finds in the network with CIs in the CMDB. See Discovery classification
parameters on page 112 for a list of the parameters you can use for the criteria.
• SNMP OID Classifications: Unique fingerprints of all the SNMP devices that ship with the base
Discovery product. Users can add OIDs for SNMP devices not in this list. This related list is available
only for SNMP devices.
• Triggers probes: Probes that Discovery launches to identify and explore detailed information about a CI
that it has classified in the network. If you want to use patterns for horizontal discovery, add the
Horizontal Pattern probe in the Probe column, and then specify your pattern in the Pattern column.
b) Type in the appropriate parameter, select an operator, and enter a value to use for this
classification.
This example shows a completed CI classification form with exploration probes defined.
For instructions on creating probes, see Discovery probes and sensors on page 211. The
probes defined here are launched when the device is properly classified, unless Discovery
is configured to stop after classification.
By default, application names are in this format: <name of the process classifier>@<name of the
computer CI where the process resides>.
For example, for a MySQL server running on a computer called machineA, the application is named
mysql@machineA.
You can use the On classification script field in the process classifier record to change the default
application name, for example to include a suffix, to match your business needs. Consider a process with
these attributes:
name: "eclipse"
command: "/glide/eclipse/Eclipse.app/Contents/MacOS/eclipse"
parameter: "-psn_0_1884620"
If an Eclipse application runs on a computer called machineA, ServiceNow names the application
eclipse@machineA. The following script appends the parameter value as part of the application name.
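As a minimal sketch of such a script, assuming the On classification script exposes the new application CI
as current and the matched process record as process (both variable names are assumptions for
illustration):
// Append the process parameter to the default application name.
// Illustrative result: eclipse@machineA_-psn_0_1884620
current.name = current.name + '_' + process.parameters;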
Sometimes it is useful to pass values to the triggered probes in the process classification. You can do
this by creating a custom script that defines a name/value pair for the g_probe_parameters object. For
example:
g_probe_parameters['processCommand'] = command;
In this example, when a classification record triggers a probe, the script passes the probe a parameter
called processCommand with the value of the command variable.
Script objects
Field Description
4. Click Submit.
Note: Optional fields that can be used to form parameters appear as child tags beneath the default
fields. Examples of these are the sysDescr and banner_text fields.
Parameters are expressed in the form of <portprobe.service.field>. The value for field can come from any
of the fields or child tags in the XML file. For example, the following parameters classify a device as a UNIX
server and detect an installation of MySQL:
• ssh.ssh.result
• mysql.mysql.result
These parameters were derived from the values in the following XML file generated by a Shazzam probe
conducting an IP Scan. The result field returned a value of open for ports 22 and 3306 on the target device.
The service field indicates the services that normally communicate over those ports.
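As an illustrative sketch only (the element names are assumptions, not the exact Shazzam schema), such
a payload might resemble:
<ssh>
  <port>22</port>
  <service>ssh</service>
  <result>open</result>
</ssh>
<mysql>
  <port>3306</port>
  <service>mysql</service>
  <result>open</result>
</mysql>
Read with the <portprobe.service.field> convention, this yields ssh.ssh.result = open and
mysql.mysql.result = open.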
The sysDescr field can provide additional information about devices, depending upon the manufacturer.
This XML file from the Shazzam probe reveals the following about port 161 on the device at IP
10.10.11.149:
In the classification criteria, we can construct the following parameter with sysDescr that returns an Apple
AirPort wireless router:
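Following the <portprobe.service.field> convention, the criterion would presumably pair the parameter
snmp.snmp.sysDescr with an operator such as contains and a value of AirPort (a hedged reconstruction;
the exact operator and value depend on the device).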
Note: IP address classification attempts to classify devices when no credentials are available;
however, Discovery will use credentials when they are available, even when IP address
classification is configured.
5. Save the record and navigate to Discovery Definition > Port Probes and click New.
6. Create a port probe using the new IP Service you just defined.
3. Create a new classification and add the parameter for IP address scanning.
In this example, we have created an application classifier that will discover Apache Tomcat, based
on the port information we received from the Nmap scan. See the following section for details about
forming parameters for IP address scans.
4. Optional: In the Classification Criteria related list, create a criteria filter that determines when this
classifier applies to the discovered devices. See the IP address classification parameters for a list of
the parameters you can use.
Run an IP address discovery through the Discovery Schedule to search for devices.
7. Click Submit.
The system displays the MID Server record.
8. Optional: Add other Windows Remote Management protocol parameters as needed.
mid.powershell_api.winrm.remote_port: Specifies the communications port the MID Server uses to
communicate with the WinRM protocol. Default value: 5985. MID Server restart required: No.
mid.powershell_api.session_pool.target.max_size: Specifies the maximum number of sessions allowed in
the pool per target host. Default value: 2. MID Server restart required: Yes.
mid.powershell_api.session_pool.max_size: Specifies the maximum number of sessions allowed in the
session pool. Default value: 100. MID Server restart required: Yes.
mid.powershell_api.idle_session_timeout: Specifies the timeout value of idle PowerShell sessions, in
seconds. Default value: 60. MID Server restart required: Yes.
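Parameters such as these are typically set either from the MID Server record on the instance or in the
config.xml file on the MID Server host; for example (a sketch of the config.xml form):
<parameter name="mid.powershell_api.winrm.remote_port" value="5985"/>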
Run a discovery from the Discovery schedule to find Windows machines on your network.
The following procedure reclassifies any Windows workstation operating system (Windows Vista, XP, or
Windows 7) that is acting as a server.
1. Navigate to Discovery Definition > CI Classification > Windows.
2. Create a new classification record, such as Windows XP Server.
3. Select Windows Server [cmdb_ci_win_server] as the table.
4. Right-click in the header bar and select Save from the context menu.
The Classification Criteria and Triggers Probes Related Lists appear.
5. Configure the following Classification Criteria:
Run a discovery from the Discovery schedule to find Windows machines on your network, and then check
the cmdb_ci_win_server table and related tables to see how data is populated in the CMDB.
Unix parameters
The UNIX parameters define the characteristics of several types of computers, such as Linux, Solaris, and
HP-UX, communicating with the SSH protocol, version 2.
Parameter Description
Windows parameters
Windows parameters identify Windows computers communicating with the WMI protocol.
Parameter Description
SNMP parameters
The SNMP parameters can define the characteristics of several types of devices, such as routers,
switches, and printers.
Parameter Description
Process parameters
Process parameters identify processes such as those used by LDAP, Apache Server, and JBoss Server.
Parameter Description
Caution: You should not modify IP services unless your organization uses custom ports.
Discovery identifiers
After Discovery classifies a CI, it uses identifiers to determine if the device already exists in the CMDB.
Discovery launches special identity probes that accumulate identification data for each device and
feed that data into the identifiers, which determine the action that Discovery must take for each device.
Identifiers accurately determine the identity of the device to prevent the creation of duplicate CIs. This
identification step only takes place for the Configuration item type of discovery, not for the other types of
discovery.
The identity probe in the base Discovery system can be configured to ask the device for information
such as its serial numbers, name, and network identification. The results of this scan are processed
by an identity sensor, which then passes the results to the identifier. The identifier then attempts to
find a matching device in the CMDB. If the identifier finds a matching CI, the identifier either updates
that CI or does nothing. If the identifier cannot find a matching CI, it either creates a new CI or does
nothing. If Discovery is configured to continue, the identifier launches the exploration probes configured
in the classification record to gather additional information about the device. Exploration probes can be
multiprobes or simple probes.
This diagram shows the processing flow for classifying and probing devices with identifiers configured.
Table Description
Identifier rules
The default Discovery system contains these identifier rules, each of which is associated with a specific
CI type (the sys_class_name field on the CI record) or the table in the Applies to field and contains the
appropriate attributes for discovering CIs from the specified table. Where necessary to discover all possible
occurrences of an attribute, tables from related lists (Search on tables) are included in the rule. For more
information, see Create or edit a CI identification rule.
The sys_class_name cannot be an attribute for independent rules, such as cmdb_ci_hardware. If your
Discovery identification strategy depends on matching a CI with a specific class, you must create a new
rule for each class you want to use for matching and specify that class in the Applies to field of the
Identifier form.
For example, you can create an identifier for a Linux server with different attributes than the Hardware
Rule. You might only want to use the machine name, IP address, and MAC address for identification. Your
new rule would look like this:
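As a sketch of such a rule (the field values are illustrative, based on the example above):
Applies to: Linux Server [cmdb_ci_linux_server]
Attributes: name, ip_address, mac_address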
Custom identifiers must have different Order values than those of the default identifiers. Discovery
parses identifiers and attributes in sequence from low order numbers to high. You can create identifiers to
run before or after the default identifiers, or mixed in with the identifiers from the base system. To prevent
any identifier or rule from running, disable it by clearing the Active check box. The evaluation order for
CMDB identifiers is established within each rule and only controls the parsing order of the attributes in that
rule.
You can control how Discovery handles duplicates using properties installed with Identification
and Reconciliation. Use the glide.identification_engine.skip_duplicates and
glide.identification_engine.skip_duplicates.threshold properties.
All instances use identifiers from the CMDB Identification and Reconciliation framework. Upgrades from
pre-Geneva versions still preserve the legacy identifiers, but you can switch to the new identifiers using a
property: glide.discovery.use_cmdb_identifiers. If you upgraded from a pre-Geneva version,
you must manually add this property and set it to true to use the new identifiers. If you upgraded from
Geneva or later releases, this property is available in the System Properties [sys_properties] table.
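For example, you can add the property from a background script using the standard GlideRecord API (a
minimal sketch):
var p = new GlideRecord('sys_properties');
p.initialize();
p.name = 'glide.discovery.use_cmdb_identifiers';
p.value = 'true';
p.insert();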
To preserve functionality in custom legacy identifiers, convert them to the new CMDB identifier rules
format before enabling this property. The system does not reconfigure your custom identifiers to the new
framework automatically.
Note: When Service Mapping is active, the new identifiers from the CMDB Identification and
Reconciliation framework are always used regardless of the property value.
Configure Discovery identity sensors on page 124 that are part of the identity probes.
Note: To avoid confusion, the multi-sensors provided in the instance must have the same
names as their matching multi-probes.
The Responds to Probes related list shows the simple probes that pass their data to this multi-
sensor. These are the same simple probes that appear in the Includes Probes list in the matching
multi-probe record.
3. Click the Script link for each probe in the list to see the script that the multi-sensor runs to process
the data from the probe.
Run a discovery through the Discovery Schedule to search for CIs and verify that they are identified
correctly in the CMDB.
Linux
In the result of dmidecode | cat, the value on the left side is what Discovery looks for. The value on the
right side is how it is stored in the Serial Number [cmdb_serial_number] table.
UUID : uuid_serial
Windows
For Win32 WMI classes, the value on the left is the name by which it is stored in the Serial Number
[cmdb_serial_number] table. The value on the right is the WMI value.
('system' : Win32_ComputerSystemProduct.IdentifyingNumber);
('uuid' : Win32_ComputerSystemProduct.UUID);
('chassis' : Win32_SystemEnclosure.SerialNumber);
('bios' : Win32_BIOS.SerialNumber);
('baseboard' : Win32_BaseBoard.SerialNumber);
SNMP
For SNMP, the mapping below is based on the code. Physical types of serial numbers are from all
instances of iso.org.dod.internet.mgmt.mib-2.entityMIB.
'cisco_stack' :
'iso.org.dod.internet.private.enterprises.cisco.workgroup.ciscoStackMIB.chassisGrp.chas
'cisco_chassis' :
'iso.org.dod.internet.private.enterprises.cisco.temporary.chassis.chassisId'
'foundry' :
'iso.org.dod.internet.private.enterprises.foundry.products.switch.snChassis.snChasGen.s
'apc_pdu' :
'iso.org.dod.internet.private.enterprises.apc.products.hardware.masterswitch.sPDUIdent.
'printer' :
'iso.org.dod.internet.mgmt.mib-2.printmib.prtGeneral.prtGeneralEntry'
'standard' :
'iso.org.dod.internet.private.enterprises.apc.products.hardware.ups.upsIdent.upsAdvIden
Serial Number Table & Class Name: Identify CIs in the CMDB based on serial number(s) in the serial
number table and sys_class_name. This identifier matches the discovered serial number against the
serial_number in the Serial Number [cmdb_serial_number] table. If the discovered serial number matches
a CI in the cmdb_serial_number table and the class name, the CI is declared identified. Note that in the
base system, Discovery populates the cmdb_serial_number table. Therefore, this identifier is important for
Discovery to use to find the CIs it has discovered in the past (assuming these were valid serial numbers).
For more information, see Serial number types for identification on page 126.

Serial Number & Class Name: Identify CIs in the CMDB based on the serial_number field and matching
sys_class_name. This identifier matches the discovered serial numbers against the serial number field in
the base CI record. If a discovered serial number and class type (sys_class_name) matches a CI, that CI
is declared identified. The matching class name requirement means that Discovery will not identify CIs
that are not in the same class. For example, a computer and a printer have the same serial number, but
since their class names differ, Discovery does not identify them as the same CI. Typically, imported data
has the serial number field completed, but no matching value in the cmdb_serial_number table. This
identifier solves the issue of Discovery finding imported data in the CMDB.

Name & Class Name: Identify CIs in the CMDB based on the name field and matching sys_class_name.
This identifier matches the discovered name against the name field in the base CI record. If a discovered
name and class type (sys_class_name) matches a CI, then that CI is declared identified. The matching
class name requirement means that Discovery will not identify CIs that are not in the same class. For
example, a computer and a printer have the same name, but since their class name differs, Discovery will
not identify them as the same CI.

Network: Identify CIs in the CMDB based on IPs and MAC Address(es) in the network adapter table. This
identifier matches the discovered network adapters against those in the Network Adapter
[cmdb_ci_network_adapter] table, using the IP address and MAC address. If all the discovered network
adapters match a CI in the network adapter table, then the CI is declared identified.

Network & Class Name: Identify CIs in the CMDB based on the mac_address field, the ip_address field,
and matching sys_class_name. This identifier matches the discovered IP address and MAC address of
the base CI against those in the base CI record. If a discovered IP address, MAC address, and class type
(sys_class_name) matches a CI, then that CI is declared identified. The matching class name requirement
means that Discovery will not identify CIs that are not in the same class.

MAC Address & Class Name: Identify CIs in the CMDB based on the mac_address field and matching
sys_class_name. This identifier matches the discovered MAC address of the base CI against those in the
base CI record. If a discovered MAC address and class type (sys_class_name) matches a CI, then that CI
is declared identified. The matching class name requirement means that Discovery will not identify CIs
that are not in the same class.

IP Address & Class Name: Identify CIs in the CMDB based on the ip_address field and matching
sys_class_name. This identifier matches the discovered IP address of the base CI against those in the
base CI record. If a discovered IP address and class type (sys_class_name) matches a CI, then that CI is
declared identified. The matching class name requirement means that Discovery will not identify CIs that
are not in the same class.

MAC Address: Identify CIs in the CMDB based on MAC Address(es) in the network adapter table. This
identifier matches the discovered MAC address of the base CI against those in the base CI record. If a
discovered MAC address and class type (sys_class_name) matches a CI, then that CI is declared
identified. The matching class name requirement means that Discovery will not identify CIs that are not in
the same class.

Generic Serial Number: Identify CIs in the CMDB based on the serial_number field. This identifier
matches the discovered serial number against the serial number field in the base CI record. If a
discovered serial number matches a CI, then that CI is declared identified. Note here that the matching
class name is NOT a requirement, which means that Discovery will identify CIs even if they are not in the
same class. For example, a computer and a printer have the same serial number, but even though their
class name differs, Discovery will still identify them as the same CI.

Generic Network: Identify CIs in the CMDB based on the mac_address field and the ip_address field.
This identifier matches the discovered IP address and MAC address of the base CI against those in the
base CI record. If a discovered IP address and MAC address matches a CI, then that CI is declared
identified. The matching class name is NOT a requirement, which means that Discovery will identify CIs
that are not in the same class.

Generic Name: Identify CIs in the CMDB based on the name field. This identifier matches the discovered
name against the name field in the base CI record. If a discovered name matches a CI, then that CI is
declared identified. The matching class name is NOT a requirement, which means that Discovery will
identify CIs that are not in the same class.

Generic MAC Address: Identify CIs in the CMDB based on the MAC address field. This identifier matches
the discovered MAC address of the base CI against those in the base CI record. If a discovered MAC
address matches a CI, then that CI is declared identified. The matching class name is NOT a requirement,
which means that Discovery will identify CIs that are not in the same class.

Generic IP Address: Identify CIs in the CMDB based on the IP address field. This identifier matches the
discovered IP address of the base CI against those in the base CI record. If a discovered IP address
matches a CI, then that CI is declared identified. The matching class name is NOT a requirement, which
means that Discovery will identify CIs that are not in the same class.
Note: To restrict Discovery to use only the CMDB identifiers, you can add a property to the
sys_properties table and set it to hide legacy identifiers.
This example uses the serial_number and serial_number_type attributes from the hardware identifier.
1. Navigate to Discovery Definition > CI Identification > Identifiers.
2. Open the record for the custom legacy identifier and note the table in the Applies to field.
For example, the identifier might be on the Hardware [cmdb_ci_hardware] table.
3. Determine if the script in the legacy identifier is configured to query another table for matches.
In this example, the Serial Number Table & Class Name identifier is configured to query the Serial
Number [cmdb_serial_number] table for a matching serial number. In CMDB identifier rules, this is
known as the Search on table.
Field Description
The default identifiers provided with Discovery should be adequate for most discoveries. However, if you
need to discover data from a child table, such as Linux Server [cmdb_ci_linux_server], you can create a
custom legacy identifier and select the exact criteria you want to add to the CMDB.
1. Navigate to Discovery Definition > CI Identification > Identifiers.
2. Click New.
3. Fill out the unique fields provided by the identifier form.
Discovery status
The Discovery status provides a summary of a Discovery launched from a schedule. You can also cancel a
Discovery that is in progress from the status form.
To access the Discovery Status form, navigate to Discovery > Status and open the status record
for a Discovery. Each record in the Status list represents the execution of a Discovery by a schedule
and displays such high level information as the date of the Discovery, the mode, the number of probe
messages sent to devices, and the number of sensor records that were processed. A status record
contains data that can help you troubleshoot a failed discovery. Use this data to troubleshoot the behavior
of individual probes and sensors or even run those elements separately. Use the status controls to enter
probe/sensor threads at any point for a specific Discovery, and then follow the process in either direction.
• Refresh: Runs the probe again. You can re-run probes when you encounter a failed Discovery or other
unexpected results.
• Discovery timeline: The Discovery timeline is a graphical display of a discovery, including information
about each probe and sensor that was used.
• Discovery log entries: The Discovery Log shows information such as classification failures, CMDB
updates, and authentication failures. A Discovery Log record is created for each action associated with
a discovery status.
• ECC queue entries: Entries in the ECC queue provide you with a connected flow of probe and sensor
activity, as well as the actual XML payload that is sent to or from an instance.
• Device history: The device history provides a summary of all the devices scanned during discovery,
and what action sensors took on the CMDB.
Note: By default, only 30 days of Discovery records are displayed in the status list at a time.
When a Discovery status is canceled, any associated sensor transactions are immediately terminated and
any scheduled sensor jobs are deleted from the system. After cancellation, the cleanup status shows the
Completed count, and the cancellation is logged in the Discovery log. In the Queue [ecc_queue] table, any
records belonging to sensors exceeding the Transaction Quota Rule are set to the Error state.
Viewing timelines
View timelines for an entire Discovery or for individual devices in a Discovery. In the out-of-box
system, the maximum size Discovery that can be displayed in a timeline is 300 entries in the ECC
Queue (150 probes and 150 sensors). To display larger discoveries, change the default setting in the
glide.discovery.timeline.max.entries property.
Probe and sensor timelines are available automatically when Discovery starts. Partial timelines for any
device or for the entire Discovery can be viewed during the scanning process. View the progress of the
Discovery by refreshing the browser.
View a Discovery timeline
A Discovery Timeline generates a graphical display of the Discovery Status records for a Discovery,
including information about each probe and sensor running.
The timeline for the entire Discovery appears, unless the size threshold is exceeded. If the timeline is
too large to display, an error message appears.
4. Clear the warning, and then select the Devices Related List.
5. Click the IP address of a device.
6. In the Device record, click the Show Discovery Timeline Related Link.
The Discovery timeline for that device appears.
7. Use the pink slider at the bottom of the timeline to change the perspective.
a) Move the slider from right to left to view all the tasks on a long timeline.
b) Adjust the end points of the slider to change the magnification. A narrow slider zooms in on the
spans and provides a more detailed view of complex timelines. A wide slider pulls the view out
and makes more of the timeline visible on the screen.
8. Use the selector range at the top of the screen to adjust the visible time frame. To limit the timeline to
the length of the Discovery, click Max.
The time scale adjusts automatically to the length of the Discovery. The available time scale range is
from one day to one year.
Tooltips
Discovery timelines display probe and sensor performance data and CI information in tooltips. Hover
the cursor over a span to view this data. Probes are displayed as black spans. The queue time for
a probe is shown as a silver bar within the span, and the processing time is represented by the
remaining space. Sensor spans are red, and the queue time is shown as a green bar. Selected spans
of any type display in yellow.
ECC Queue
Double-click a span to open the ECC Queue record for that probe or sensor.
Click the IP address of a device in this list for details about that device. The log results for that device are
displayed in the list at the bottom of the form.
If there were issues, or if Discovery failed to complete, click the Details link to view the log records for the
issue. The failure of any probe is considered an issue, even if the device was eventually classified properly
and updated in the CMDB.
When Discovery scans for IP addresses only (without credentials or identifiers), no updates are made to
the CMDB. All IP addresses discovered appear on this list, including multiple IPs on the same device. The
results of IP address scans include slightly different information than the results of a CI scan. Since there
is no CMDB activity associated with the IP address scan, the Completed activity column displays only the
classification status.
Possible statuses are:
• Classified
• Unclassified
• Alive, not classified
For Classified devices, Discovery might identify the type of device in the Current activity column. For
example, Network Gear might be classified as Cisco Network Gear, and a Computer might be classified as
a Windows Computer.
If Discovery is running on an instance, the size of this table can grow to several gigabytes. Most of the
accumulated data is unnecessary, but some entries might be important for troubleshooting any problems
with Discovery. For example, if Discovery is not properly capturing the disk drives on a particular Windows
server, look in the ECC Queue at the data returned by the Windows - System Information probe. You
should retain ECC Queue data from Discovery for at least a month.
By default, records in the ECC Queue older than 7 days are deleted automatically. You can set the deletion
schedule by updating the table rotation schedule for the ECC queue. The table rotation names are:
• ecc_queue_event
• ecc_queue
The ECC Queue provides you with a connected flow of probe and sensor activity. The position of records
in the list is dependent on your sorting criteria and might not represent the processing order of the
Discovery. Activity is recorded (Created date) when the probe is launched or the sensor records the
response in the instance. On a busy system, sensor input might be queued in the instance and delayed
from being recorded. Sensors can launch multiple probes that require different intervals to respond. A
probe record displays the instructions it was given when it was launched, and the sensor record shows the
probe instructions and the results of the probe. All this information is checked into the instance.
To track the processing sequence, use the following methods:
• To track the flow of probes and sensors deeper into the Discovery (toward completion), drill down using
the Queue Related List at the bottom of the ECC Queue record. This list displays the next activity in the
Discovery, by either a probe or a sensor. If the Queue list is empty, then that branch of the Discovery is
complete. Multiple entries means that the sensor in the current record launched multiple probes.
• To track backwards toward the beginning of the Discovery, drill down using the Related Field icon in the
Response to field. This action opens the ECC Queue record for the activity that spawned the current
probe or sensor record.
As you drill down through the probes and sensors that are part of a Discovery, you can view the CMDB
record for the device (CI) that was probed, using either of the following methods:
• Right-click a probe or sensor in the ECC Queue record list and select Go to CMDB item.
• In an ECC Queue record, click the Go to CMDB item Related Link.
Troubleshooting
If you are suspicious of the results of a Discovery (credential failure, for example), you can run individual
probes directly. To run a probe from the ECC Queue list, right-click the probe and select Run Again from
the pop-up menu. To run a probe directly from an ECC Queue record, click the Run Again UI action in
Related Links.
Discovery logs
The Discovery Log and the Horizontal Discovery Log display the activity that takes place during a
discovery. Use the logs to debug failed discoveries.
Discovery Log
The Discovery Log shows information such as classification failures, CMDB updates, and authentication
failures. A Discovery Log record is created for each action associated with a discovery status.
The Pattern Discovery log includes Horizontal Discovery log records, which display information about
discoveries that were performed with patterns. A horizontal discovery log record is created for an entire
horizontal discovery run, which includes the results of all the operations specified in the pattern.
• Pre Pattern Execution: The actions that were run before the pattern was launched.
• Selecting Pattern for Execution: The pattern that was run for discovery.
• {Identification Section name}: Displays the results of the operations in the pattern. Expand the name to
see each operation.
• Pre Payload Processing Scripts: The results of scripts that were run before the payload was received.
4. Click an item in the left-hand column to see more information about it in the right-hand column.
5. To debug the pattern, click the name of the Identification Section, and then click the Debug button in
the upper-right.
6. To return to the log record, click the X in the upper-right.
Discovery Dashboard
The Discovery Dashboard is a central place for Discovery users to monitor ongoing Discovery operations.
The dashboard contains reports that query the database and display the results.
Dashboards are the home pages for products on the instance. The Discovery Dashboard provides
summary information about your Discovery status, devices, applications, and timeline. It also displays
Discovery errors and suggested solutions.
Note: All the data on the dashboard is domain separated, and shows only data collected from the
domain specified in the MID Server user account for the Discovery schedule used.
The Active Discovery Status report displays the progress of an actively running Discovery. The information
displayed is pulled from the Discovery Status module. The Progress field of the status record specifies the
percent complete of the Discovery schedule. The percent complete is calculated based on previous runs of
the same Discovery schedule. The progress field is empty in the following cases:
• When you use DiscoverNow.
• When you use Discovery from a configuration item (CI).
• During the first run of a Discovery schedule.
The Newly Discovered Devices report displays the count of the devices that the Discovery application has
identified in the last 7 days, grouped by the class of the device.
The Total Discovered Devices report displays the total count of the devices that the Discovery application
has identified in the last 30 days, grouped by the class of the device.
The Unrefreshed Devices report displays devices that were discovered up to a year ago, but have not been
identified in the last 30 days, grouped by the class of the device.
The New Applications Discovered report displays the count of new applications that the Discovery
application has identified in the last 7 days, grouped by the class of the application.
The Total Discovered Applications report displays the total count of the applications that the Discovery
application has identified in the last 30 days, grouped by the class of the application.
The Unrefreshed Applications report displays applications that were discovered up to a year ago, but have
not been identified in the last 30 days, grouped by the class of the application.
The Active Discovery Errors report lists the errors that are generated by each active Discovery. The errors
for a specific Discovery are overwritten when that Discovery runs again. This report displays the following
data:
Column Description
The Unused Credentials report lists the credentials that are not being used by the Discovery or
Orchestration applications at the current time.
Note: This may include credentials for a Discovery that has been scheduled but not run for the first
time.
The Upcoming Discovery Schedule report is a graphical display of the Discovery schedules that are
scheduled to be executed in the next 30 days. Each row in the timeline is a unique Discovery schedule.
Different spans on the same line show the upcoming Discovery runs of that schedule. Each span
represents the estimated time it will take to complete the run. The blue color inside the timespan is the
actual completion time. The grey color is the maximum run time of that schedule.
You can use the timeline to determine if your scheduled Discovery runs are spread out over time, rather
than all running at once.
The following Discovery scheduled runs do not appear in the timeline:
• When you use a Quick Discovery.
• When you use Discovery from a configuration item (CI).
• When the Discovery Schedule run type is On Demand.
Discovery customization
You can customize how Discovery works in your environment.
You can disable the rule and you can modify the amount of time that has elapsed before the rule takes
effect. Accessing this record requires the admin role.
Note: PowerShell is the preferred method for running Discovery over multiple Windows domains,
because it allows a single MID Server to authenticate on machines on different domains using
credentials stored on the instance.
Note: After changing the setting for any parameter, be sure to restart the MID Server service.
Note: On a 64-bit version of Windows Server 2003 or Windows XP, a Microsoft hotfix may be
required to enable this.
To discover applications running on a 64-bit Windows machine, the MID Server must be running on a
64-bit Windows host machine.
• Type: string (path)
• Default value: none
The following script includes were added for PowerShell discoveries. These scripts run on the MID Server
to generate the scripts that Discovery uses for WMIRunner and PowerShell.
When a Windows machine is classified with PowerShell, and an MSSQL instance is detected, a probe
called Windows - MSSQL is launched. The probe returns the SQL database catalogs and version to a
matching sensor.
Probe parameter
A probe parameter called WMI_ActiveConnections.ps1 contains a script that runs netstat.exe on a target
machine for connection information (such as process IDs, ports, IP addresses) when PowerShell is
enabled.
Credentials
Discovery uses Windows PowerShell credentials from the Credentials [discovery_credentials] table or
the domain administrator credentials of the MID Server service. If Discovery cannot find credentials of
type Windows in the Credentials table, it uses the login credentials of the MID Server service.
For example, if a web server application uses a database server application, the web server "depends on"
the database. The web server also "runs on" the host or server cluster. You can use the data from running
processes to determine which devices to drill into to see more application-specific configuration data.
You can disable ADM and use the ADM probes and sensors to collect active connections and active
process information without collecting all the Application Dependency Mapping information.
Your instance must meet the following requirements to use application dependency mapping:
• The glide.discovery.application_mapping Discovery property is enabled.
• ADM probes are configured.
• Process classifiers are configured.
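As a quick check, you can verify the first requirement from a background script using the standard
gs.getProperty API (a minimal sketch):
// Returns the string 'true' when application dependency mapping is enabled.
var admEnabled = gs.getProperty('glide.discovery.application_mapping');
gs.log('ADM enabled: ' + admEnabled);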
In our example relationship, if the virtual server sannnm-01 crashes, the database instances and the web
server upstream are adversely affected. Likewise, if the web server fails, the web site hosted on it goes
down. The CI record for the virtualized Windows server sannnm-01 with its upstream and downstream
relationships is shown below. The downstream relationships show that this server is virtualized by VMware
running on a Windows server named sandb01. Our upstream relationships show a MySQL instance, a
SQL instance, and a web server running on sannnm-01. Farther upstream, a web site is hosted on the web
server.
When you delete a CI from the CMDB, ServiceNow also deletes all relationships with that CI.
Deleting a CI affects references to the CI in Incidents, Problems, and Changes. In some situations, it might
be preferable to block a CI by setting the install_status or hardware_status attribute to retired, without
deleting the CI.
In the following example, the three JBoss application servers on three different physical machines have a
TCP connection to the local Apache application on www.online1.com. Additionally, there are five Apache
web servers using the local JBoss application.
4. In the dependency map that appears, right-click the arrow connectors between CIs to display the
relationship type.
2. Select the appropriate relationship type from the list at the top of the page.
3. Create and run an appropriate filter to make the CI visible in the list of available CIs.
4. Move the CI from the list of available CIs to the dependent list.
5. Click Apply to.
An example of the importance of selecting the proper relationship type is when the
Business Service dependency, PeopleSoft CRM, is added to the sannnm-01 virtual server
in the following diagram. If PeopleSoft CRM is added with a relationship type of Depends
on, it is protected from the cascade delete triggered by the deletion of the Windows server.
Important: With this extension, Discovery can only modify the state of a VM which exists in
the CMDB. When an event with "CreatedEvent" occurs in its name, such as VmCreatedEvent,
Discovery scans that VM and then creates the CI using the data it collects. When a new event
occurs involving that CI, Discovery can update the existing record without launching another scan.
See Discovery for VMware vCenter on page 295 for supported versions of vCenter.
The following events are the only vCenter events handled by the base system when Discovery is activated.
If you have upgraded your instance from an earlier version, you might not have the default events added
with later releases. To use the missing events, manually add them.
VM events
The instance converts the subscribed vCenter events into system events (sysevent) that the system uses
to perform tasks, such as email notification.
The MID Server listens for the vCenter events configured in the vCenter Event Collector form. When one of
these events is returned from vCenter, the instance parses the payload with a business rule that converts
the vCenter event into a system event. The resulting sysevents contain these values:
• Name: Name of the system event created from the vCenter event. This value is always
automation.vcenter.
• Parm1: vCenter event that was returned. This event must be associated with an event collector record.
• Parm2: Event data provided by vCenter, in JSON format.
5. To select a different vCenter event, click Edit in the vCenter Event related list and browse for the
event.
The slushbucket does not display all the available events in the opening list. Use the filter to browse
for events not displayed.
6. Under Related Links click Start to save the events in this collection and start the collector.
The Related Links in this form work as follows:
Field Description
You can add or remove supported events from the vCenter event collector. Multiple MID Servers can listen
to the same vCenter instance, but each MID Server can only have one event collector record associated
with it. Make sure you configure the events on the event collector record that specifies the correct MID
Server.
1. Navigate to the relevant vCenter Event Collector record. This can be the same collector record as the
one containing the supported vCenter events, but only if the same MID Server is used to gather the
events.
2. Click the Edit button in the vCenter Event related list.
3. Select the events from the list and save the selection.
Not every event is supported by the base system event handlers. See Supported vCenter events
for a list of the events you can use without needing to create your own event handlers. If you want to
handle other events, you must create a script action to process the events. For instructions, see Script
actions. As a reference, the instance processes the vCenter events in the base system with a script
action called Discovery: Process vCenter events. Do not edit or delete this script action.
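As a minimal sketch of such a custom script action, conditioned on the sysevent values described
earlier (the event name and JSON field below are illustrative assumptions):
// Script action triggered by the automation.vcenter system event.
// event.parm1 holds the vCenter event name; event.parm2 holds the event data as JSON.
if (event.parm1 == 'VmSuspendedEvent') {         // illustrative event name
    var data = JSON.parse(event.parm2 + '');     // coerce to a JavaScript string before parsing
    gs.log('vCenter VM suspended: ' + data.vm);  // the 'vm' field name is an assumption
}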
5. In Related Links, run any of the following actions against the SNMP trap collector extension:
Related Link Description
Discovery behaviors
Discovery behaviors determine the probes that Shazzam launches, and from which MID Servers these
probes are launched.
Unlike a scan performed by a single MID Server on a designated IP address range, a behavior can assign
different tasks to multiple MID Servers on the same IP address segment or on different network segments.
Behaviors are available in Discovery schedules for discoveries in which configuration items (CI) are
updated in the CMDB.
Behaviors can be used in the following scenarios:
• Load balancing: A behavior enables load balancing in systems that use multiple MID Servers deployed
across one or more domains.
• Multiple protocols in multiple domains: Configure one MID Server to scan for all protocols on one
domain and another MID Server to perform a WMI scan on a second domain.
• Access Control Lists (ACL): Discovery can scan SNMP devices protected by an ACL if the MID Server
host machine is granted access by that ACL. Use a behavior to configure a MID Server to scan devices
protected by an ACL.
• Devices running two protocols: Some devices might have two protocols running at the same time.
Examples of this are the SSH and SNMP protocols running concurrently on one device (most common).
A behavior can control which of the two protocols is explored for certain devices. The behavior then
prevents the other protocol from being explored.
Behaviors also enable the efficient Discovery of SSH and SNMP devices and WMI devices running on
multiple Windows domains, using multiple MID Servers.
For example, an organization has two Windows domains in its network and a variety of UNIX computers
and SNMP devices. The challenge is to discover all the devices efficiently, without duplicating effort. Each
domain contains a Windows MID Server, which is used to scan the IP addresses from the two domains
specified in the Discovery Schedule, as well as the SSH and SNMP devices. We need a Behavior that
divides the work appropriately to avoid scanning anything twice. In this example, we assume that both
domains are in the same geographical location, and that a single schedule is sufficient.
Note: The preferred method for running Discovery over multiple Windows domains is to use
PowerShell, which allows a single MID Server to authenticate on machines on different domains
using credentials stored on the instance.
Field Description
Note: Functionality criteria are required for Windows MID Servers only, and only when
the behavior controls Discovery across multiple domains. When the instance launches the
Shazzam probe for a Discovery in which a behavior defines multiple MID Servers to scan
multiple domains, the functionality criteria determine which MID Server processes the results of
the probe.
Create a Discovery Schedule of type Configuration Item, and select Use Behavior for the MID Server
selection method.
Note: All Windows functionality requires criteria to identify the Windows domain and define
any special behavior for the MID Servers named.
The completed criteria appear in the Discovery Functionality form for the Windows MID Servers.
MID Servers: Name the MID Server that will scan for all other devices. If we want to use automatic
load balancing, we can add an additional MID Server to this field.
The completed Discovery Behavior form looks like this. It is not necessary to create Functionality
Criteria for this MID Server.
6. Create a schedule record for each time zone and name the behavior we just created.
a) Navigate to Discovery > Discovery Schedules and click New in the record list.
b) Select a Discover type of Configuration items.
This action displays the Behavior field.
c) Click the magnifier icon and select the behavior to use. In our example, we select LoadBalanced.
d) Select the appropriate time to run Discovery for this location.
e) Click Quick Ranges and define the IP address ranges, networks, or lists to scan for this location.
f) Save the record.
g) Create additional schedules for each time zone or region in the network and select the same
behavior.
We configure a MID Server to scan for the WMI protocol on Domain A. WMI scans authenticate on
Windows machines using the domain credentials of the Windows MID Server machine. Windows MID
Servers cannot scan for the WMI protocol outside their own domains.
Create the first functionality using the following values:
Field Input Value
All Windows functionality requires criteria to identify the domain and the MID Server. In our example, we
will create two criteria for this functionality. To create Functionality Criteria, click New in the Related List
and enter the following values for the MID Server doing the WMI scanning on Domain A:
The completed criteria appear in the Discovery Functionality form for this behavior.
In our network, we want to scan for UNIX computers and network gear, but we don't want to classify these
devices twice. One of our MID Servers will be configured to classify SSH and SNMP using a different
functionality than it does for WMI scans. We do not need to create criteria for non-WMI functionality.
Create the second functionality using the following values:
Field Input Value
All that remains is to create a functionality for the WMI scans on Domain B. Because of the Windows
authentication mechanism, we must configure a Windows MID Server to scan Domain B that is a member
of that domain.
Create the third functionality using the following values:
Field Input Value
All Windows functionality requires criteria to identify the Windows domain and the MID Server. In our
example, we will create two criteria for this functionality. To create Functionality Criteria, click New in the
Related List and enter the following values for the MID Server doing the WMI scanning on Domain B:
Field Input Value
The completed criteria appear in the Discovery Functionality form for this behavior.
c)
Note: In the second functionality, Discovery will scan our domain for WMI and SSH
protocols. Because WMI is one of the protocols, the MID Server for this functionality must
be installed on a Windows machine and must have criteria.
d)
Note: Functionalities that scan for WMI require criteria to identify the domain and the MID
Server. In our example, we will create two criteria for this functionality.
To create functionality criteria for a WMI scan, click New in the related list.
e) Enter the following values for the MID Server doing the WMI scanning on our domain:
The completed criteria appear as follows in the Discovery Functionality form for this behavior.
Patterns are gradually replacing out-of-box probes and sensors, especially for the discovery of
applications. If you are upgrading your instance to a new version and if you customized probes and
sensors or a classifier, the upgrade might not replace these components with the new pattern or the
components necessary for pattern discovery. If you want to manually migrate to patterns from probes
and sensors that were not automatically upgraded, see the instructions in Add the Horizontal Pattern probe to a
classifier on page 210.
Using patterns
Both Discovery and Service Mapping can use the same pattern for horizontal and top-down discovery. But
they are edited differently. See Create a pattern on page 738 for all steps. If you take a pattern that was
exclusively used for top-down discovery and you want to use it for horizontal discovery, you have to make
a few modifications. See Use a pattern for horizontal discovery on page 210 for instructions.
2. On the instance, create or modify the classification for the CI type you want to discover. Configure the
classifier as follows:
a) Navigate to Discovery Definition > CI Classification > {classification type}.
b) Open the relevant classifier.
c) Configure the classifier as follows:
• Relationship type: Select Runs on::Runs (for process classifiers only)
• Condition: Configure the same condition you defined in the pattern.
• Triggers probes Related list: Add the Horizontal Pattern probe, and then add the pattern you
are using to the Pattern column.
See Create a Discovery CI classification on page 91 for a description of the other fields on the
classifier.
Run the pattern in Debug mode to test it. When you are sure the pattern works, you can run discovery by
setting up a discovery schedule or running an on-demand discovery. See Schedule discovery on page
28 for more information.
5. Click Edit, and add the Horizontal Pattern probe. The probe appears in the related list.
6. From the related list view, double click the field under the Pattern column and add the pattern you
want to associate with the classification.
7. Remove or deactivate the other probes from the Triggers probe related list.
Note: If you delete a pattern, the Horizontal Pattern probe is not automatically removed from
the classifier. You must select another pattern for the Horizontal Probe, or you can switch back
to using identification and exploration probes specific to the classifier. If you use the Horizontal
Probe without a pattern specified, discovery stops after the classification stage.
Note: With each release, patterns are replacing many probes and sensors for Discovery. Consider
creating new patterns or editing existing ones if you want to customize what Discovery can find.
The information on probes and sensors is intended for customers who are not using patterns yet
and for customers who already have customized probes that are retained upon upgrade. See
Patterns and horizontal discovery on page 209 for more information on patterns.
Discovery phases
Discovery always uses probes and sensors during the first two phases of discovery: scanning and
classification. For the last two phases, identification and exploration, Discovery can use probes and
sensors or patterns. This topic refers to probes and sensors only. See Discovery basics on page 4 for
an explanation of these phases. See Patterns and horizontal discovery on page 209 for more information
on patterns.
The probe collects the information and the sensor processes it. Both get their instructions from the ECC
queue. There is a worker job on the MID Server that monitors the queue for work. The monitor checks for
any entries where the Queue is output and the State is ready.
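Conceptually, the monitor's check resembles the following GlideRecord sketch (the actual MID Server
monitor is internal Java code, so this instance-side script is only an illustration):
var q = new GlideRecord('ecc_queue');
q.addQuery('queue', 'output');  // work destined for the MID Server
q.addQuery('state', 'ready');   // entries not yet picked up
q.query();
while (q.next()) {
    gs.log('Pending probe: ' + q.topic + ' targeting ' + q.source);
}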
The MID Server then processes all the output ECC messages, runs the necessary probes, and returns the
probe results to the ECC queue. These results are put in the ECC Queue as input entries.
After an entry is inserted in the ECC Queue table, a business rule fires (on insert) that takes that
information and runs it through a sensor processor. The sensor processor's job is to take the input data,
find any sensors interested in that data, and pass it along to be processed. Those sensors ultimately
update the CMDB.
The MID Server launches probes to collect information about a device. The probe sends back information
to the sensor to be processed. If the probe has a post-processing script defined, the post-processing script
does some data processing on the MID Server before data is sent back to the sensor on the ServiceNow
instance. Otherwise, the probe sends back all the data collected and the sensor performs this data
processing. In both cases, the sensor updates the CMDB.
A multi-probe is a probe that contains other probes. A multi-sensor processes the data from a multi-probe.
To process the data from the multi-probe, the multi-sensor contains individual scripts to process the data
returned by each probe contained in the multi-probe, as well as a main multi-sensor script. The individual
scripts pass their processed data to the main multi-sensor script.
Probe types
Note: With each release, patterns are replacing many probes and sensors for Discovery. Consider
creating new patterns or editing existing ones if you want to customize what Discovery can find.
The information on probes and sensors is intended for customers who are not using patterns yet
and for customers who already have customized probes that are retained upon upgrade. See
Patterns and horizontal discovery on page 209 for more information on patterns.
These are the things you can do with probes and sensors:
Review the base system probes: Review the List of Discovery probes on page 214 to see the probes that
exist in the base system. Probes need to be active on classifiers for Discovery to trigger them. However,
not all probes are active on classifiers as more patterns replace probes and sensors.
Create or modify a probe: You can create a new probe to discover additional CIs that Discovery does not
find with the base system probes or patterns, or modify an existing probe to collect additional information
on the type of CI. After you create or modify a probe, test it. You can also create multiprobes and
multisensors. See Discovery multiprobes and multisensors on page 269 for more information.
Upgrade to the latest version of a probe or sensor: If you upgrade your instance, the Discovery application
is also updated, along with components like probes and sensors. However, if you customized any probes
or sensors, they do not upgrade. You need to copy your customizations to a text file, upgrade the probes
and sensors, and reapply your customizations. See Align versions of customized probes and sensors on
page 266 for more information.
Set probe parameters: Probe parameters control several aspects of how probes function. Each probe
provided in the base system allows certain parameters, which are specified in the list of Discovery probes.
See Set probe parameters on page 264 for instructions on how to set a parameter.
Review probe permissions: Certain probes need permissions to run on the target machines or CIs that you
are trying to discover. See Discovery probe permissions on page 262 for more information.
CIM probe
The CIM probe uses WBEM protocols to query a particular CIM server, the CIM Object Manager, for a set
of data objects and properties.
For instructions on configuring probe parameters, see Set probe parameters on page 264.
The following parameters may be passed to the CIM probe:
The CIM Intermediate Query Language (CimIQL) uses keys, filters, and dot-walking to traverse the CIM
schema.
Parameter Expansion
The CIM query language supports standard SNC preprocessed probe parameter expansion. Place
variables in queries by encapsulating their names like this:
${foobar}.CIM_RunningOS[0].Name
CIM_ComputerSystem.${barfoo}
The text ${foobar} is replaced with the contents of the foobar probe parameter passed to the CIM probe;
likewise for barfoo.
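As a hedged illustration, the foobar parameter could be supplied by a script that launches the probe, using
the same g_probe_parameters mechanism shown later in this document:

// Supply the "foobar" probe parameter so that ${foobar} expands in the query.
g_probe_parameters['foobar'] = 'CIM_ComputerSystem';
// With this parameter, the query
//   ${foobar}.CIM_RunningOS[0].Name
// is preprocessed into
//   CIM_ComputerSystem.CIM_RunningOS[0].Name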
CIMIQL
The CIM Intermediate Query Language (CimIQL) is an intermediate language designed to simplify the
process of querying CIM providers.
CimIQL currently supports the standard Web-Based Enterprise Management (WBEM) protocol stack, but
others, such as Web Services-Management (WS-MAN), may be added in the future. The query language
syntax borrows from elements of Microsoft's WMI query language and UNIX's wbemcli command. The
CimIQL library is a pure Java implementation.
CimIQL syntax
CimIQL syntax consists of several elements, including a query and different tokens.
Association Token
• Retrieves objects associated with each result from the preceding query.
• The condition tokens and parameter tokens are enclosed by two pairs of curly brackets {{ ... }}. The
curly brackets are optional if there are no property filters or parameters necessary.
• The <association classname> is the name of the many-to-many or one-to-many class that associates
two objects together. By default, objects of the specified class and of any extended classes are
retrieved.
• The <parameter token>, ResultClass, may be specified to filter results based on the resulting object's
classname.
• The index token is optional.
• This token must not be used as the first query in a statement.
• Returns: class object
• Example: CIM_ComputerSystem{{Name='runtime'}}[2].*
Substitution Token
${<statement name>}
• A no-op token that feeds the results of a previous named statement as input into the next query of its
own statement.
• Returns: void
• Example: ${lastComputer}.ElementName
Properties Token
• Retrieves properties of the preceding query's results; use * to retrieve all properties.
• Example: CIM_ComputerSystem[0].*
. (Period)
• Separates queries.
• Example: CIM_ComputerSystem.PrimaryOwnerContact
Index Token
[index]
• Reduces a preceding query's results to a single object at the specified integer index.
• This token is always optional.
• Example: CIM_ComputerSystem[0].*
Key Token
<key name>='<value>'
• Matches an object property designated as a key by exact value.
• The <key name> is the name of the property used as a key.
• Example: CIM_ComputerSystem{CreationClassName='Linux_ComputerSystem',Name='runtime'}.*
Condition Token
• Filters the preceding query's results by property value.
• Example: CIM_ComputerSystem{{Name!='runtime'}}.*
Parameter Token
<parameter name>:'<value>'
• Passes a parameter by <parameter name> to the operation being called. The parameter may be
consumed during CimIQL pre-processing or by the Common Information Model Object Manager
(CIMOM) via request, depending on the parameter.
• Example: CIM_ComputerSystem.CIM_RunningOS{{ResultClass:'Win32_ComputerSystem'}}.*
CimIQL tutorial
This is a tutorial by example, where each example builds on the previous one.
2. CIM_ComputerSystem.PrimaryOwnerContact
Retrieves all instances of CIM_ComputerSystem and their descendants, but retrieves only one property,
PrimaryOwnerContact.
3. CIM_ComputerSystem{CreationClassName='Linux_ComputerSystem',Name='runtime'}.*
Retrieves a single unique instance of CIM_ComputerSystem and its descendants. All key tokens must be
specified within the { } identity token.
4. CIM_ComputerSystem{{Name!='runtime'}}.*
Retrieves all instances and descendants of CIM_ComputerSystem that do not have a Name property of
'runtime'. The filter token {{ }} filters out instances that do not contain all of the properties/keys specified.
6. CIM_ComputerSystem{{Name='runtime'}}[2].*
Retrieves the second result of all instances of CIM_ComputerSystem and its descendants where the
instances have a property Name of 'runtime'. The order of operations follows the query syntax:
1. Query the server for all CIM_ComputerSystem and descendants.
2. Filter the results based on the Name property.
3. Retrieve the second instance that passed the filter.
7. CIM_ComputerSystem.CIM_RunningOS[0].Name
Retrieves the Name property for the first CIM_OperatingSystem instance of each CIM_ComputerSystem
instance. The middle token, CIM_RunningOS, is the name of the Associator class, not the end result.
8. CIM_ComputerSystem.CIM_RunningOS{{Name=/CentOS/}}[0].Name
Retrieves the Name property for the first CIM_OperatingSystem instance of each CIM_ComputerSystem
instance, where each CIM_OperatingSystem instance has a Name property containing 'CentOS'.
CimIQL results
CIM Probe results are passed to the probe sensor as an XML document embedded within the <output>
element.
The following is a commented example of a CimQuery batch result.
Splitting payload
When Discovery uses patterns to find certain devices like load balancers, large amounts of data could be
sent to the ECC Queue. To better handle large amounts of data, the horizontal pattern probe can split the
payload into chunks, and then create multiple ECC Queue records.
Control how the MID Server splits payloads and handles payloads using properties. See MID Server
properties for more information.
PowerShell probe
The PowerShell Probe executes PowerShell V2 scripts on the MID Server host.
PowerShell scripts are defined as probe parameters, with the file name as the parameter name. This
probe type is available by specifying PowerShell as the probe's ECC queue topic.
Scripting requirements
Any custom PowerShell script must use environment variables to pass any non-Boolean command line
parameter. Replace non-Boolean parameters in the Param() portion of the script with script variables
of the same name, and define each script variable as part of the environment with an SNC_ prefix. So a
string parameter such as this:
Param([string]$paramName)
would be redefined as a script variable like this:
if(test-path env:\SNC_paramName) {
    $paramName = $env:SNC_paramName
}
For example, this parameter definition from the PSScript.ps1 script contains several string parameters that
need to be redefined as script variables:
Defining the non-Boolean parameters as script variables would result in this type of script:
if(test-path env:\SNC_script) {
    $script = $env:SNC_script
}
if(test-path env:\SNC_user) {
    $user = $env:SNC_user
    $password = $env:SNC_password
}
SCPRelay probe
The SCP Relay Probe copies a single file or the contents of a directory from one host to another, using the
MID Server as a relay.
The SCP Relay probe uses the same parameters as SSHCommand. The commands may be sent in or out
of the context of a terminal (tty), and with or without sudo (for those commands, such as lsof, that must be
executed in the context of root to return the required information). When commands are sent in the context
of a terminal, the path is automatically widened to include a set of default paths, and it can be widened
further with the path_override parameter. If the target machine is the local machine, SSH is not used;
instead, a local shell is run to execute the command.
For instructions on configuring probe parameters, see Set probe parameters on page 264.
The following parameters may be passed to the SCP Relay probe:
SNMP probe
The SNMP Probe uses the SNMP protocol to query a particular device for a list of OIDs, which are then
traversed and the results passed back to the sensor.
Discovery supports SNMP versions 1, 2c, and 3. Discovery uses version 1 and 2c by default. You must
enable support for version 3.
MID Servers support all SNMP protocol versions by default. You can set a MID Server to only support
specific versions of SNMP.
SNMP probe parameters
This list of parameters may be passed to the SNMP probes.
For instructions on configuring probe parameters, see Set probe parameters on page 264.
Some SNMP probe parameters accept comparison operators in their values:
• = (equals)
• != (does not equal)
• # (contains)
Parameters marked as for internal use only are not supported.
Note: When use_getbulk is set to true, the timeout value is for an individual GETBULK request.
Note: SNMP GETBULK is only available on devices that support SNMPv2c or SNMPv3. Requests
to devices configured to use SNMPv1 only automatically revert to using GETNEXT, with no further
configuration required.
You can view any errors associated with loading a MIB module in the agent log.
1. Navigate to MID Server > SNMP MIBs.
2. Check whether dependencies are met.
If your new MIB has dependencies on another MIB, the MIB that fills the dependency must exist
before you create your new record. Search the existing MIBs to check that the required MIBs are
already loaded. If they are not, use this procedure to also add them to the instance.
3. Click New to create a new record.
The MID Server MIB File form opens to create a new ecc_agent_mib record.
4. Use the following information to fill out the form:
• Name: The name of the MIB.
• Version: The version of the MIB.
• Source: Use this field to note where the MIB was acquired, such as a URL.
• Description: The description that appears in the ecc_agent_mib table.
• Active: This check box denotes whether the MIB module is enabled or disabled in the instance.
5. Click the Add Attachment icon in the upper right to attach the actual MIB file to the new record.
The MIB name must begin with an alphabetical character. Remaining characters must be alphanumeric,
hyphen ( - ), or underscore ( _ ). The file name must not have an extension. You can reference the
existing MIBs for examples. Using the actual name of the MIB for both the MIB record name and the
attachment name is recommended, but not required.
These MIBs are included in the base system:
CISCO-TC
CISCO-VTP-MIB
EtherLike-MIB
F5-BIGIP-COMMON-MIB
F5-BIGIP-LOCAL-MIB
F5-BIGIP-SYSTEM-MIB
FOUNDRY-SN-AGENT-MIB
FOUNDRY-SN-ROOT-MIB
FRAME-RELAY-DTE-MIB
HOST-RESOURCES-MIB
HOST-RESOURCES-TYPES
IANA-ADDRESS-FAMILY-NUMBERS-MIB
IANAifType-MIB
IP-FORWARD-MIB
IP-MIB
IPMROUTE-STD-MIB
IPV6-MIB
ISDN-MIB
LanMgr-Mib-II-MIB
MSFT-MIB
NetWare-Host-Ext-MIB
NetWare-Server-MIB
PowerNet-MIB
Printer-MIB
QMS-MIB
RFC-1212
RFC-1215
RFC1155-SMI
RFC1213-MIB
RFC1398-MIB
RFC1406-MIB
RMON2-MIB
RSVP-MIB
SNMPv2-SMI
SNMPv2-TC
UPS-MIB
SSHCommand probe
A probe using the ECC queue topic name SSHCommand executes a shell command on the target host,
and returns the resulting output to the sensor.
Discovery supports Bourne Shell (sh) and Bourne-again Shell (bash) commands. Enter shell script
commands in the probe's ECC queue name field. The shell script can use variables and file operations
supported by the target UNIX shell.
ServiceNow SSH is the SSH engine that is active by default on new instances. Customers on upgraded
instances can manually enable ServiceNow SSH for a particular probe by setting the use_snc_ssh
parameter to true. Alternatively, enable it for all probes on the MID Server by setting the MID Server
parameter mid.ssh.use_snc to true.
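The use_snc_ssh parameter is normally added on the probe form. As a minimal sketch, it could also be
created from a background script; the table names below are assumptions to verify on your instance:

// Hedged sketch: add use_snc_ssh=true to a single probe.
// Assumes the probe table [discovery_probes] and the parameter table
// [discovery_probe_parameter]; "UNIX - Active Processes" is only a
// hypothetical probe name.
var probe = new GlideRecord('discovery_probes');
if (probe.get('name', 'UNIX - Active Processes')) {
    var param = new GlideRecord('discovery_probe_parameter');
    param.initialize();
    param.probe = probe.sys_id;
    param.name = 'use_snc_ssh';
    param.value = 'true';
    param.insert();
}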
Note: To discover network devices, such as routers and switches, use SNMP credentials, not SSH
credentials.
SSHCommand parameters
SSHCommand path
The SSHCommand probe computes the default path from the following sources.
If you set the MID Server path override parameter, Discovery appends this path to the default path. If
you set the probe path parameter, Discovery uses this path instead of the default path. Discovery always
appends a user's default execution path to the MID Server and probe parameter paths.
Default Path
By default, the MID Server searches for SSH commands in the following paths:
• /usr/sbin
• /usr/bin
• /bin
• /sbin
WMIRunner probe
WMI Runner is a probe type that fetches data from Windows operating systems via the Windows
Management Instrumentation (WMI) interface.
The probe handles multiple user-specified WMI Paths to be queried, using a basic form of native WMI
query. Each field to be probed must be uniquely named (within the domain of the probe). The probe results
returned to the sensor will provide the data found for each field queried, indexed by its name.
When creating a WMI probe, the probe type must be set to WMI Probe and the ECC Queue Topic must be
set to WMIRunner.
For instructions on configuring probe parameters, see Set probe parameters on page 264.
The following parameters may be passed to the WMI Probe:
Note: The default timeout for WMI/PowerShell is 5 minutes, except for the Windows Installed
Software probe, which has a default timeout value of 15 minutes. Adding the wmi_timeout parameter
to a probe can change the default timeout of a Windows probe.
Port probes
Port probes are used in Discovery by the Shazzam probe to detect protocol activity on open ports on
devices it encounters.
When a port probe encounters a protocol in use, the Shazzam sensor checks the port probe record to
determine which classification probe to launch. The common protocols WMI, SSH, and SNMP in the base
system have priority numbers that control the order in which they are launched.
In the base system, the WMI probe is always launched first, and if it is successful on a device, no other
port probes are launched for that device. If the WMI probe is not successful, then the SSH probe is
launched to gather information on the device. If it is not successful, the SNMP probe is launched. This
method allows Discovery to classify a device correctly if the device is running more than one protocol (e.g.
SSH and SNMP).
To access the Port Probe form, navigate to Discovery Definition > Port Probes.
Name: Simple name for the port probe that reflects its function (e.g. snmp).
Description: Definition of the acronym for the protocol (e.g. ssh is Secure Shell Login).
Scanner: Shazzam techniques for exploring a port. Some of these are protocol specific, and others are
generic. For example, a WMI port probe uses a Scanner value of Generic TCP, and the snmp port probe
uses a value of SNMP.
Active: Indicates whether this port probe is enabled or disabled.
CIs: Indicates whether this port probe is enabled or disabled for discovering configuration items.
IPs: Indicates whether this port probe is enabled or disabled for discovering IP addresses.
Triggered by services: Indicates which services define the port usage. Use this setting to define
non-standard port usage and pair the port number with the protocol.
Triggers probe: Indicates which probe is triggered by the results of this port probe. This is the name of the
appropriate classify probe.
Use classification: Names the appropriate classification table, based on the protocol being explored.
Classification priority: Establishes the priority in which this port probe runs. If the first port probe fails, then
the next probe runs on the device, and so forth, until the correct data is returned. This allows for the proper
classification of a device that has two running protocols, such as SSH and SNMP. The default priorities for
the Discovery protocols are:
• 1 - WMI
• 2 - SSH
• 3 - SNMP
The order in which port probes run is prioritized by protocol. Prioritization enables the proper classification
of devices that have two protocols running, such as SSH and SNMP, without having to create a complex
Discovery behavior. Previously (in Basic discoveries), Discovery launched all port probes at once and
attempted to classify devices based on the activity returned for any protocol. The common protocols WMI,
SSH, and SNMP in the out-of-box system are now assigned configurable priority numbers, set in the
Classification priority field on the Port Probe form, that control the order in which they are launched:
• 1 - WMI
• 2 - SSH
• 3 - SNMP
The WMI probe is launched first, and if it is successful on a device, no other port probes are launched for
that device. If the WMI probe is not successful, the SSH probe is launched. The SNMP probe is the last to
launch, after the other port probes have failed.
The WMI port probe runs first and then the WinRM probe. If WMI or WinRM activity is detected on a
device, the Windows - Classify probe is launched (and no other port probes). If no WMI or WinRM activity
is detected, Shazzam runs the SSH probe. If Shazzam successfully detects SSH activity, the UNIX
classifier is launched. The SNMP port probe is launched only if no WMI or SSH activity is detected on a
device. This ensures that the correct classifier probe is launched and the correct device data is returned.
Configure Shazzam probe parameters
When you run Discovery, the Shazzam probe finds your active network devices by scanning specified
ports on specified IP address ranges.
Role required: admin
You control the behavior of individual Shazzam probes using basic and advanced parameters.
For instructions on configuring probe parameters, see Set probe parameters on page 264.
1. Navigate to Discovery Definition > Probes.
2. Select Shazzam.
3. Add or edit parameters in the Probe Parameters related list.
4. Configure the basic Shazzam parameters.
These parameters are defined in the config.xml file on the MID Server, but you can edit the values in
the Shazzam probe record as well. Changes to specific parameters that could disconnect you from the
MID Server are prohibited in the probe record and can only be made in the configuration file.
Most probes require access to Windows classes, properties, and registry entries. Certain probes also
require additional access to Windows directories and resources. Security policies vary by organization, so
there is no one specific role or right to grant. Ensure that the Windows user has local admin permission
for these Windows components.
Windows Classes
Most Windows probes access Windows classes and properties contained in those classes. Microsoft
provides a list of classes and properties, and information on setting permissions to view these classes.
Some Windows classes do not specify a file namespace path. ServiceNow uses root\cimv2\<Class>
by default if an explicit path is not specified.
Several Windows Registry entries are available for Discovery Windows probes.
Important: You need an advanced knowledge of scripting to modify probes or their associated
sensors. Many existing probes provide parameters that you can set, rather than modifying the
probe itself. See Set probe parameters on page 264 for more information.
Add the probe to the Triggers Probe related list on the appropriate classifier. See Create a Discovery CI
classification on page 91 for a description of the fields and related lists on the classifier form.
Discovery uses the Windows - Active Connections probe to access active connection information. The
application dependency mapping feature requires this probe to function.
Discovery uses these probes to access application profile information. The application profile discovery
feature requires these probes to function.
Attention: VMware workstation Discovery is deprecated in the Helsinki release. This probe and its
sensor will be removed from the product in a future release.
Discovery uses the Windows - Get VMware Workstation probe to access information about VMware
virtual machines installed on Windows.
Discovering MSSQL
Discovery uses the Windows - MSSQL probe to access information about Microsoft SQL Server installed
on Windows.
Discovery sensors
Every probe in Discovery must have a corresponding sensor to process the data returned.
For example, if incoming data is the result of a WMI probe, then the WMI sensor is triggered to process the
payload.
Note: If you create a multiprobe, you must create a multisensor to process the data returned from
this probe. For details, see Multiprobes and Multisensors.
Navigate to Discovery > Discovery Definition > Sensors and edit or create a sensor.
For example, a script on the sensor form can set a parameter for a probe that it triggers:
g_probe_parameters['node_port'] = 16001;
A sensor and its corresponding probe must have the same major version. It is recommended that they also
have the same minor version. This version matching ensures that the data sent back from the probe is
understood and properly processed by the sensor. All members of a multi-probe bundle must have the
same major and minor version.
By default, Discovery tracks major version mismatches and displays version mismatch errors in the Active
Discovery Errors section on the Discovery Dashboard and in the Discovery Log. You can control whether
or not Discovery tracks minor version mismatches by setting the Warn on Minor Version Mismatch
(glide.discovery.warn_minor_version) Discovery property. Minor version mismatches are tracked
in the Discovery log, but are not displayed on the Discovery Dashboard.
Versions for multi-probes and multi-sensors are checked as follows:
• The versions of the individual probes contained in the multi-probe are compared with the Responds to
Probes scripts that process their data.
• The versions of the Responds to Probes scripts and the main multi-sensor script are compared.
If a probe and its corresponding sensor do not have the same major version, a sensor does not process
information during a discovery and sends error messages to the log file. Errors also show up on the
Discovery Dashboard when you run a discovery job. If the major version is the same, but the minor version
is not, a sensor processes information during a discovery.
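For example, a minimal sketch of enabling minor-version mismatch tracking from a background script,
using the property named above:

// Warn on minor version mismatches between probes and sensors.
gs.setProperty('glide.discovery.warn_minor_version', 'true');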
See the Discovery Probe and Sensor Versioning video for a tutorial on how to resolve version conflicts.
4. In the Includes probes related list, add the probes you want to include in the multiprobe.
5. Click Save.
Note: † This probe requires the installation of a command line tool from Oracle called SNEEP. To
download and install this tool, log in to the Oracle website. After this tool is installed, the Solaris
- Serial Number probe runs automatically when Discovery detects a Solaris device. For Fujitsu
PRIMEPOWER devices, you must run this probe with root credentials.
3. In the MultiProbe record, click New in the Includes probes related list.
4. Select a simple probe from the collection list in the left column and move it into the included list.
5. Click Save.
2. Click New.
3. Complete the form using the following settings:
• Sensor type: MultiSensor.
• ECC queue topic: MultiSensor.
• Reacts to probe: Select the probe whose payload this sensor must process.
• Sensor type: Select one of these options.
• Javascript: Returned data from the probe is processed in the sensor itself, outside the
application, and is visible to the user. This is the most common sensor type.
• XML: The XML data from the probe is broken into pieces. Some pieces can be used to launch
other probes that the original sensor needs to complete all the necessary information about a
device.
7. Click Save.
Example custom Discovery probe and sensor: populate a CI with text file
values
Use this custom Discovery probe if you need to read a text file from a Windows computer and populate a
CI in the CMDB with the values from the file.
In this example the user wanted to read files created by BGinfo.
Note: When you have completed the probe and sensor, place the probe in the appropriate
Windows classifier at Discovery Definition > CI Classification > Windows.
3. Right-click in the header bar and select Save from the context menu.
4. Select the Probe Parameters tab in the Probe form, and then click New.
5. Enter WMI_GetFiles.js as the Name of this parameter.
6. Copy the script below into the Script field and edit as needed.
7. Click Submit.
//
// Use ServiceNow WMIAPI to gather stats
//
var CMD_RETRIES = 3;
var scanner = getScanner();
if (scanner) {
    var output = "";
    // Retry the command a few times in case the first execution fails
    for (var i = 0; i < CMD_RETRIES; i++) {
        output = scanner.winExec("%SystemRoot%\\system32\\cmd.exe /C type \"C:\\Information Systems\\BgInfo\\*.txt\"");
        if (output)
            break;
    }
    // Attach the raw command output to the probe result
    scanner.appendToRoot("output", output);
}
9. Click Submit.
process: function(result) {
    this.parseOutput(result.output);
    this.update(this.data);
},
parseOutput: function(output) {
    var currentFile;
    var files = {};
    // Probe output may arrive wrapped in a <wmi> XML envelope; unwrap it
    if (output.startsWith("<wmi")) {
        var bgout = new XMLHelper(output).toObject();
        if (!bgout)
            return;
        output = bgout.output;
    }
    // Reconstructed section (assumption): walk the output line by line. A line
    // that names one of the expected .txt files starts a new entry; every
    // other line is appended to the file currently being read.
    var lines = output.split("\n");
    for (var i = 0; i < lines.length; i++) {
        var newLine = lines[i];
        if (newLine.match(/\.txt\s*$/)) { // assumption: file-name marker lines
            currentFile = newLine.trim();
            files[currentFile] = "";
            continue;
        }
        files[currentFile] += (files[currentFile] ? "\n" : "") + newLine;
    }
    this.data['u_jack_id'] = files['JackID.txt'];
    this.data['warranty_expiration'] = files['Warranty.txt'];
    this.data['po_number'] = files['Ponum.txt'];
},
type: "DiscoverySensor"
});
Operating system level virtualization (OSLV): OSLV discovery finds image and container information from
OSLV engines, including Docker. See Operating-system-level virtualization (OSLV) discovery on page 473.
Discovering computers
Discovery identifies the following computers, clusters, and virtual machines.
Field label | Table | Column | Source
Name | cmdb_ci_db_hbase_instance | name | HBase Instance@hostname
Root Directory | cmdb_ci_db_hbase_instance | root_dir | hbase-site.xml
TCP port | cmdb_ci_db_hbase_instance | tcp_port | running process
Site XML | cmdb_ci_db_hbase_instance | site_xml | hbase-site.xml
Version | cmdb_ci_db_hbase_instance | version | HBase shell
HBase Home | cmdb_ci_db_hbase_instance | hbase_home | running process
ZooKeeper Quorum | cmdb_ci_db_hbase_instance | zookeeper | hbase-site.xml
Requirements
You must provide SSH credentials for the systems you want to discover.
Prerequisites
Discovery stores data in the Running Process [cmdb_running_process] table with command line
parameters truncated to 80 characters. This truncation can cause multiple applications to be merged into
one CI. To get the full command line and prevent this issue, run pargs -a and parse the result.
Discovery stores information about Solaris computers in the following tables and fields.
* To discover Fujitsu PRIMEPOWER devices, you must install Oracle SNEEP and run Solaris discovery
with root credentials.
In the following example, a Solaris global zone contains two local zones: zone01 and zone02. Each local
zone is represented by a physical Solaris CI record and a Virtual Machine Instance record. Each of the
local zones is tied to a Zone Server, demonstrating how virtualization relates to the global zone (mmp1).
Use cases
The TCP connection and process information for local zone servers must be collected by running
commands on their parent global zone. The relationship path between the local and global zone physical
machines must be established before TCP connection and process information for local zone servers can
be collected.
Case 1: Global zone discovered first.
• The system creates the Solaris server CI for the global zone.
• Discovery detects the local zones, creates a hypervisor zone server record, and creates a virtual
machine instance record for each Solaris device in the local zone.
• Discovery creates the relationship between the hypervisor record and the VM instance record.
Case 3: Global zone discovered after creation of local zone Solaris server CIs.
• Global zone Discovery detects local zones.
• Discovery creates a hypervisor zone server record, and creates a virtual machine instance record for
each Solaris device in the local zone.
• Discovery creates the relationship between the hypervisor record and the VM instance record. In
addition, it creates the relationship between the physical local zone VM and its virtual machine instance
record.
• The global zone runs the Solaris - ADM probe on itself, filtering by the local zone, and updates the
physical local zone VMs with this data.
Case 4: The relationship path between physical local and global zone machines is established.
Subsequent discoveries of the global zone refresh the TCP connection and process information for the
contained local zones.
When the system discovers a global zone, the Solaris - Zones & ADM Launcher probe triggers the Solaris
- ADM probe to explore the global zone and each local zone found. Because the Solaris - ADM probe must
run on the global zone to detect TCP connection and process information from its local zones, you might
see multiple ECC queue records that appear identical.
Upon examining the payload, however, you will see that each probe is actually targeting a different zone CI
to filter on and update.
Note: If you are discovering SUSE Linux hosts for vCenter appliance version 6.0 and earlier, there
are restrictions when using SSH. Please review VMWare documentation.
See vCenter data collected on page 301 for a description of the VMware architecture and component
relationships.
In addition to finding vCenter data through the standard discovery process, Discovery can also update the
CMDB by detecting vCenter events through a MID Server extension called the vCenter event collector.
See vCenter event collector on page 180 for details.
Windows credentials are not necessary for vCenter Discovery when valid VMware credentials are used.
Note:
The VMware credentials must have the read-only role in vCenter.
Upgrade changes
In upgraded systems, the vCenter process classifier and the vmapp port probe are configured to trigger the
VMware - vCenter Datacenters probe. This probe then triggers the probes that discover individual vCenter
objects such as hosts, storage, and datastores. The legacy probe, VMware - vCenter, is still in the system
but is not triggered to run in the updated instance.
vCenter probe parameters allow you to disable the probes for the objects you are not interested in
discovering. You can also reduce the size of a probe's payload by specifying a page size.
Important: Before disabling a probe, be aware of any dependencies that probe might have. If the
probe you disable triggers another probe, the dependent probe is also disabled and cannot collect
data.
• Page size: Page size parameters control the number of CIs to discover with a single probe. Use this
parameter to limit payload size by reducing the number of vCenter elements discovered at a time by
any probe. The page size expressed in parentheses is the default in the base system.
• Debug: Set the debug parameter in the VMWare - vCenter Datacenters probe to allow debugging for all
the vCenter probes. Debugging returns the raw vCenter data in each probe payload.
Probe Parameters
If you customized the VMware - vCenter probe in an earlier version, your changes are not applied,
because this probe is ignored starting with the Istanbul release. To retain the customized behavior, modify
the appropriate probe in the upgraded system. If you prefer, you can reconfigure your instance to launch
the legacy probe that contains your customizations.
To revert to the VMware - vCenter probe:
1. Navigate to Discovery Definition > Port Probes and select vmapp.
2. In the vmapp record, select VMware - vCenter in the Triggers probe field, and then click Update.
3. Navigate to Discovery Definition > CI Classification > Process and select vCenter.
4. In the Triggers probes related list, change the condition for VMwareProbe-VMware - vCenter to true
and the condition for VMWareProbe-VMWare - vCenter Datacenters to false.
5. Click Update.
After classifying vCenter, Discovery launches the VMware - vCenter Datacenters probe, which in turn
launches specific probes that return information about ESX machines, virtual machines, and other vCenter
objects.
Note: The vmapp port probe is also configured to launch the VMware - vCenter Datacenters
probe.
If you use a domain account to access vCenter, specify the domain with the user name in the credential
record in one of the supported formats such as Domain\UserName.
When a vCenter CI, such as a virtual machine, is removed, the ServiceNow instance marks it as "stale" in
the CMDB, using either of these procedures:
• When Discovery runs, it creates an audit record in the CMDB Health Result [cmdb_health_result] table
for the missing CI and marks the CI "stale".
• If the ServiceNow instance is configured to collect vCenter events, the system can also create
a "stale" audit record for the CI in the CMDB Health Result [cmdb_health_result] table from the
VmRemovedEvent event, without having to run Discovery.
Note: When the Staleness setting is configured, the dependency view (BSM map) grays out stale
CIs in its relationship diagram to indicate that they were removed from vCenter.
You have the option of creating a CMDB remediation rule to automatically execute a remediation workflow
that can, for example, delete stale CIs. For more information on stale CIs, see CMDB Health Metrics.
Configure an alternate port for vCenter
You can specify an alternate port for the VMWare - vCenter Datacenters probe.
Role required: admin
By default, the VMWare - vCenter Datacenters probe runs on port 443, which is the standard port for the
https protocol. The port probes for vCenter run on these ports:
• vmapp6_https: 9443
• vmapp_https: 5480
2. Specify an alternate port number for vCenter in the VMWare - vCenter Datacenters probe.
a) Navigate to Discovery > Probes.
b) Open the VMware - vCenter Datacenters probe record.
c) In the Probe Parameters related list, click New.
d) In the Name field of the Probe parameter record, enter vcenter_port.
e) In the Value field, enter your alternate port number.
f) Click Submit.
vCenter data
Discovery uses multiple vCenter probes to collect this data from vCenter.
Name name
Full name fullname
Instance UUID instance_uuid
URL url
Effective CPU effectivecpu
Name name
Managed object reference ID morid
Top level folder for hosts host_morid
Top level folder for VMs folder_morid
Accessible accessible
Managed object reference ID morid
Object ID object_id
Name name
CPUs cpus
Disks disks
Disks size (GB) disks_size
Guest ID guest_id
Memory (MB) memory
MAC Address mac_address
BIOS UUID bios_uuid
Correlation ID correlation_id
VM Instance UUID vm_instance_uuid
Image path image_path
State state
vCenter Instance UUID vcenter_instance_uuid
vCenter Reference vcenter_ref
Name name
CPU expandable cpu_expandable
CPU limit (MHz) cpu_limit_mhz
CPU reserved (MHz) cpu_reserved_mhz
CPU shares cpu_shares
Full path fullpath
Memory expandable mem_expandable
Memory limit (MB) mem_limit_mb
Memory reserved (MB) mem_reserved_mb
Memory shares mem_shares
Owner owner
Owner Managed Object Reference ID owner_morid
Name name
Accessible accessible
Capacity (GB) capacity
Free space (GB) freespace
Type type
URL url
vCenter relationships
Discovery automatically creates relationships for vCenter components using data from a key class.
Subsequent Discoveries use the same key class to automatically validate and remove relationships that
are no longer valid.
vCenter CIs can be members of folders or clusters, which affect how Discovery creates their relationships.
• If a CI is in a folder, Discovery creates a relationship between that CI and the folder. If that CI is not in a
folder, Discovery creates the relationship between the CI and the datacenter. These vCenter CIs can be
in a folder:
• VM Instance
• VM Template
• vCenter Network
• Datastore
• vCenter Folder
• vCenter Cluster
• If an ESX server is in a cluster, Discovery creates a relationship between the ESX server and the
cluster. If an ESX server is not a member of a cluster, then Discovery creates a relationship to the
datacenter.
• If a resource pool is in a cluster, Discovery creates a relationship between the resource pool and the
cluster. If the resource pool is not a member of a cluster, then Discovery creates a relationship to the
ESX server.
Datastores
Discovery uses the VMWare - vCenter Datastores probe to collect this data from datastores.
HostMounts
Discovery uses both the VMWare - vCenter ESX Hosts and VMWare - vCenter Datastores probes to
collect datastore host mount data.
Important: ESX server discovery is done through vCenter. Do not specify the IP address of
the ESX server in a Discovery Schedule. Instead, discover the vCenter through the Discovery
Schedule.
For a description of the VMware architecture and component relationships, see vCenter data collected on
page 301.
For a discussion of how Discovery collects information on datastores and establishes the relationships
between the ESX hosts and the storage disks attached to the datastores, see Datastore Discovery on page
313.
Required Roles
Users with the itil and asset roles can access ESX and ESXi configuration item (CI) records. To run
discovery on vCenter servers, users must have the discovery_admin role.
Credentials required
To run a complete Discovery of vCenter/ESX servers, you need vCenter credentials. If the vmapp port
probe is disabled, you must use Windows credentials to access the Windows host on which the vCenter
server runs.
After running the vCenter classifier, Discovery launches the VMware - vCenter Datacenters probe, which
launches the probes that explore the ESX server. For the complete list of vCenter probes, see List of
Discovery probes on page 214.
Data collected
Basic server data from ESX hosts is collected by the VMware - vCenter ESX Hosts probe.
Relationships
Resource pools are configured in vCenter and define the maximum amount of resources that templates
using that pool can consume. An ESX server property enables resource pools to expand when necessary
if the ESX server has additional resources to spare. The Name and Owner fields of each resource pool on
the ESX server must be configured in the ESX Resource Pool [cmdb_ci_esx_resource_pool] table. When
Orchestration for VMware executes its manual provisioning tasks, the provisioner must select the proper
resource pool for the virtual server requested. Discovery finds resource pools on ESX machines and
populates the fields on the ESX Resource Pool form automatically. For more information, see Configure
ESX resource pools on page 311.
ESX resource pool support requires the Orchestration - VMware Support plugin.
Note: Ensure that vCenter and the ESX server have been fully configured, including the creation of
the templates and resource pools.
This configuration enables the automatic cloning of virtual machines by an ESX server managed by
vCenter. For information about data collected by Discovery on vCenter, see vCenter data collected on
page 301.
VMware on Windows or Linux server without vCenter - Legacy
In the basic VMware system, the VMware application runs on a Windows or Linux host machine.
Attention: The Discovery of VMware Workstation is not supported in the Istanbul release. VMware
workstation probes and sensors are not included in new instances of Istanbul. However, customers
who upgrade to Istanbul from an earlier release can continue to use those probes and sensors to
discover VMware Workstation.
This system can clone instances from templates, but cannot be automated. The relationships between
VMware components for this type of installation are shown in the following diagram:
Component Relationships
Datastore Discovery
A datastore is a storage object for virtual machines that are hosted on an ESX server.
A datastore is a collection of one or more physical disks, such as an iSCSI disk, and can be used by more
than one ESX host. However, a physical disk can only connect to one datastore. Because ESX hosts can
share datastores, it is easy to move virtual machines between hosts that have a common datastore.
Note: From the perspective of a virtual machine attached to a datastore, storage is provided by a
single disk.
For information about the tables used by Discovery to store the data for physical disks and their
relationship to datastores and ESX hosts, see ESX server discovery on page 307.
In this example configuration, two ESX hosts share a common datastore that uses different types of
storage.
Relationships
ServiceNow provides tables that contain the relationships between an ESX host and its datastore and the
specific disks to which the datastore is connected.
ServiceNow Discovery establishes the relationships between a datastore, the disks attached to the
datastore, and the ESX server that hosts the virtual machines using that datastore. From the perspective
of the ESX host, iSCSI and fibre channel disks connected to the datastore are treated as physical disks.
Discovery does not show the direct relationship of the storage disks to the ESX host.
Note: Storage can be hosted on computers that are not discovered by Discovery. ESX hosts are
discovered through vCenter, and storage is discovered separately through CIM. The system can
only establish the relationship between the two when storage is discovered before ESX.
Windows discovery
Discovery identifies and classifies information about Windows computers.
Note: For Fibre Channel discovery on a Windows 2008 host, the Microsoft Fibre Channel
Information Tool (fcinfo.exe) must be installed on that machine. The fcinfo executable
should be available on the environment path. The Microsoft Fibre Channel Information Tool is
available for download at http://www.microsoft.com.
The Windows classifier supports the following Windows workstation operating systems:
• Windows 7
• Windows 8
• Windows 8.1
• Windows 10
Configure Windows credentials. The user configured in the credentials must have local admin access to
the Windows machine.
The credential that you configure must have the following:
• Access to the WMI service to the current namespace and sub-namespaces.
• Access to the Powershell service.
• Membership in the Distributed COM Users local security group.
* Core counts and threads per core might not be accurate, due to issues with Microsoft reporting.
Discovery can find software that has been installed on a Windows machine by looking at the Windows
Registry. Discovery can find the following attributes of discovered software (a query sketch follows this
list):
• Product Name, which is a combination of name and version, such as Windows Imaging Component 3.0.
• Name, which is the name of the product only without the version.
• Version
• Uninstall String, such as C:\Program Files\Notepad++\uninstall.exe
• Part of, which is the update that this software is a part of, such as Windows Internet Explorer 8 - Software U.
• Install Date
• Installed on, which is the name of the asset it is installed on.
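As a hedged sketch, the discovered attributes can be reviewed with a query such as this one; the Software
Package [cmdb_ci_spkg] table and the sample product name are assumptions:

// List discovered software records by name and version.
var sw = new GlideRecord('cmdb_ci_spkg');
sw.addQuery('name', 'CONTAINS', 'Notepad++');   // hypothetical product filter
sw.query();
while (sw.next()) {
    gs.info(sw.name + ' ' + sw.version);
}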
Note: Discovery collects cluster resources based on the MSCluster_ResourceType WMI class. To
see a list of default cluster resources, see Resource Types. To understand how cluster resources
are related to process classifiers, see Relate the process classifier to Windows cluster resources on
page 396.
Cluster relationships
Discovery creates a CI Relationship [cmdb_rel_ci] record for each node, using these relationships:
• Cluster of::Cluster: Relationship between the cluster nodes and the cluster. Service Mapping uses this
relationship to map the Windows cluster and its nodes.
• Hosted on::Hosts: Relationship between the cluster nodes and the Windows server.
Cluster references
Note: References are maintained in Discovery to provide backward compatibility for systems that
use them.
Hyper-V discovery
Microsoft Hyper-V is a virtualization application that is included with the Windows Server 2008 operating
system.
About Hyper-V
A physical machine running Hyper-V is divided into partitions (virtual machines), including a parent partition
running Windows Server 2008 and child partitions running supported guests. The parent partition manages
the virtual machines with the Hyper-V Manager application. On Windows Server 2008 this is done through
the Microsoft Management Console (MMC) service. On Windows 7, use the Remote Server Admin tools.
Hyper-V supports the following functionality:
• Failover clustering: Failover is managed with Failover Cluster Manager.
• Live migration: Virtual machines can be moved between failover cluster nodes without bringing down
the virtual machine.
Required roles
Users with the itil and asset roles can access Hyper-V configuration item (CI) records. To run discovery on
Hyper-V servers, users must have the discovery_admin role.
Supported versions
Discovery of Hyper-V instances running on Windows Server 2016 is not supported.
Credentials
Configure Windows credentials with Domain administrator rights. You should also Enable PowerShell for
the MID Server used to discover Hyper-V servers and instances.
Hyper-V architecture
Table Purpose
Hyper-V Server [cmdb_ci_hyper_v_server] Contains data about the physical machine running
the Hyper-V server. This table has a reference
relationship with the existing Windows Server
[cmdb_ci_win_server] table.
Hyper-V Cluster [cmdb_ci_hyper_v_cluster] Contains data about Hyper-V clusters. This table
has a reference relationship with the existing
Windows Cluster [cmdb_ci_win_cluster] table.
The ServiceNow ITSA Suite modifies these tables for use with multiple virtualization products:
Table Purpose
Virtual Machine Instance [cmdb_ci_vm_instance] Contains data on all discovered virtual machine
instances.
Virtual Machine Object [cmdb_ci_vm_object] Contains data about various objects associated
with a Hyper-V server, such as partitions, networks,
resource pools, and clusters.
* Discovery can only return this information if the virtual machine is running.
3. If you import the Hyper-V clone using the Move or restore selection, be sure to delete the original
image from the Hyper-V server.
When Discovery encounters two virtual machines with equivalent serial numbers, it creates only one
configuration item (CI).
• Windows NT Server
• Windows 2000, 2003, 2008, and 2012 Server
• Windows XP and Vista
• Windows 7, 8, and 10
Data collected
Help the Help Desk stores information about Windows computers in the following tables and fields.
* Core counts and threads per core might not be accurate, due to issues with Microsoft reporting. See
http://msdn.microsoft.com/en-us/library/windows/desktop/aa394373(v=vs.85).aspx for details.
Network devices
Discovery identifies the following network devices.
Important: Discovery treats hardware load balancers as network devices and attempts to discover
them primarily using SNMP. If a load balancer in your system running on a Linux host has SNMP
and SSH ports open, Discovery might classify it based on the SSH port, which has classification
priority over SNMP. To ensure that Discovery properly classifies your hardware load balancers,
create a Discovery behavior for load balancers that includes SNMP but not SSH. Software load
balancers are treated as applications.
Several probes are available to discover load balancers, applications, interfaces, and pools. See Load
balancer fields and probes on page 328 for more information.
Load balancer fields and probes
Discovery stores load balancer information in the following fields.
Note: By default, the system uses the discovered IP address of a load balancer for the CI record.
This can be the management IP created for the device that is used in the Discovery schedule. For
instructions on how to force Discovery to use the IP address of the load balancer's NIC rather than
that of a management IP, see IP address selection properties on page 66.
Table 87: Fields for Load Balancer Pool Member [cmdb_ci_lb_pool_member] data
* Not visible on the form by default. Customize the form to add this field.
Discovery also collects data on Apache web server load balancing modules using SSH. See Apache mod_jk
and mod_proxy discovery on page 335 for more information on probes for Apache web server data. For
information on the tables, fields, and data sources that Discovery populates for Apache web servers, see
Data collected by Discovery on Apache web servers on page 375.
Load Balancer: A10
Discovery identifies and classifies information about A10 load balancers. The discovery of the A10 load
balancer is supported by SNMP.
Important: Discovery treats hardware load balancers as network devices and attempts to discover
them primarily using SNMP. If a load balancer in your system running on a Linux host has SNMP
and SSH ports open, Discovery might classify it based on the SSH port, which has classification
priority over SNMP. To ensure that Discovery properly classifies your hardware load balancers,
create a Discovery behavior for load balancers that includes SNMP but not SSH. Software load
balancers are treated as applications.
Data sources
Discovery uses the following SNMP MIBs to collect data for A10 load balancers:
• A10 COMMONS MIB file for common data
• A10 AX MIB file for extras
Data collected
The A10 load balancer follows the generic model of a load balancer and its components. The abstract
class is Load Balancer [cmdb_ci_lb], and the implementation class, extended from cmdb_ci_lb, is
A10 [cmdb_ci_lb_a10]. The load balancer components are modeled as follows:
Relationships created
An A10 Discovery creates relationships between the application and the load balance service if the
application CI exists.
In this example from the ServiceNow dependency view (BSM map), an A10 load balancer is represented
by these elements:
• Vip1: Virtual IP address that is the entry point for all business services attempting to contact the load
balancer.
• vthunder: Actual A10 load balancer CI created by Discovery.
• webApp-group1: Group containing the web server pool members IBM web01 and IBM web02.
Discovery creates these relationships from the perspective of the load balancer group:
If there is a match on one of these criteria, a record is created in the Web Server [cmdb_ci_web_server]
table if one does not already exist for that running process.
The following probes are triggered after classification:
The sensor processing of the Apache – Get Configuration probe identifies whether either the mod_jk or
mod_proxy modules are present and triggers the appropriate probe.
Apache – Get JK Module: If the mod_jk module is running as a load balancer on the server, the sensor
of this probe populates the information in the Load Balancer Service [cmdb_ci_lb_service], Load Balancer
Pool [cmdb_ci_lb_pool], and Load Balancer Pool Member [cmdb_ci_lb_pool_member] tables. Commands
used: echo, sed, httpd, cut, grep, egrep (within the Bourne shell script).
Apache – Get Proxy Module: If the mod_proxy module is running as a load balancer on the server, the
sensor of this probe populates the information in the Load Balancer Service [cmdb_ci_lb_service], Load
Balancer Pool [cmdb_ci_lb_pool], and Load Balancer Pool Member [cmdb_ci_lb_pool_member] tables.
Commands used: grep, egrep (within the Bourne shell script).
In addition to data population, the following relationships are created in the CI Relationship [cmdb_rel_ci]
table:
• The records in the cmdb_ci_lb_appl table run on the cmdb_ci_web_server table records.
• The records in the cmdb_ci_lb_service table use the cmdb_ci_lb_pool table records.
• The records in the cmdb_ci_pool table are used by the cmdb_ci_service table record.
• The records in the cmdb_ci_pool table are members of the cmdb_ci_pool_member table.
• The records in the cmdb_ci_pool_member table are members of the cmdb_ci_pool table.
Requirements
Important: Discovery treats hardware load balancers as network devices and attempts to discover
them primarily using SNMP. If a load balancer in your system running on a Linux host has SNMP
and SSH ports open, Discovery might classify it based on the SSH port, which has classification
priority over SNMP. To ensure that Discovery properly classifies your hardware load balancers,
create a Discovery behavior for load balancers that includes SNMP but not SSH. Software load
balancers are treated as applications.
Consider the following requirements for discovering Citrix Netscaler load balancers:
• The Citrix Netscaler device is installed and running on the network.
• The Citrix Netscaler probes require SNMP credentials to run commands.
Probes
The SNMP - Classify probe detects Citrix Netscaler load balancers with the following SNMP MIB: NS-
ROOT-MIB
The following probes are triggered after classification:
SNMP - Netscaler - Identity: MultiProbe that queries for the serial number and other identifying information.
SNMP - Netscaler - Identity - Serial: Retrieves the Netscaler chassis serial number, which is globally
unique for this vendor.
SNMP - Netscaler - System: Collects information on the Netscaler, including pools, services, and VLANs.
The Citrix Netscaler load balancer model represents a generic load balancer and its components. The
abstract class is stored in the Load Balancer [cmdb_ci_lb] table. The implementation class, extended from
Load Balancer, is stored in the Citrix Netscalers [cmdb_ci_lb_netscaler] table.
Note: See Load balancer fields and probes on page 328 for a list of the fields that are populated
by these probes.
Note:
Discovery does not gather the load balancer interface information that is necessary to map the
network path in Service Mapping. Service maps might not appear correct.
CI relationships
A Citrix Netscaler discovery creates relationships between the application and the load balance service
as well as groups and group members if the application CI exists. However, if the application CI was not
discovered, the SNMP - Netscaler - System sensor will map between the computer and the load balance
service instead.
In this example from the ServiceNow dependency view (BSM map), a Netscaler load balancer is
represented by these elements:
• netscaler: Actual Netscaler load balancer CI created by Discovery.
• VIP1: Virtual IP address that is the entry point for all business services attempting to contact the load
balancer.
• New_Service_Group: Group containing the web server pool members.
Important: Discovery treats hardware load balancers as network devices and attempts to discover
them primarily using SNMP. If a load balancer in your system running on a Linux host has SNMP
and SSH ports open, Discovery might classify it based on the SSH port, which has classification
priority over SNMP. To ensure that Discovery properly classifies your hardware load balancers,
create a Discovery behavior for load balancers that includes SNMP but not SSH. Software load
balancers are treated as applications.
Note: You can download VMware images of BIG-IP with a free 90-day key from https://
www.f5.com/trial.
Load Balancer: HAProxy
HAProxy is an open-source load balancer that can manage any TCP service. It is particularly suited for
HTTP load balancing because it supports session persistence and Layer 7 processing. Discovery supports
HAProxy for HTTP load balancing. TCP load-balancing is not supported.
Consider the following requirements for discovering the HAProxy:
• The HAProxy software is installed and running on a Linux server.
• The MID Server is deployed to explore the server and the MID Server has access to the server
HAProxy configuration file.
• The configuration probe checks for the haproxy.cfg file using one of the following methods:
• Using the -f parameter from the HAProxy process output.
• Using the default /etc/haproxy/haproxy.cfg path.
• The HAProxy probes require credentials and execute privileges to run commands.
Discovery uses the Unix - Active Processes probe to identify an HAProxy load balancer when the name
of the process is haproxy. If this criterion matches, a record is created in the HAProxy Load Balancers
[cmdb_ci_lb_haproxy] table if one does not already exist for that running process.
The following probes are triggered after classification:
In addition to populating the data, the following relationship records are created in the CI Relationships
[cmdb_rel_ci] table:
• The records in the cmdb_ci_lb_appl table run on the cmdb_ci_web_server table records.
• The records in the cmdb_ci_lb_service table use the cmdb_ci_lb_pool table records.
• The records in the cmdb_ci_pool table are used by the cmdb_ci_service table records.
• The records in the cmdb_ci_pool table are members of the cmdb_ci_pool_member table records.
• The records in the cmdb_ci_pool_member table are members of the cmdb_ci_pool table records.
The Nginx Process Classifier detects a running process that matches the following criteria during the
exploration of a UNIX server:
• The name of the process starts with nginx.
• The name of the process contains master.
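Expressed as script, the two classifier conditions above might look like the following sketch; the isNginxMasterProcess helper and its processName input are illustrative assumptions rather than the shipped classifier.

function isNginxMasterProcess(processName) {
    // Condition 1: the process name starts with nginx
    // Condition 2: the process name contains master
    return processName.indexOf('nginx') === 0 &&
           processName.indexOf('master') !== -1;
}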
In addition to populating the data, the following relationship records are created in the CI Relationships
[cmdb_rel_ci] table:
• The records in the cmdb_ci_web_server table run on the cmdb_ci_linux_server table records.
• The records in the cmdb_ci_lb_service table use the cmdb_ci_lb_pool table records.
• The records in the cmdb_ci_pool table are used by the cmdb_ci_service table records.
• The records in the cmdb_ci_pool table are members of the cmdb_ci_pool_member table records.
• The records in the cmdb_ci_pool_member table are members of the cmdb_ci_pool table records.
Classifiers
Discovery uses the Cisco GSS Load Balancer classifier, which contains the condition sysdescr contains
Global Site Selector, to classify the device.
Probes
Discovery uses several probes, including an SNMP probe to discover the specific GSS device and a Serial
Number probe to identify the serial number of the device. See the table below for the list of related probes.
Warning: You must have SNMP and SSH credentials configured correctly for the GSS load
balancers on your network.
Tables
Discovery creates a record for each GSS device in the Cisco GSS [cmdb_ci_lb_cisco_gss] table. Domains
are populated in the DNS Names [cmdb_ci_dns_name] table. Host names and IP addresses are stored in
the IP Address to DNS Name [cmdb_ip_address_dns_name] table. Service Mapping uses this information.
You can see the DNS name information on the DNS Names for CIs related list of the Load Balancer form.
Note: The Cisco GSS pattern is available by default to use with the Pattern Designer.
Classifiers
Discovery uses the Cisco CSS Load Balancer classifier, which contains the OID classification
1.3.6.1.4.1.9.9.368.4.5.
Probes
Tables
Discovery creates a record for each CSS device in the Cisco CSS [cmdb_ci_lb_cisco_css] table, which
includes the device's Name, Model, Serial Number, Manufacturer, and NIC (ip_address). It also creates a
record for each service in the Load Balancer Services [cmdb_ci_lb_service] table.
Layer 2 Discovery
Discovery can detect the physical connections, known as Layer 2, between network devices.
Discovery uses multiple probes to gather information about network adapters and their Layer 2
connections.
For example, if Discovery finds a switch in a network, it triggers the SNMP - Switch - Vlan probe and the
SNMP - Network - ARPTable probe. For every Vlan Discovery finds, it triggers various switch probes. If
a switch has routing capabilities, Discovery triggers the SNMP - Routing probe to collect network adapter
information in the Network Adapter [cmdb_ci_network_adapter] table. If Discovery finds a server, it triggers
the appropriate Address Resolution Protocol (ARP) probe for that operating system. Discovery also
supports the use of patterns, such as the Network Switch and Network Router patterns, which are available
by default in Discovery. See Router and switch discovery on page 353 for more information.
During the discovery of a network device, Discovery creates records in the Router Interface
[dscy_router_interface] table and the Switchport [dscy_switchport] table. This information contains network
adapter information for that device. For SNMP-enabled devices, Discovery gathers the information from a
routing probe during the exploration phase. The Layer 2 protocol cache probe runs next to collect neighbor
data from the device.
As Discovery gathers network information from the probes on a device, it populates the Device Neighbors
[discovery_device_neighbors] table. In some cases, the neighbors of this device might not yet be known to
the instance. The neighbor's interface cannot be resolved to a record until Discovery eventually finds the
neighbor's side of the relationship. When Discovery runs on the neighboring device, Discovery completes
the information for the neighbor's interface for the original reporting device.
Discovery can retrieve neighbor data from these caches on a network device:
• Cisco Discovery Protocol (CDP) : Cache on Cisco devices that contains device neighbor information in
the form of a protocol specific neighbor ID.
• Link Layer Discovery Protocol (LLDP): Generic cache that contains device neighbor information in the
form of a protocol specific neighbor ID.
• Address Resolution Protocol (ARP): Cache that contains the IP and MAC addresses of all connecting
devices and servers.
Layer 2 connectivity
Discovery can create Connects to::Connected by relationships between connected Layer 2 devices and
between the interfaces and adapters of host servers and Layer 2 devices. Service Mapping must be
active in addition to Discovery.
In the following example, Discovery found a server running AIX, and was also able to find two IP switches
on the network. These relationships were created:
• An IP Connection relationship between the AIX Server and the two IP switches A and B.
• A reference between the AIX server and its own network adapter.
• A Connects to relationship between the adapters on the two IP switches (not shown in image below).
• A Connects to relationship between the network adapter of the AIX server and the switch port of IP
switch A (highlighted in red). This kind of relationship requires Service Mapping to be active.
To view these relationships, open the dependency view for the server. To view the relationship between
the two IP switches, open the dependency view from one of the switches and select the Physical Network
Connections option for the Dependency Type in the map settings.
Network Infrastructure Item [dscy_net_base] table. Devices that support SNMP, such as Linux computers
and network devices, cache two types of address information:
• Static: Manually added address resolutions.
• Dynamic: Hardware name and IP address pairs added to the cache by previous, successful ARP
resolutions.
When the ARP table Discovery completes, the system collects all static and dynamic table entries
from devices via SNMP. If a new ARP entry is available, it is added to the Network ARP Table
[discovery_net_arp_table]. If any previously discovered ARP entries are no longer cached in the device’s
ARP table, the system removes the corresponding records from the CMDB using the reconciliation
process.
Note: If new ARP entries are created after Discovery runs, they are not discovered until the next
Discovery schedule. If ARP entries are removed from the device after Discovery runs, the CMDB
ARP table is not updated until Discovery runs again.
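A minimal sketch of the reconciliation step described above, assuming the latest SNMP payload has already been reduced to a set of ip/mac keys. The field names ('device', 'ip_address', 'mac_address') and the function shape are assumptions for illustration, not the shipped sensor.

function reconcileArpEntries(deviceSysId, currentEntries) {
    var gr = new GlideRecord('discovery_net_arp_table');
    gr.addQuery('device', deviceSysId); // field name assumed
    gr.query();
    while (gr.next()) {
        var key = gr.getValue('ip_address') + '/' + gr.getValue('mac_address');
        if (!currentEntries[key])
            gr.deleteRecord(); // entry is no longer cached on the device
    }
}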
ARP probes
Discovery provides these probes for extracting IP and MAC address resolution information:
Probe ECC queue topic Command Description
These probes return bridging information from VLANs connected across network switches, including port
selection, forwarding tables, and the use of the spanning tree protocol.
SNMP - Switch - BridgePortTable: Returns all the ports from a switch that are used to create a bridge
between network segments.
SNMP - Switch - SpanningTreeTable: Returns the active path between any two network nodes bridged
by a switch.
SNMP - Switch - Vlan: Returns VLAN IDs from a network switch, using Cisco enterprise OIDs and the
mib-2 system OID. Other switch types are not supported.
The forwarding entries are read from the BRIDGE MIB (dot1dTpFdbAddress and related objects) and
stored in the Switch Forwarding Table [discovery_switch_fwd_table].
Required credentials
Discovery explores many kinds of devices, such as switches, routers, and printers, using the SNMP
protocol. Credentials for SNMP do not include a user name, just a password, which is the community
string. The default read-only community string for many SNMP devices is public, and Discovery will try that
automatically. Enter the appropriate SNMP credentials if they differ from the public community string.
Label Table Field Source
Port discovery_switch_bridge_port_table port SNMP, dot1dBridge MIB
MAC Address discovery_switch_fwd_table mac_address SNMP, dot1dBridge MIB
Port discovery_switch_fwd_table port SNMP, dot1dBridge MIB
Status discovery_switch_fwd_table status SNMP, dot1dBridge MIB
VLAN Id discovery_switch_fwd_table vlan_id SNMP, dot1dBridge MIB
Designated Bridge MAC discovery_switch_spanning_tree_table designated_bridge_mac SNMP, dot1dBridge MIB
Designated Root discovery_switch_spanning_tree_table designated_root SNMP, dot1dBridge MIB
Port discovery_switch_spanning_tree_table port SNMP, dot1dBridge MIB
Port Enable discovery_switch_spanning_tree_table port_enable SNMP, dot1dBridge MIB
Port State discovery_switch_spanning_tree_table port_state SNMP, dot1dBridge MIB
Base IP address dscy_swtch_partition base_ip_address SNMP, dot1dBridge MIB
Base MAC address dscy_swtch_partition base_mac_address SNMP, dot1dBridge MIB
Base netmask dscy_swtch_partition base_netmask SNMP, dot1dBridge MIB
Type dscy_swtch_partition type SNMP, dot1dBridge MIB
Transparent dscy_swtch_partition transparent SNMP, dot1dBridge MIB
Sourceroute dscy_swtch_partition sourceroute SNMP, dot1dBridge MIB
Name dscy_swtch_partition name SNMP, dot1dBridge MIB
Status dscy_swtch_partition status SNMP, dot1dBridge MIB
Interface number dscy_swtch_partition interface_number SNMP, dot1dBridge MIB
Type dscy_switchport type SNMP, dot1dBridge MIB
Status dscy_switchport status SNMP, dot1dBridge MIB
Discovery uses the following SNMP MIBs to collect data for load balancers.
• F5-BIGIP-COMMON-MIB
• F5-BIGIP-LOCAL-MIB
• F5-BIGIP-SYSTEM-MIB
Discovery uses the following tables to store configuration records for load balancers.
Relationships
Starting with the Fuji release, the Layer 3 BSM Discovery feature maps Layer 3 relationships between a
server and other network devices. The relationship begins at the router or switch and shows relationship
and IP address information for associated servers and network devices.
The system property glide.discovery.L3_mapping toggles the Layer 3 BSM discovery of these devices.
When glide.discovery.L3_mapping is true, the relationship created is of type IP Connection::IP Connection.
In this example, the BSM resulting from the discovery of Linux Server db14242 shows that it has an IP
Connection::IP Connection relationship to switch cor-001a.
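A hedged sketch of how such a relationship could be created when the property is enabled. The lookup of the relationship type by name and the helper shape are illustrative assumptions against the cmdb_rel_ci schema, not the shipped sensor code.

function createIpConnection(parentSysId, childSysId) {
    // Only map Layer 3 relationships when the property is enabled
    if (gs.getProperty('glide.discovery.L3_mapping') != 'true')
        return;
    var rel = new GlideRecord('cmdb_rel_ci');
    rel.initialize();
    rel.setValue('parent', parentSysId);
    rel.setValue('child', childSysId);
    // 'type' references cmdb_rel_type; looked up by name here (assumed)
    var type = new GlideRecord('cmdb_rel_type');
    if (type.get('name', 'IP Connection::IP Connection'))
        rel.setValue('type', type.getUniqueValue());
    rel.insert();
}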
Prerequisites
System Properties
Table 110: System Properties for logical mapping for the TCP/IP layer
Data collected
URL cmdb_ci_outofband_device url SNMP walk: drsProductURL (racURL for iDRAC7)
Note: Host is a reference to the cmdb_ci_computer table via the serial number. Therefore, in
order for the cmdb_ci_outofband_device.host field to be populated correctly, the host machine
needs to be discoverable or exist within the CMDB with the appropriate serial number.
Pattern
By default, Discovery uses the following pattern to perform the discovery: DataPower Server.
Credentials
The classifier kicks off the Horizontal Pattern probe on page 223 to perform identification and exploration
of the device.
Data collected
The following data is collected on the Data Power Hosting Server [cmdb_ci_datapower_server]
table.
Label Field Name
By default, Discovery uses the following pattern to perform the discovery: UCS - HD.
Use the UCS classifier, or add an SNMP classifier and specify the Cisco UCS Equipment
[cmdb_ci_ucs_equipment] table. Then create these SNMP OIDs on the classifier:
OID Manufacturer Model Table
Only B-series and C-series devices are discovered. The UCS - HD pattern does not find S or E series
devices. The following data is collected on the Cisco UCS Equipment [cmdb_ci_ucs_equipment]
table.
Name name
IP address ip_address
Model model_id
Distinguished name dn
The following data is collected on the Cisco UCS Chassis [cmdb_ci_ucs_chassis] table.
Label Field Name
Name name
Model number model_number
Serial number serial_number
Chassis ID chassis_id
Operational status operational_status
Operability operability
Distinguished name dn
The following data is collected on the Cisco UCS Blade [cmdb_ci_ucs_blade] table.
Label Field Name
Name name
Model number model_number
Serial number serial_number
CPU count cpu_count
CPU core count cpu_core_count
Ram ram
Number of Adapters number_of_adapters
Chassis ID chassis_id
Slot ID slot_id
Server ID server_id
Description short_description
Distinguished name dn
The following data is collected on the Cisco UCS Rack Mount Units [cmdb_ci_ucs_rack_unit] table.
Name name
Vendor vendor
Server ID server_id
Availability availability
UUID uuid
Rack serial rack_serial
Number of CPUs num_of_cpus
Number of cores num_of_cores
Number of adapters num_of_adapters
Model ID model_id
Total Memory total_memory
Distinguished name dn
State state
Relationships
The following example shows the relationships between a Cisco UCS device, chassis, and blades:
Table 114:
CI Relationship CI
Software Discovery
Discovery identifies several types of software.
Important: Discovery does not support Oracle or MySQL database catalog discovery in the base
system. You must create probes and sensors to find these.
To view a database catalog, navigate to Configuration > Database Catalogs and select a database.
Note: The original schema used by previous versions is not affected when the Software Asset
Management plugin is not activated.
When it runs, Discovery populates the Software Installation [cmdb_sam_sw_install] table. The
appropriate configuration item (CI) references the software data. A business rule on this table runs,
searching for a matching record on the Software Discovery Model [cmdb_sam_sw_discovery_model]
table. If a record exists in the Software Discovery Model table, then the Software Installation table
updates the record. If no record exists in the Software Discovery Model table, then one is created. Data
from this table is then passed to the Software [cmdb_ci_spkg] table to preserve backward compatibility.
The Software Model [cmdb_software_product_model] table is not populated or used by Discovery.
Important: When SAM is installed, the Software Installation [cmdb_sam_sw_install] table is the
appropriate source for all current software data. This means you need to update any related lists
or customized reference fields you added to CI records.
Figure 109: Discovery and the Software Asset Management Template Table schema
no other processing can occur. To avoid pausing process threads, the ServiceNow ITSA Suite disables
active connection probes for Solaris machines in the base system, but can be configured to use these if
the pauses are not an issue. For AIX machines, Discovery is configured to use the active connection probe
for AIX targets. However, the user that executes the shell script on AIX machines might require additional
privileges to execute commands, such as kdb, employed in the script.
On Linux machines, Discovery uses the lsof command (installed by default on Linux) to detect TCP
connections. The Linux classification probe triggers the Linux - Active Connections probe, which uses
lsof to discover application relationships and does not produce any pauses. The lsof command is the
recommended method of returning active TCP connections and can be installed on Solaris and AIX target
machines to eliminate any issues produced by the shell script.
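As an illustration of the data the lsof-based probe works with, the following sketch parses one line of typical lsof -i -n -P output into a simple connection record. The column layout is an assumption based on common lsof output, not the shipped sensor.

function parseLsofLine(line) {
    var parts = line.trim().split(/\s+/);
    return {
        command: parts[0],            // process name, e.g. httpd
        pid: parts[1],                // process ID
        name: parts[parts.length - 1] // e.g. 10.0.0.5:80->10.0.0.9:50522
    };
}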
5. Click Save.
6. Click Save.
7. Repeat this procedure for the AIX classification probe.
Label Table Field Command Probe
Name cmdb_ci_apache_web_server name apcfg Apache – Get Configuration
Version cmdb_ci_apache_web_server version httpd Apache – Version
Description cmdb_ci_apache_web_server short_description httpd Apache – Get Configuration
Name must_sudo
Value true
7. Click Submit.
Probe Command
Apache – Get Configuration echo, sed, httpd, cut, grep, egrep (within the Bourne shell script)
Apache – Version httpd
Apache – Get JK Module echo, sed, httpd, cut, grep, egrep (within the Bourne shell script)
Discovery uses the Unix - Active Processes probe to identify an Apache server that contains the mod_jk
module:
1. The Unix - Active Processes probe detects a running process that matches one of the following
criteria:
• The name of the process is httpd.
• The name of the process is apache.
2. If there is a match on one of these criteria, a record is created in the Web Server table
[cmdb_ci_web_server] if one does not already exist for that running process. The following probes are
also triggered:
• Apache – Version: the sensor of this probe populates the Apache version information in the Web
Server record.
• Apache – Get Configuration: this probe contains a Bourne shell script and an argument that
determines the path of the Apache configuration file. The sensor of this probe populates some
additional information in the Web Server record.
3. The sensor processing of Apache – Get configuration probe results triggers the following probes if the
mod_jk module is running on the web server:
• Apache – JK Module: if the mod_jk module is running as a load balancer on the
server, the sensor of this probe populates the information in the Load Balancer Service
[cmdb_ci_lb_service], Load Balancer Pool [cmdb_ci_lb_pool] and Load Balancer Pool Member
[cmdb_ci_lb_pool_member] tables.
Data Collected
For the mod_jk module with no load balancer, the following data is collected by default:
If the mod_jk module is enabled for load balancing, Discovery collects the following data:
Relationships
In addition to data population, the following relationships are created in the CI Relationship [cmdb_rel_ci]
table:
• The records in the cmdb_ci_lb_appl table run on the cmdb_ci_web_server table records.
• The records in the cmdb_ci_lb_service table use the cmdb_ci_lb_pool table records.
• The records in the cmdb_ci_pool table are used by the cmdb_ci_service table records.
• The records in the cmdb_ci_pool table are members of the cmdb_ci_pool_member table records.
• The records in the cmdb_ci_pool_member table are members of the cmdb_ci_pool table records.
Probe Commands
Apache – Get Configuration echo, sed, httpd, cut, grep, egrep (within the Bourne shell script)
Apache – Get Proxy Module grep, egrep (within the Bourne shell script)
Apache – Version httpd
Discovery uses the Unix - Active Processes probe to identify an Apache server that contains the
mod_proxy module. The probes and sensors operate in the following manner:
1. The Unix - Active Processes probe detects a running process that matches one of the following
criteria:
• The name of the process is httpd.
• The name of the process is apache2.
2. If there is a match on one of these criteria, a record is created in the Web Server table
[cmdb_ci_web_server] if one does not already exist for that running process. The following probes are
also triggered:
• Apache – Version: the sensor of this probe populates the Apache version information in the Web
server [cmdb_ci_web_server] record.
• Apache – Get Configuration: this probe contains a Bourne shell script and an argument that
determines the path of the Apache configuration file. The sensor of this probe populates some
additional information in the Web server [cmdb_ci_web_server] record.
3. The sensor processing of the Apache – Get configuration probe results triggers the following probes if
the mod_proxy module is running on the web server:
• Apache - Get Proxy Module: if the mod_proxy module is running as a load balancer on
the server, the sensor of this probe populates the information in the Load Balancer Service
[cmdb_ci_lb_service], Load Balancer Pool [cmdb_ci_lb_pool] and Load Balancer Pool Member
[cmdb_ci_lb_pool_member] tables.
Data Collected
For the mod_proxy module with no load balancer, the following data is collected by default:
If the mod_proxy module is enabled for load balancing, Discovery collects the following data:
Relationships
In addition to data population, the following relationships are created in the CI Relationship [cmdb_rel_ci]
table:
• The records in the cmdb_ci_lb_appl table run on the cmdb_ci_web_server table records.
• The records in the cmdb_ci_lb_service table use the cmdb_ci_lb_pool table records.
• The records in the cmdb_ci_pool table are used by the cmdb_ci_service table records.
• The records in the cmdb_ci_pool table are members of the cmdb_ci_pool_member table records.
• The records in the cmdb_ci_pool_member table are members of the cmdb_ci_pool table records.
• Each VPC that is discovered must have a separate discovery schedule for the IP addresses in that
VPC.
The account used for the AWS discovery scan needs the ReadOnlyAccess permissions policy applied to
it.
When Discovery finds a Linux or Windows host, it determines if that host is running on an Amazon
Web Services (AWS) EC2 instance and then attempts to create a relationship between an
cmdb_ci_ec2_instance and the cmdb_ci_server CI.
Discovery gets EC2 instance information, such as instance ID, region, and account ID, from a host server
running on an EC2 instance when the host is discovered. Discovery then checks for a CI in the EC2 Virtual
Machine Instance [cmdb_ci_ec2_instance] table with matching information. If a match is found, Discovery
creates the Virtualized by::Virtualized relationship between that instance CI and the host CI.
Discovery launches these probes to establish the relationship:
• Linux - AWS Relationship: Discovery launches this probe for any Linux server discovered.
• Windows – AWS Relationship: Discovery launches this PowerShell probe for any Windows server
classified as 2008, or 2012.
Note: You can disable the collection of EC2 metadata by setting the
glide.discovery.discover_aws_ec2_host_metadata property to false. To access this property,
navigate to Discovery Definition > Properties and locate the property labeled When doing IP-
based discovery against a given host, also run probes that retrieve AWS EC2 metadata.
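For example, the property can be read and disabled from a server-side script. This is a sketch only; the property name is the one documented above, and gs.getProperty/gs.setProperty are standard GlideSystem calls.

var name = 'glide.discovery.discover_aws_ec2_host_metadata';
gs.log('Current value: ' + gs.getProperty(name, 'true')); // defaults to true
gs.setProperty(name, 'false'); // disable collection of EC2 metadata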
Table 122: Data Collected by Discovery for AWS Auto Scaling Group table
Table 123: Data Collected by Discovery for AWS Auto Scaling Group Launch Config table
Table 124: Data Collected by Discovery for AWS Availability Zone table
Table 125: Data Collected by Discovery for AWS EBS Volume table
Table 126: Data collected by Discovery on an AWS Elastic Block Store Snapshot table
Table 127: Data Collected by Discovery on AWS Elastic Load Balancer table
Table 131: Data Collected by Discovery for AWS VPC Security Group table
Table 133: Data Collected by Discovery for EC2 Key Pairs table
When Discovery determines that a Linux or Windows host is running on an AWS EC2 instance, Discovery
creates a Virtualized by::Virtualized relationship between the cmdb_ci_ec2_instance record and the host.
Table 134: Data Collected by Discovery for EC2 Virtual Machine Instance table
The isMatch value is a function that contains two input variables, process and resource:
• process: Process is the GlideRecord of the process application. It is determined by
the Table field in the classifier. In this example, it is the GlideRecord entry of the
Application table (cmdb_ci_app) for the process that is being classified. You have
access to any field values for the CI type such as name or version.
• resource: Resource is the GlideRecord entry in the Windows Cluster Resource
table after the resourceType condition has been applied. In the example, it is the
GlideRecord entry of the sixth row.
The following script, sketched below, indicates that if there is a resource of type SQL Server, and the
application name is equal to the resource name, then the process is classified as a clustered
application.
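The script itself was not reproduced here, so the following is a hedged reconstruction from that description; the exact field names are assumptions.

isMatch = function(process, resource) {
    // The resourceType condition (Resource Type = SQL Server) has already
    // been applied before this function runs, so only the name comparison
    // remains: classify when the application name equals the resource name.
    return process.getValue('name') == resource.getValue('name');
};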
If there are multiple matches to the resourceType condition, the matching function is
called multiple times. For the following resourceType example, the matching function is
called twice because there are two entries that have Physical Disk in the Resource Type
column in the sample Windows Cluster Resources table.
Version cmdb_ci_microsoft_iis_web_server
version Windows registry
Name cmdb_ci_web_site name wmi
Log directory cmdb_ci_web_site log_directory wmi
Description cmdb_ci_web_site short_description wmi
Correlation ID cmdb_ci_web_site correlation_id Internal
IP address cmdb_ci_web_site ip_address wmi
TCP port cmdb_ci_web_site tcp_port wmi
Note: You must install IIS Management Scripts and Tools on a Microsoft IIS Server in order for
Discovery to collect data from it.
Requirements
The Microsoft SQL Server probe is triggered by a running process that has a command that contains
sqlservr.exe.
Note: The SMO requires the Common Language Runtime (CLR) library to be installed first.
Both libraries can be downloaded from the Microsoft website.
• Install PowerShell v2.0 and above.
• Install the Remote Registry Service on target computers running Microsoft SQL Server.
Credentials
Note: If there are no matching discovery credentials, probes may attempt to connect using the MID
Server service credentials. The MID Server service credentials are only valid if the MID Server host
is on the same domain as the Microsoft SQL Server host.
Domains
• Install the MID Server host and the Microsoft SQL Server host on the same domain or, if they are on
different domains, enable a trust relationship between the domains such that users in the Microsoft SQL
Server host domain are trusted by the MID Server host domain.
• If a domain trust relationship is in place, do not install the MID Server on a domain controller.
Process classifiers
For Powershell version 2 or later, the MID Server must be within the same domain that it is trying to
discover.
Discovery collects data from the following versions:
Table 139:
Dictionary entries
The running process of the database (the actual SQL server) is referred to as the database instance.
Discovery writes the following information to the MSFT SQL instance [cmdb_ci_db_mssql_instance] table.
Name name
Instance instance_name
TCP port tcp_port
Port dynamic port_dynamic
Version name version_name
Edition edition
Version version
Engine edition engine_edition
For SQL2000 servers, Discovery can match instance names to their process ID. The database is referred
to as the catalog. The following data is collected in the MSFT SQL Catalog [cmdb_ci_db_mssql_catalog]
table:
Name name
Status status
Note: You can find the data for the actual server on which the MSSQL instance and catalogs
reside in the Windows computer [cmdb_ci_win_server] table.
By default, a CI record for a server (node) hosting an SQL Server instance that is part of a cluster does not
display the instance or the cluster as related items.
To see the cluster information on a CI record:
1. Add the Cluster Nodes related list to the form. This list displays the name of the node and the parent
cluster.
2. Click the cluster name in the related list to drill into the CI record for that cluster. Individual SQL
Server instances running on that cluster are listed in the Related Items hierarchy. The names in
the list contain the instance name prepended to the host name with the @ symbol. In this example,
MSSQLCLUSTER is the SQL Server instance name, and cluster-node2 is the name of the Windows
host (not the Windows cluster).
3. Click the name of the SQL Server instance in the Related Items list to open the record for that CI.
The instance name is expressed in the format instance@host name and displays the relationship
hierarchy to the configured number of levels, including the system databases that the instance uses,
the cluster on which the instance runs, and the server on which the cluster runs.
1. Open the desired process classifier (Discovery Definition > CI Classification > Process).
2. Add the Parameters related list (if not visible).
3. Click New in the Parameters related list and add a new parameter, using the following values:
• Name: Unique name of your choosing.
• Type: Enter Cluster.
• Value: Enter the following statement.
Parameter Description
MySQL discovery
Discovery creates or updates a CMDB record when it detects a running instance of MySQL on UNIX or
Windows systems.
Discovery searches for the MySQL configuration file location from the following areas:
• UNIX: Discovery searches for the MySQL configuration file location from the Mysqld process, or port
3306.
• Windows: Discovery searches for the MySQL configuration file location from the Mysqld.exe process, or
port 3306.
For each process, the following process parameters are explored in the following order:
--defaults-extra-file
--defaults-file
If the MySQL configuration file location is not found from that search, then the following occurs:
• UNIX: The configuration file location defaults to /etc/my.cnf.
• Windows: No default configuration file location exists, and the probe to read the configuration file
location is skipped.
Version cmdb_ci_db_mongodb_instance version mongod (UNIX) or mongod.exe (Windows)
Mongo configuration cmdb_ci_db_mongodb_instance mongodb_conf mongod.conf
TCP port(s) cmdb_ci_db_mongodb_instance tcp_port Process Classification or mongod.conf
PostgreSQL discovery
Discovery creates or updates a CMDB record when it detects a running instance of PostgreSQL on UNIX
systems.
The user must have root-level access to the database to access the postgresql.conf file.
Data collected
Name cmdb_ci_db_postgresql_instance name PostgreSQL Instance@hostname
Data Directory cmdb_ci_db_postgresql_instance data_dir running process
TCP port cmdb_ci_db_postgresql_instance tcp_port running process
SQL Configuration cmdb_ci_db_postgresql_instance postgres_conf data_directory/postgresql.conf
Version cmdb_ci_db_postgresql_instance version postmaster/postgres
2. If there is a match:
• A record is created in the NGINX Web Server [cmdb_ci_nginx_web_server] table.
• A Runs on relationship is created in the CI Relationship [cmdb_rel_ci] table for a Linux server
(Linux Server [cmdb_ci_linux_server]) and for an NGINX web server (NGINX Web Server
[cmdb_ci_nginx_web_server]) .
The following two probes are triggered:
• NGINX – Version: This probe contains a Bourne shell script. It determines the version of NGINX and
populates the NGINX Web Server [cmdb_ci_nginx_web_server] table.
• NGINX – Get Configuration: This probe contains a Bourne shell script and an argument that
determines the path of the NGINX configuration file. The probe identifies configuration parameters
based on keywords within the configuration file and returns them as a single payload result.
The sensor on the ServiceNow instance parses the payload result and populates the CMDB.
Requirements
• Ensure that the NGINX software is installed and running on the server.
• Grant the MID Server access to the NGINX configuration file, which is /etc/nginx/nginx.conf
by default.
• Enable secure shell (SSH) commands to identify the following associated elements:
• NGINX Version
• NGINX Get Configuration
Probe Commands
2. If there is a match:
• A record is created in the Web Server [cmdb_ci_web_server] table.
• A Runs on relationship is created in the CI Relationship [cmdb_rel_ci] table for the Linux Server
[cmdb_ci_linux_server] table and the Web Server [cmdb_ci_web_server] table.
4. The sensor on the ServiceNow instance processes the payload and populates the CMDB.
Data Collected
Discovery creates or updates CMDB records when it detects a running NGINX process. The following data
is collected.
Relationships
The following data is collected by Discovery for an Oracle database instance running on the UNIX
operating system.
The following data is collected by Discovery for an Oracle database instance running on the Windows
operating system.
3. With the activation of the Puppet Configuration Management plugin, the sensor processing of
Puppet - Master Info triggers the following simultaneously:
• Puppet – Certificate Requests: The sensor of this probe populates the Puppet Certificate Request
[puppet_certificate_request] table with open requests. Open requests are requests that are not
already signed or rejected.
• MultiProbe Puppet – Resources: This probe contains the following probes:
• Puppet – Module: The sensor of this probe populates records within the Puppet Module
[puppet_module] table.
• Puppet – Manifests: The sensor of this probe populates records within the Puppet Manifest
[puppet_manifest], Puppet Class [puppet_class], and Puppet Parameter [puppet_parameter]
tables.
By default, Discovery identifies Puppet Masters running on UNIX servers. Discovery uses secure shell
(SSH) commands to collect information.
With the addition of the Puppet Configuration Management plugin, Discovery identifies the following
associated elements:
• Puppet Certification Requests
• Puppet Manifests
• Puppet Modules
The credentials used to discover the UNIX server must have privileges to execute the following commands.
The use of sudo is supported, but you must add the must_sudo parameter to the probe.
Probe Commands
Puppet – Master Info puppet, echo, hostname (within the Bourne shell script)
Puppet – Certificate Requests puppet
Puppet – Manifests echo, sed, find (within the Bourne shell script)
Puppet – Modules puppet
Data collected
Table 147: Data collected by Discovery for Puppet automation software, by default
Data collected
The classifier that finds Tomcat server processes uses the condition: Parameters contains
org.apache.catalina.startup.Bootstrap.
WebLogic discovery
Discovery creates or updates a CMDB record when it detects an instance of an Oracle or BEA Weblogic
application server running on a Windows or Linux system.
Requirements
• The Linux - Weblogic - Find config.xml probe requires the use of these Bourne shell commands: find,
cat, and dirname. The SSH credential must also have read permissions on the config.xml file.
• WebLogic administration server instances started via NodeManager must have the -
Dweblogic.RootDirectory=<path> parameter defined and visible through the Linux ps process stat
command (for each AdminServer) for the rest of the Linux WebLogic application server and web
application information to be populated in the CMDB.
• PowerShell must be enabled on the MID Server for the WebLogic probes to gather full server and web
application information.
• The WebLogic Administration Server instances that start via WebLogic NodeManager must have the -
Dweblogic.RootDirectory=<path> parameter defined upon server startup. The Windows credential must
also have read permissions on the config.xml file.
3. The Linux - Weblogic - Find config.xml probe attempts to find the related config.xml file for the
server by either:
• Using the -Dweblogic.RootDirectory=<path> parameter defined in the running process.
• Searching for the parent process that started the WebLogic server (only viable if the weblogic jvm
was started via the startWeblogic.sh or related custom script and not the init process).
3. The Windows - Weblogic - Find config.xml probe attempts to find the related config.xml file for the
server by:
• Using the -Dweblogic.RootDirectory=<path> parameter defined in the running process.
• Searching for config.xml files under the –Dplatform.home=<path> parameter defined in the
running process (not as efficient using the parameters of the process).
4. If there are associated web applications found in the WebLogic config.xml file, the Windows –
Weblogic find web.xml probe triggers for each application. This probe reads the WebLogic web.xml
file for each web application, and the sensor then populates additional information.
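Both Find config.xml probes rely on the -Dweblogic.RootDirectory=<path> JVM argument. Extracting it might look like the following sketch; the helper name and its processParams input are assumptions for illustration.

function getWeblogicRootDirectory(processParams) {
    // Extract the path from -Dweblogic.RootDirectory=<path>, if present
    var m = /-Dweblogic\.RootDirectory=(\S+)/.exec(processParams);
    return m ? m[1] : null;
}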
Data collected
Table 152: Data collected from the config.xml file for Windows
Name cmdb_ci_app_server_weblogic name running process
TCP port cmdb_ci_app_server_weblogic tcp_port running process
Version cmdb_ci_app_server_weblogic version config.xml
Weblogic domain cmdb_ci_app_server_weblogic weblogic_domain config.xml
Name cmdb_ci_web_application name config.xml
Document base cmdb_ci_web_application document_base config.xml
Description cmdb_ci_web_application description web.xml
Servlet class cmdb_ci_web_application servlet_class web.xml
Servlet name cmdb_ci_web_application servlet_name web.xml
App server cmdb_ci_web_application app_server config.xml
TCP port cmdb_ci_app_server_weblogic tcp_port web.xml
Relationships
Name must_sudo
Value true
5. Click Submit.
3. The WebSphere - Cell probe searches for the cell.xml file for the instance by using the parameters
in the running process, and then searching in the related <config_path>\cells\<cell_name>\
directory.
4. If the probe successfully finds the cell.xml file, the sensor reads its contents and populates
additional Websphere Cell [cmdb_ci_websphere_cell] table records as necessary.
5. If the probe successfully finds the serverindex.xml file, the sensor reads its contents and
populates additional Web Application [cmdb_ci_web_application] table records as necessary.
6. If the probe successfully finds the server.xml file, the sensor reads its contents and populates
additional Web Service [cmdb_ci_web_service] table records as necessary.
3. The Windows - WebSphere - Cell probe searches for the cell.xml file for the instance by using
the parameters in the running process, and then searching in the related <config_path>\cells
\<cell_name>\ directory.
4. If the probe successfully finds the cell.xml file, the sensor reads its contents and populates
additional Websphere Cell [cmdb_ci_websphere_cell] table records as necessary.
5. The Windows - WebSphere - Web Applications probe searches the serverindex.xml file for
the instance by using the parameters in the running process, and then searching in the related
<config_path>\cells\<cell_name>\nodes\<node_name> directory.
6. If the probe successfully finds the serverindex.xml file, the sensor reads its contents and
populates additional Web Application [cmdb_ci_web_application] table records as necessary.
7. The Windows WebSphere - Web Services probe searches for the server.xml file for the instance
by using the parameters in the running process, and then searching in the related <config_path>
\cells\<cell_name>\nodes\<node_name>\servers\<server_name> directory.
8. If the probe successfully finds the server.xml file, the sensor reads its contents and populates
additional Web Service [cmdb_ci_web_service] table records as necessary.
Data collected
Relationships
These relationships are created in the CI Relationship [cmdb_rel_ci] table.
Adobe JRun
Discovery creates or updates a CMDB record when it detects a running instance of Adobe JRun.
By default, Discovery uses the following pattern to perform the discovery: Jrun.
The following data is collected in the Jrun [cmdb_ci_app_server_jrun] table:
Name name
Version version
Installation directory install_directory
Configuration directory config_directory
Configuration file config_file
Sybase
Discovery creates or updates a CMDB record when it detects an instance of Sybase.
By default, Discovery uses the following pattern to perform the discovery: Sybase.
The following data is collected in the Sybase Instance [cmdb_ci_db_syb_instance] table:
Name name
Version version
Installation directory install_directory
Instance instance
Configuration file config_file
Name name
Version version
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Installation directory install_directory
Type type
HP Service Manager
Discovery creates or updates a CMDB record when it detects a running instance of HP Service Manager.
By default, Discovery uses the following pattern to perform the discovery: HP Service Manager Application
Server.
The following data is collected on the HP Service Manager [cmdb_ci_appl_hp_service] table.
Name name
Vendor vendor
Name name
Name name
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Version version
Installation directory install_directory
Type type
Instance Name instance
Model ID model_id
Name name
Version version
TCP port(s) tcp_port
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Configuration directory config_directory
Configuration file config_file
Installation directory install_directory
Project project
Name name
Version version
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Installation directory install_directory
TCP port(s) tcp_port
Project project
Configuration directory config_directory
Configuration file config_file
Installation directory install_directory
Name name
Version version
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Installation directory install_directory
Type type
IBM WebSphere Message Broker (WMB) and WMB HTTP Listener discovery
Discovery creates or updates a CMDB record when it detects a running instance of IBM WMB and the
WMB HTTP listener for both Linux and Windows.
By default, Discovery uses the following patterns to perform the discovery:
• WMB On Unix Pattern
• WMB On Windows Pattern
• WMB HTTP Listener On Unix Pattern
• WMB HTTP Listener On Windows Pattern
The following data is collected in the IBM Web Sphere Message Brokers [cmdb_ci_appl_ibm_wmb]
table and the IBM WMB Http Listener [cmdb_ci_appl_ibm_wmb_listener] table:
Name name
Class sys_class_name
IP address ip_address
Version version
Installation directory install_directory
Name name
Version version
IP Address ip_address
Installation directory install_directory
Class sys_class_name
Fully qualified domain name (Windows only) fqdn
Exchange MailBox
Discovery creates or updates a CMDB record when it detects a running instance of Exchange Mailbox.
Prerequisites
The WinRM service must be running on the host machine. Run the following command in the PowerShell
window to verify: New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri
"http://$hostname/PowerShell/" -Authentication Kerberos
Pattern
By default, Discovery uses the following pattern to perform the discovery: MailBox On Windows Pattern.
Credentials
Applicative credentials The user must be able to run Powershell commands against the
Exchange hosts.
Use Exchange Mailbox as the CI type for the applicative credential.
Windows credentials The OS user must be a domain user, not a local user for the server.
Data collected
Name name
Version version
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Installation directory install_directory
Type type
Name name
Version version
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Installation directory install_directory
Type type
Name name
Version version
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Installation directory install_directory
TCP port(s) tcp_port
Configuration directory config_directory
Configuration file config_file
Installation directory install_directory
Port port
Name name
IP Address ip_address
Class sys_class_name
Fully qualified domain name fqdn
Microsoft SharePoint
Discovery creates or updates a CMDB record when it detects a running instance of Microsoft SharePoint.
By default, Discovery uses the following pattern to perform the discovery: SharePoint.
The following data is collected in the SharePoints [cmdb_ci_appl_sharepoint] table:
Name name
Class sys_class_name
Fully qualified domain name fqdn
IP Address ip_address
Version version
Category category
Type type
Name name
Version version
Installation directory install_directory
Configuration directory config_directory
Configuration file config_file
Instance name instance_name
Instance number instance_number
System ID sid
System directory system_directory
System type system_type
Transport domain transport_domain
Name name
Version version
Installation directory install_directory
Configuration directory config_directory
Configuration file config_file
Instance name instance_name
Instance number instance_number
System ID sid
System directory system_directory
System type system_type
Transport domain transport_domain
Name name
Version version
Installation directory install_directory
Configuration directory config_directory
Configuration file config_file
Instance name instance_name
Instance number instance_number
System ID sid
System directory system_directory
System type system_type
Transport domain transport_domain
Name name
Version version
Installation directory install_directory
Configuration directory config_directory
Configuration file config_file
Instance name instance_name
Instance number instance_number
System ID sid
System directory system_directory
System type system_type
Transport domain transport_domain
Name name
IP Address ip_address
Class sys_class_name
Fully qualified domain name fqdn
Type type
Version version
Installation directory install_directory
Name name
Server Name server_name
Version version
Name name
Version version
Installation directory install_directory
Configuration directory config_directory
Configuration file config_file
Instance name instance_name
Instance number instance_number
System ID sid
System directory system_directory
System type system_type
Transport domain transport_domain
Name name
IP Address ip_address
Class sys_class_name
Fully qualified domain name fqdn
Storage discovery
Discovery collects information on Direct Attached Storage (DAS), Storage Area Networks (SAN), and
Network Attached Storage (NAS).
Storage can be located on specialized devices, such as Storage Arrays, Fibre Channel Switches, iSCSI
disks, or on host operating systems, including Windows, Linux, and Solaris.
Discovery finds and maps dependencies for the following types of storage:
• Direct-attached storage (DAS), network-attached storage (NAS), or storage area network (SAN).
• NAS or SAN storage that is discovered via a Storage Management Initiative Specification (SMI-S) and
Common Information Model (CIM).
• Virtual storage for VMware ESX servers and Linux Kernel-based Virtual Machines (KVM). Discovery
maps this storage to the underlying physical storage.
Discovery of storage via a host reconciles data and creates relationships between the host's file systems
and associated local storage devices. The local storage devices represent the storage available to the
host, whether it's directly attached or provided by Fibre Channel or iSCSI. This reconciliation assumes that
the storage server has been discovered first.
Discovery collects and creates CIs in the CMDB for the following information:
• File systems (local and NAS).
• Disks (both SAN disks and DAS drives).
• Fibre Channel (FC) HBAs and ports.
• Linux Volume Manager (LVM) volumes. LVM volume data resides in the Storage Pool
[cmdb_ci_storage_pool] table.
• Veritas Volume Manager disks, subdisks, disk groups, plexes, and volumes.
Note: For details about the discovery of direct attached or multipath block storage provisioned on a
Linux host, see KB0622583.
This diagram shows the relationship of SAN storage devices to a host computer. Storage accessible to
the Linux host consists of two physical fibre channel (FC) disks, mpatha and mpathb, connected to the
host via a multipath FC SAN. The two HBAs on the host interface are connected with two fibre cables each
to separate FC switches for failover capability. The FC fabric switches are connected to two FC storage
controllers. In this example, each FC Controller is configured to exclusively manage only one physical disk.
• Windows - Storage 2008 - WMI: identifies storage attached to systems running Windows 2008,
using WMI Runner.
• Windows - Storage 2012: identifies storage attached to systems running Windows 2012 and later.
• Windows - Storage 2012 - PS: identifies storage attached to Windows systems, using PowerShell.
• Windows - Storage 2012 - WMI: identifies storage attached to Windows systems, using WMI
Runner.
• VMWare - vCenter ESX Hosts Storage: collects information about ESX servers and creates
relationships from datastores to underlying disks.
Requirements
Relationships
[cmdb_rel_ci]
• Provides: /dev/sda
• Provided by: /dev/mapper/lvm-root-333-0
[cmdb_ci_storage_pool_member]
• Pool: /dev/mapper/lvm-root-333-0
• Storage: /dev/sda2
Discovery creates CIs for the logical sub-components in NAS and SAN environments, such as fibre
channel disks and pool components, as well as for host bus adapters (HBA) and physical block storage. In
multipath environments, Discovery creates CI relationships within the switched fibre fabrics that connect
the Linux host to the physical storage devices. In this diagram, the fibre fabrics have redundant paths that
the SAN environment can use for failover if connections fail.
Note: If the host also connects to a NAS or SAN storage array, set up the SMI-S Provider and
CIM credentials.
2. For all Windows hosts including Windows 2008, add Windows credentials to the Discovery Credentials
table.
3. On the ServiceNow instance, create a Discovery Schedule for each host IP address.
4. Optional: Create a behavior that uses a functionality definition with a wbem port probe to make the
initial port-scanning phase (Shazzam) more efficient.
5. Run network discovery.
Note: If the host also connects to a NAS or SAN storage array, set up the SMI-S Provider and
CIM credentials.
Table Probes
Table Probes
cmdb_ci_storage_device.computer refers to the cmdb_ci_computer
cmdb_ci_storage_device.provided_by refers to the cmdb_ci_fc_port (for FC only)
cmdb_ci_disk_partition.disk is a partition of cmdb_ci_disk
cmdb_ci_storage_hba.computer is the cmdb_ci_computer
cmdb_ci_fc_port.controller contains the cmdb_ci_storage_hba
cmdb_ci_fc_port.computer is the same cmdb_ci_computer as cmdb_ci_storage_hba.computer
cmdb_ci_storage_volume.computer is the cmdb_ci_computer
cmdb_ci_storage_volume.provided_by is the cmdb_ci_storage_pool or cmdb_ci_storage_device providing storage
cmdb_ci_storage_pool.hosted_by is the cmdb_ci_computer the pool is on
cmdb_ci_storage_pool.container is the cmdb_ci_storage_pool or cmdb_ci_storage_pool_member containing the pool (if present)
cmdb_ci_storage_pool_member.pool is the cmdb_ci_storage_pool
cmdb_ci_storage_pool_member.storage is the cmdb_ci_storage_pool, cmdb_ci_disk_partition, or cmdb_ci_storage_device providing storage
HBA fields
Discovery populates these fields in the Storage HBA [cmdb_ci_storage_hba] table, using the Solaris -
Storage probe and sensor. Discovery on Solaris supports HBAs manufactured by Emulex and QLogic.
Name name
Computer computer
Device ID device_id
Manufacturer manufacturer
Model ID model_id
Serial number serial_number
Discovery populates these fields in the Fibre Channel Port [cmdb_ci_fc_port] table, using the Solaris -
Storage probe and sensor.
Name name
Computer computer
Controller controller
WWPN wwpn
WWNN wwnn
Port type port_type
Speed speed
Operational status operational_status
Discovery populates these fields in the Fibre Channel Disk [cmdb_ci_fc_disk] table, using the Solaris -
Storage probe and sensor.
Label Name
Name name
Storage type storage_type
Device interface device_interface
Device LUN device_lun
Discovery populates these fields in the Fibre Channel Targets [cmdb_fc_target] table, using the Solaris -
Storage probe and sensor.
Label Name
FC Disk fc_disk
WWNN wwnn
WWPN wwpn
Note: Ensure that VxVM is correctly installed and configured. If you are using 3rd party drivers,
you must configure sudo permission for vxdmpadm.
Table schema
Data collected
• start_offset: Offset of this subdisk in the plex that is using it.
• end_offset: Offset of this subdisk in the plex that is using it.
• size_bytes
Displaying data
VxVM Discovery maps file systems to the disks that supply storage. By default this is the only information
displayed in the UI. The file system to disk relationship is shown as an upstream Stored on relationship on
the File System form. To see the remainder of the information that Discovery captures for VxVM, you must
add the appropriate related list to the Linux Server form. The available lists are:
• Veritas Disk Group
• Veritas Disk
• Veritas Plex
• Veritas Subdisk (Computer)
• Veritas Subdisk (Storage)
• Veritas Volume
Note: Subdisk offsets are recorded both relative to the disk from which the subdisk is partitioned
and relative to the plex that is using the subdisk.
Requirements
Note: Because the SMI-S Provider caches storage device information, the Discovery query to the
provider does not affect storage device performance.
This diagram shows the tables in the SAN schema and the default references defined between them.
CIM architecture
CIM probes can explore any device based on the Common Information Model (CIM) by querying a CIM
server, also referred to as a CIMOM - Common Information Model Object Manager. By default, Discovery
uses CIM probes to explore storage systems as well as to get the serial numbers of ESX servers.
The following components are part of CIM:
• Common Information Model (CIM): CIM allows multiple parties to exchange information about managed
elements. CIM represents these managed elements and the management information, while providing
the mechanism to actively control and manage the elements.
• Storage Management Initiative Specification (SMI-S): SMI-S is a standard of use that describes
methods for storage discovery on the vendor's side. ServiceNow uses SMI-S to determine how to
discover CIM. SMI-S is based on the Common Information Model (CIM) and the Web-Based Enterprise
Management (WBEM) standards, which define management functionality via HTTP.
If you are using multiple storage vendors with custom namespaces not specified as one of the
defaults, add the new namespaces to the comma-separated list in this property. If you intend to
continue using any of the default namespaces, make sure to include them in the property.
SLP is required for CIM Discovery as it is part of the Storage Management Initiative Specification (SMI-S)
stack. Some storage devices may support the WBEM protocol, but may not support SLP.
You can manually register the WBEM services on SLP using a common Linux tool like slptool. This tool
has a command line interface that you can use to make SLPv2 User Agent (UA) requests, which usually
come with the SLP daemon package. To register a service, provide a URL and list of attributes. An
example can be extracted from a working SLP server by using the same tool.
This diagram displays the disk hierarchical schema for storage Discovery.
Discovery uses the following tables and probes to gather information about storage devices that are
managed by a SMI-S provider.
Table Probe
Table Probe
Processing flow
1. The Shazzam probe launches the wbem port probe as part of network discovery.
2. The wbem port probe detects activity on target ports SLP 427, CIM 5989 and 5988, and then
examines the Service Registry Queries related list, at Discovery Definition > Port Probes, for the
SLP query. The base system provides this query to detect the service:wbem service type,
which indicates the presence of an SLP server.
3. The Shazzam probe launches a scanner for the WBEM service type. The scanner retrieves:
• The attributes of the service from the SLP server.
• The interop namespaces of CIM servers in the network.
4. The scanner appends the namespace values it finds to the port probe results.
5. The wbem port probe appends the SLP data it carries to the CIM Classify probes.
6. The CIM Classify probe uses that information to explore the CIM servers.
The wbem probe stores the data it retrieves in the CIM Classification [discovery_classy_cim] table.
To view the wbem port probe, navigate to Discovery Definition > Port Probes.
SLP query
The SLP query detects the wbem service (service:wbem) on an SLP server and gathers the attributes of
the service. To view the SLP Query record, open the wbem port probe record and select SLP Query from
the Service Registry Queries related list.
The wbem port probe appends the SLP data it carries, including namespaces, to the CIM - Classify probe
before launching it. The CIM classification probe extracts VMware ESX serial numbers and connector
relationships between the SAN and NAS components from CIM Servers in the network.
To access the CIM classification probe, navigate to Discovery Definition > Probes and select CIM -
Classify from the list of probes.
If you are using multiple storage vendors with custom namespaces not specified as one of the
defaults, add the new namespaces to the comma-separated list in this property. If you intend to
continue using any of the default namespaces, make sure to include them in the property.
Storage relationships
Discovery establishes the correct relationships between Network-Attached Storage (NAS) storage devices
and remotely mounted client servers that consume the storage. Discovery maps NAS file shares by
matching the IP or hostname of a remotely-mounted disk on the client computer to the IP or hostname of
the storage server providing the exported file system.
Discovery creates the following relationships for storage CIs running on Storage Area Networks.
Discovery maps NAS file shares by resolving the hostname of a remotely mounted disk on the client
computer to an IP address and then matching it to the IP address of the storage server that provides the
exported file system. Discovery extracts the hostname or IP address from the NAS path to determine
the identity of the storage server. If the hostname is an actual hostname, the system immediately
resolves that value into an IP address and stores it in the nas_ip_address field of the NAS File System
[cmdb_ci_nas_file_system] table.
Discovery creates the following relationships for storage CIs running on Network Attached Storage (NAS).
These relationships are the same between Linux and Windows operating system hosts.
Note: Currently, the ServiceNow® platform supports the discovery of Docker virtualization
containers only.
Docker virtualization
Discovery can collect data about specific objects in a Docker engine, running on a Linux host.
The ServiceNow® platform supports the discovery of Docker release 1.11.0 or later.
• Containers: Virtual wrappers, found on the Docker engine, which contain running instances of images.
User privileges
The user whose credentials are used to perform Docker Discovery must have privileges defined by one of
these methods:
• The user is the root user, since the Docker daemon runs as the root user.
• The user is assigned to a group named docker, which has special privileges for running Docker
commands. For instructions on setting up a group, see Create a Docker group.
The Docker tables are installed with the Discovery plugin and extend tables used to store data on the
operating-system-level virtualization (OSLV) engine.
Figure 131: Parent and child relationships for the OSLV and Docker dependent tables
Discovery uses application rule identifiers to find the Docker engine and then applies other rules to
identify specific Docker objects.
Identifiers
Containment and hosting rules
Docker Discovery uses these containment and hosting rules to create configuration items (CIs) from the
data returned by the Docker Pattern. After Discovery identifies the Docker engine by its relationship to
the Application [cmdb_ci_appl] table, it uses these rules to identify the specific CIs connected to that
engine from their relationships to one another. By connecting the components to one another in this
fashion, from the application down, starting with the engine, Discovery avoids creating duplicate CIs for
components from other Docker engines that use the same name or image_id.
Pattern
To view the pattern for discovering the docker engine (cmdb_ci_docker_engine) and its components,
navigate to Pattern Designer > Discovery Patterns and open Docker Pattern. For more information
about Discovery patterns, see Pattern customization on page 730.
Discovery runs the Docker Engine process classifier in the network. If the classifier identifies the dockerd
or docker daemon process, the classifier triggers the Horizontal Pattern (HorizontalDiscoveryProbe)
probe, which launches the Docker Pattern and begins collecting data from Docker components.
The pattern collects data for the main CI type, cmdb_ci_docker_engine, and for these related CI types:
• cmdb_ci_docker_image
• cmdb_ci_docker_local_image
• cmdb_ci_docker_image_tag
• cmdb_ci_docker_container
Data collected
These attributes are discovered, in addition to the attributes derived from the parent OSLV tables.
Discovery troubleshooting
The Knowledge Base on HI contains several articles to help you troubleshoot Discovery issues. There are
also a number of error messages that can help you figure out what went wrong with a failed discovery.
KB0535181
Start here if you are experiencing symptoms such as Discovery failing to classify a discovered host, or
the Shazzam probe not finding certain types of devices.
• SSH_INVALID_SESSION
Message: org.xml.sax.SAXParseException: The entity name must immediately follow the '&' in the entity
reference
A version of this message occurs if you have used special characters in a password that is saved in an
XML file.
Message: Identified, ignored extra IP
This message can appear during the identification phase of Discovery if a targeted IP address belongs
to a device that is being discovered at the same time.
For example, a Windows server has two NICs with two IP addresses, and Discovery targets both IP
addresses within the same Discovery schedule. This message is generated to note that the second IP
address is ignored, because the same CI should not be updated twice within the same Discovery run.
This message is a warning and is expected. No action is needed.
Message: Authentication failures
The discovery process could not discover the CI because the discovery application could not
authenticate. To resolve, add the credentials of that machine to the discovery credentials table.
Message: Identified, not updating CI, now finished
No match on any of the CI Identifiers.
Message: The impersonation of user failed
This message originates in PowerShell. Check that the domain is specified along with the user name in
the credentials.
Message: Connection failed to WMI service. Error: Permission denied
This message originates in WMI. Check that the MID Server service is running with the correct
credentials and has access to the target device.
Message: Connection failed to WMI service. Error: The remote server machine does not exist or is
unavailable
This message originates in WMI. Check that the MID Server service account has access to the targeted
machine. Check whether a domain admin account is used as the MID Server service account, and
whether any existing firewalls are open to the connection. To check this, run the following command from
the command prompt on the MID Server host.
Execute for runner_type=WMI:
   wmic /node:"<target>" /user:"<user>" /password:"<password>" path win32_operatingsystem
From within a PowerShell console on the MID Server host, execute for runner_type=Powershell:
   gwmi win32_operatingsystem -computer <ip> -credential '<username>'
Message: Provider is not capable of the attempted operation
The WMI repository was corrupted. Following this article corrected the problem: http://
blogs.technet.com/b/askperf/archive/2009/04/13/wmi-rebuilding-the-wmi-repository.aspx
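Before rebuilding the repository, you can check its consistency from an elevated command prompt on the
affected Windows host (winmgmt ships with Windows):

   winmgmt /verifyrepository

If the check reports an inconsistency, winmgmt /salvagerepository attempts a repair without a full rebuild.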
Message: The result file can't be fetched because it doesn't exist
PRB581515 - Powershell does not work when the customer has locked down write rights to the admin
share.
Message: Please run sneep as root to ensure correct serial number from fserial data source
The Oracle Sneep command line tool must be installed for the Solaris - Serial Number probe to work
correctly. There is a known limitation with Fujitsu PRIMEPOWER devices. To work around this limitation,
run the Solaris discovery with root credentials.
Message: sensor_name sensor's major version = X while its related probe's major version = Y
This error message indicates that there is a major version mismatch in the probe and sensor versions,
and the sensor stops processing until the condition is resolved.
Message: sensor_name multisensor's minor version = X while probe_name responding script's minor
version = Y
Message: sensor_name multisensor's responding script's minor version = X while its referenced probe
probe_name minor version = Y
Message: sensor_name sensor's minor version = X while its related probe's minor version = Y
These error messages indicate that there is a minor version mismatch in the probe and sensor versions.
Processing continues, but you may want to resolve the condition.
Message: Sensor error when processing . . . : No sensors defined
Every active probe looks for a corresponding sensor to process the data that is collected by the probe.
The "No sensors defined" message indicates that the corresponding sensor for the probe is missing or
inactive.
Message: Sensor error when processing . . . : typeError: . . .
This error message indicates that the sensor script failed with a TypeError, one of the core error
constructors in JavaScript.
Message: Sensor error when processing ...: Exception while processing sensor: XML Parse Problem:
![CDATA]
This XML parsing error occurs when the discovery sensor fails to parse the input ECC queue payload.
The payload is not formatted according to XML specifications. Possible solutions include:
• Escape problematic characters. For example, change < to &lt; and & to &amp;.
• Escape entire blocks of text with CDATA sections, or put encoding declarations at the start of the
text.
• Remove any whitespace characters (spaces, tabs, newlines) before the XML declarations.
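For example, a payload value containing reserved characters can be wrapped in a CDATA section so the
XML parser treats it as literal text (the element name and content here are illustrative):

   <output><![CDATA[throughput < 100 & state == "up"]]></output>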
Message: Sensor error when processing Linux - Network ARP Tables: Exception while running probe
post processing script: No XML data
The system displays the "No XML data" error when the XML sensor processor fails to find the expected
XML data in the probe output. During sensor processing, Discovery attempts to retrieve the probe results
but finds the probe output empty. Verify that the probe returns the correct output for the sensor to
process.
Message: Sensor error when processing Shazzam: Exception while running probe post processing
script: Probe not found: null
The "Probe not found" error occurs when the sensor processor fails to find the probe record supposedly
associated with the sensor. During sensor processing, Discovery gets the probe record from the probe
cache and stores it for later reference. The error occurs either when the sys_id of the probe cannot be
found or when there is an issue with the probe cache.
In this example, a sensor TypeError occurred in the Windows - Installed Software sensor.
• Line 4: This script_includes line indicates the sys_id 7780111 ... d768, and that the error occurred at
approximately line 26 of the JavaScript file.
8. After determining the error details, you can fix the JavaScript file.
b) Click on the link to go to the details page for that script include.
c) Modify the JavaScript script on that page to fix the error indicated.
d) Click Update.
2. Navigate to Discovery Definition > Probes and open the record for the probe you want to inspect.
3. Right-click in the record's header bar.
4. Select Copy sys_id from the context menu. Follow browser instructions to copy the sys_id if browser
security measures restrict this function.
5. Compare this value with the sys_id of the probe from the payload.
If the sys_id of the probe record does not match the value in the payload, try to determine the cause of
the incorrect value.
Note: You can activate Performance Analytics content packs and in-form analytics on instances
that have not licensed Performance Analytics Premium to evaluate the functionality. However, to
start collecting data you must license Performance Analytics Premium.
Content packs
The Performance Analytics widgets on the dashboard visualize data over time. These visualizations allow
you to analyze your business processes and identify areas of improvement. With content packs, you can
get value from Performance Analytics for your application right away, with minimal setup.
Note: Content packs include some dashboards that are inactive by default. You can activate these
dashboards to make them visible to end users according to your business needs.
To enable the content pack plugin for Discovery, an admin can navigate to System Definition > Plugins
and activate the Performance Analytics - Content Pack - Discovery plugin.
system (host, node, server, platform, or other device that is present in the CMDB) where probes launched
by the MID Server are configured to look for it. Discovery uses the data in this file to establish the
relationships between the application and the business services it supports. In addition to the environment
file, the administrator creates a version file containing general information about the application and the
version that Discovery uses to describe the application.
The diagram shows a hypothetical relationship between the SAP application and another application
created by a common business service. APD is able to discover both SAP and the dependencies created
by the shared business service by using information contained in an environment file called sap.xml
installed on the managed system. Version information is contained in the sap_version.xml file which is
installed in a different location on the same managed system or proxy server.
Requirements
• Enable Application Profile Discovery: To enable APD, navigate to Discovery Definition >
Properties and select Yes in the Application Profile Discovery property.
• Select a MID Server
<subservices>
<subservices_provided>
<subservice> … label … </subservice>
</subservices_provided>
<subservices_consumed>
<subservice> … label … </subservice>
</subservices_consumed>
</subservices>
</environment>
</config>
</apd>
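Assembled from the fragments above, a minimal environment file might look like the following sketch.
The application name, label, description, and subservice values are illustrative; use the template shipped
with the product as the authoritative structure.

   <apd>
     <config version="1.0" application="ResourceNow" type="environment">
       <environment>
         <label>Production</label>
         <description>ResourceNow production instance</description>
         <subservices>
           <subservices_provided>
             <subservice>Reporting</subservice>
           </subservices_provided>
           <subservices_consumed>
             <subservice>Authentication</subservice>
           </subservices_consumed>
         </subservices>
       </environment>
     </config>
   </apd>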
Attributes
version (Mandatory): Version of this application.
application (Optional): Name of this application. Discovery uses the application name from the version
file to populate the CMDB.
system (Mandatory if this environment configuration file is installed on a proxy server, that is, when the
managed system cannot store custom files or is not accessible to the MID Server): The system name
represents the managed system's name, unique to a given system, to which this environment
configuration file belongs.
Tags
<label> (Mandatory): This label has information important to the organization, such as Production, UAT,
QA, or Development. The label and description appear only in the Environment related list in the
application record. You must personalize the form to add the Environment related list.
<description> (Optional): This is a short description of the application.
Note: Never place the version file in the same directory as the environment file. Discovery treats
all files found in that location as environment files and cannot parse the version file correctly.
UNIX
Place the environment files in one of the following locations. Discovery searches for the file in the order
shown.
1. Environment variable: Put the environment configuration file in a path such as /usr/local/APD
and set the variable, $APDCONF, to describe the location of the file. How this environment variable is
maintained is left to the administrators of the local environment.
2. Discovery property: Navigate to Discovery Definition > Properties and set the file
path in the Application Profile Discovery location for UNIX based systems property
(glide.discovery.application_discovery.unix_location). The default path is /etc/
opt/APD/.
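For example, assuming the environment file lives in /usr/local/APD, an administrator might set the
variable in a system-wide shell profile (the path is illustrative):

   export APDCONF=/usr/local/APD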
Windows
Place the environment configuration file in one of the following locations on a Windows OS version that
supports a local machine registry.
1. Registry key: Before the MID Server retrieves a file from a managed Windows system, it checks the
HKEY_LOCAL_MACHINE (HKLM) hive, in the key HKLM\SOFTWARE\APD\APD\CONFIGPATH for
a REG_EXPAND_SZ or REG_SZ type registry value. The key value can require expansion for terms
such as %CommonAppDataFolder%. See Microsoft’s CSIDL documentation for further details.
2. Discovery property: Navigate to Discovery Definition > Properties and set the
file path in the Application discovery location for Windows systems property
(glide.discovery.application_discovery.windows_location). The sample path is
%ALLUSERSPROFILE%\Application Data\APD, in which %ALLUSERSPROFILE% is the environment
variable for any Windows system. This variable points to different paths on different versions of
Windows. For example, on Windows 7, %ALLUSERSPROFILE% opens C:\ProgramData, and on
Windows XP, it opens C:\Documents and Settings\All Users.
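For example, to seed the registry value described in option 1, an administrator might run the following
from an elevated command prompt (the path is illustrative):

   reg add "HKLM\SOFTWARE\APD\APD" /v CONFIGPATH /t REG_SZ /d "C:\ProgramData\APD"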
Proxy Locations
Some managed systems, such as network routers, switches, and network load balancers, do not support
locally stored files or general purpose file systems that could store APD configuration files. In this case, you
can name the target managed system in an environment configuration file installed on a separate proxy
server. Make sure to put the configuration XML file in one of the approved locations on the proxy machine.
The MID Server searches for this file on every machine it discovers. When it finds an XML file in one of the
expected locations, it identifies the correct managed system by parsing the value for <managed system
name> in the following line:
<config version="1.0" application="(optional) application-name" type="environment"
system="(optional, if not local system) managed system name">
<subservices>
<subservices_provided>
<subservice> … label … </subservice>
</subservices_provided>
<subservices_consumed>
<subservice> … label … </subservice>
</subservices_consumed>
</subservices>
</version>
</config>
</apd>
Application Details
Attributes
version (Mandatory): The version must be unique to the installed version of an application. This label
displays the application version or other distinguishing name.
4. Click Save.
2. Give the environment file a logical name, such as the name of the application (ResourceNow.xml) and
save it to the /APD directory.
Note: Do not save the version file in the same directory as the environment file. Discovery cannot
process version data found in the environment directory.
1. In an XML editor, open the version file template and complete it for the ResourceNow application
using the following values:
3. Save the file in the /APD_version directory created in Create the APD Directories.
8. Click Update.
Computer CI
The CI Relations Formatter in the host computer CI record displays the relationships of all the business
services (including shared services) that run on the machine.
Application CI
The application CI record is populated with version and environment information and displays the business
services connected to that application in the CI Relations Formatter.
To see a graphical representation of the relationship between this application, its business services, and
the computer on which it runs, open the CI record for the computer host. Then, click the dependency view
(BSM map) icon in the header bar of the CI Relations Formatter. The map shows all the business
services and applications associated with the computer host, as well as the installed hardware.
2. Create an APD folder in the All Users folder for the environment file.
3. In ServiceNow, navigate to Discovery Definition > Properties.
4. Locate the Application discovery location for Windows based systems property
(glide.discovery.application_discovery.windows_location).
5. Using the environment variable, enter the entire path in the Windows file location property, including
the APD folder you just created.
In this example, enter %ALLUSERSPROFILE%\APD, which resolves to C:\Documents and
Settings\All Users\APD on Windows 2003.
6. Click Save.
7. Continue the procedure from Create the Environment File and copy the completed environment file
from that task into the APD folder you created.
2. Give the environment file a logical name, such as the name of the application (ResourceNow.xml) and
save it to the /APD directory.
Note: Do not save the version file in the same directory as the environment file. Discovery cannot
process version data found in the environment directory.
1. In an XML editor, open the version file template and complete it for the ResourceNow application
using the following values:
3. Save the file in the /APD_version directory created in Create the APD Directories.
As the IT lead, you decide to use Application Profile Discovery (APD) to supply Discovery with information
about the application for all the systems on which it is installed.
Some devices that you want to discover might be inaccessible to the MID Server. Examples
of this are mainframe computers, a server whose credentials are not in the Credentials
[discovery_credentials] table, and a switch that does not have a file system for storing XML files.
For such devices, create an environment file that identifies the inaccessible device, and then put that file on
a proxy server where the file is visible to the MID Server. Place the version file on the same proxy server,
but in a different directory than the environment file. The MID Server locates the environment file in one of
the expected locations on the proxy server and reads the path to the version file. The MID Server includes
the environment and version data in the sensor package, without exploring the hidden system.
Complete the following tasks in the order in which they are listed:
Note: By default, Discovery looks for the environment file in this location on all UNIX
machines.
3. Use the template to create an SAP environment file for a Solaris server that does not have Discovery
credentials.
This example adds a value to the environment file that populates the comments field of the ResourceNow
application record.
1. Open the existing environment file in an XML editor.
2. Add a new XML tag within the <environment> tags, at the end of the last subservices section.
The tag name is arbitrary. This example uses <custom>, but the tag could also be named <notes>.
3. Create a nested tag inside the custom tag that names the field in the Application [cmdb_ci_appl]
table you want to populate.
This tag name is also arbitrary, but should be descriptive of the field selected. This example uses
<comments>.
4. Add a value to this tag that appears in the Discovery record for this application.
This example provides information about how this application is used in the company.
5. Save the environment file.
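The resulting addition to the environment file might look like the following sketch, using the illustrative
tag names and comment text described in the steps above:

   <environment>
     ...
     <subservices>
       ...
     </subservices>
     <custom>
       <comments>Used company-wide to track contractor resources.</comments>
     </custom>
   </environment>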
This script uses the appdata parameter to identify the field tags in each file. The remainder
of the statement shows the position in each XML file in which the data appears.
Legacy APD customization example: Run Discovery to add APD data to the
CMDB
Finally, run Discovery.
Run Discovery to add the custom Application Profile Discovery data to the CMDB. The values you
specified in the XML files appear in fields on the Application record.
Service Mapping
The ServiceNow® Service Mapping application discovers all business services in your organization and
builds a comprehensive map of all devices, applications, and configuration profiles used in these business
services. Service Mapping maps dependencies, based on a connection between devices and applications.
This method is referred to as top-down mapping. The top-down mapping helps you immediately see the
impact of a problematic object on the rest of the business service operation.
Service Mapping is available as a separate subscription and requires activation by ServiceNow personnel.
Business service maps show infrastructure objects and semantic connections between them. Service
Mapping regenerates business service maps regularly, to keep them updated and relevant. Any faulty
objects are shown along with the devices and applications they affect, providing a visual clue of the state of
the business service.
Data collected and organized by Service Mapping is visible in Event Management and Dependency Views.
With Event Management, you can view events to take actions for recovering your organization business
services. Dependency Views shows relationships between devices and applications in the context of
business services they belong to.
Service Mapping supports domain separation. If your ServiceNow platform uses domain separation,
administrators and users can only see and manage business services belonging to their own domain.
Business services
A business service is a set of interconnected applications and hosts that are configured to offer a service
to the organization. A business service can be internal, such as an organization email system, or
customer-facing, such as an organization web site.
ServiceNow applications, including Service Mapping, refer to the devices, applications, and connections
that comprise a business service as configuration items (CIs). Service Mapping creates a map of a
business service by communicating with the individual CIs belonging to that service, identifying the CI
connections, and mapping the CIs.
To reflect the importance or impact of each business service on the operations of your organization, you
can select a criticality level for them. For example, you might select a high level for a business service
that supports sales functionality using the organization web site. You might then select a lower level for a
business service that provides internal printing for the organization employees.
Criticality assigned in Service Mapping defines the prominence of a business service on the Event
Management dashboard. You can also use a criticality value to define recovery strategies.
Figure 155: Example of a business service criticality on the Event Management dashboard
The starting point of any discovery process is an entry point. An entry point is a property of a connection
to a configuration item (CI). For example, to map your electronic mailing business service, define an email
address. The discovery and mapping process begins with Discovery performing horizontal discovery
to identify the host. Once the host discovery is complete, Service Mapping starts the top-down discovery to
find and map applications running on this host.
Service Mapping uses MID Servers to communicate with CIs in your organization. MID Servers are located
inside your organization network and Service Mapping can communicate with them without traversing
firewalls.
c. Discovery creates the first set of probes for port discovery (Shazzam) and places them as a
discovery request in the External Communication Channel (ECC) queue.
d. The MID Server checks the ECC queue and retrieves the discovery request assigned to it.
e. The MID Server runs the probes against the host and discovers open ports.
f. The MID Server passes information on the host ports to the ECC queue.
g. Discovery checks the ECC queue and receives information on the host ports.
h. These steps are repeated for other types of probes: classification, identification, and exploration.
i. Discovery adds the host to CMDB.
j. During the host discovery using probes, Service Mapping checks the ECC queue to determine
whether this process is complete. When the host discovery is complete, Service Mapping checks
whether the host exists in the CMDB.
3. Once the host is found in CMDB, Service Mapping discovers the application running on this host:
a. Service Mapping creates an application discovery request for the IP address of the entry point. It
then writes the request in the ECC queue and assigns a MID Server to the request.
b. The MID Server checks the ECC queue and retrieves the discovery request assigned to it.
c. The MID Server starts running identification sections of the pattern to find the match for the entry
point. When the identification section matches the entry point, the pattern discovers a CI.
d. The MID Server starts running connectivity sections of the pattern to find outgoing connections of
the newly discovered CI.
e. The MID Server passes information on the discovered CI, its attributes, and connections to the
ECC queue.
f. Service Mapping checks the ECC queue and receives information on the newly discovered CI.
g. Service Mapping writes the information into the CMDB and adds this CI to the business service
map.
h. Service Mapping creates the discovery requests for all applications to which the newly discovered
CI connects. Mapping is complete after Service Mapping maps a CI that does not have any
outbound connections or is marked as a boundary.
While using traffic-based discovery creates a more inclusive map, it may also result in mapping many
redundant CIs that do not influence the business service operation. You can choose to hide CIs discovered
using this method from a business service map. In that case, these CIs are still part of the business
service, but they are not on its map.
Operating systems
• AIX
• HP-UX
• Linux
• Solaris
• Windows
Applications
Table: Description
Business Service User preferences [sa_business_service_user_prefs]: Contains user preferences
associated with a specific business service.
Service Mapping CI attributes [sa_ci_attr]: Contains CI attributes relevant only for the discovery process.
Pattern Debugger Session Status for UI [sa_debug_session_status]: Internal table used for the Pattern
Designer components.
Sa Deferred Service Discovery [sa_deferred_service_discovery]: Internal table used for Service Mapping
components.
Planned Custom Entry Point [sa_dw_custom_entry_point]: Contains data on planned custom entry points
in Service Map Planner.
Planned Entry Point [sa_dw_entry_point]: Contains planned entry points used in Service Map Planner.
AWS VPC flow logs [sa_flcon_vpc_flow_log]: Internal table containing information used for Netflow-based
discovery.
Flow Server Communication [sa_flow_server_comm]: Internal table containing information used for
Netflow-based discovery.
Flow Services IP/Port and Statistics [sa_flow_service]: Internal table containing information used for
Netflow-based discovery.
Horizontal Discovered Paged Payloads [sa_paged_payload]: Service Mapping uses data in this table for
splitting payload sent from the MID Server.
Service Mapping Relation attributes [sa_rel_attr]: Contains relation attributes used during the service
discovery process.
Service Mapping Server Usage [sa_server_usage]: Contains the count of servers used by Service
Mapping.
Service Group Members [sa_service_group_member]: Maps a service group to its business service
members.
File System To Storage Path [sa_storage_paths]: Contains data on storage paths discovered by Service
Mapping.
Storage Path Connectivity Map [sa_storage_path_connectivity_map]: Maps a file system to a storage
volume.
Business Service Group Responsibilities [sa_svc_group_responsibilities]: Contains data on users having
access to business service groups.
MID Server for Service Mapping administrator ([rest_service], [pd_mid], [sm_mid]): This role allows
accessing Service Mapping tables with minimal, but sufficient permissions.
Enable view map (Business Service [cmdb_ci_service_discovered]): If there are no entry points, this
script disables the View map button on the business service form.
QBS_ISQueryDefined (Technical Service [cmdb_ci_query_based_service]): N/A
rewrite endpoint link (Business Service [cmdb_ci_service_discovered]): Displays the Edit entry point
dialog box when you click an entry point link on the Entry Points tab on the Business Service form.
ShowEditPatternButton (Discovery patterns [sa_pattern]): Controls display of the Edit button in the
Pattern Designer module.
Validation functions (Endpoint [cmdb_ci_endpoint]): A set of functions for validation of several attributes
for different entry point types.
Remove "--None--" from OS Types (Uploaded file [sa_uploaded_file]): Prevents the value --None-- from
being added to the OS Types list.
Service Mapping is part of the ServiceNow platform and uses some of its platform-wide mechanisms
and features. At the same time, some configurations are specific to Service Mapping only.
Perform the following tasks in the exact order they are listed below:
1. Request Service Mapping on page 538.
2. Install and configure the MID Server. MID Servers, which are located in the enterprise private network,
facilitate communication between servers on the network and some ServiceNow applications, such
as Service Mapping, Discovery, and Service Analytics. For more information, see MID Server
configuration for Service Mapping on page 702.
3. Configure credentials required for host discovery.
4. Configure generic credentials required for Service Mapping to access hosts and map applications. For
more information, see Create credentials for Service Mapping on page 540.
5. On Unix-based hosts, configure elevated rights for Service Mapping to access these hosts, as
described in Service Mapping commands requiring a privileged user on page 543.
6. Configure rights and permissions required for Service Mapping.
7. Grant the following Service Mapping roles to relevant users:
In addition to the obligatory configurations described here, you can perform additional configurations to
improve your experience as described in Advanced Service Mapping configuration on page 702.
3. Click Submit.
Discovery and Orchestration explore UNIX and Linux devices by using commands executed over Secure
Shell (SSH), so they need SSH credentials.
To provide sufficient permissions, configure one of the following Unix and Linux credentials:
• A non-root user and password, using the 'sudo' utility to run selected commands as root
• A root user and password
For information on commands requiring sudo-level rights, see Service Mapping commands requiring a
privileged user on page 543 and UNIX and Linux commands requiring root privileges for Discovery and
Orchestration.
To access Unix-based hosts with non-root credentials, make sure you have read access to the
following files and directories:
• /etc/*release
• /etc/bashrc
• /etc/profile
• /proc/cpuinfo
• /proc/vmware/sched/ncpus
• /var/log/dmesg
• APD directory
Note: You may need domain administrator credentials only in some cases. For example, when
discovering domain controllers.
Typically, it is enough to create credentials for hosts only, but some applications require credentials
separate from those of the host on which they run. ServiceNow refers to this type of credentials as
applicative credentials.
You also need to configure SNMP-related credentials to allow Service Mapping and Discovery to query
network devices using the SNMP protocol.
Use the following guidelines to decide for which MID Server to create credentials:
• If all CIs belonging to the same CI type share a credential, you do not need to specify a MID Server for
it. In that case, the credential is used on all MID Servers by default.
In addition to the generic credentials you configure on MID Servers, you must configure sudo-level
credentials on all Unix-based hosts in your organization.
1. Navigate to Service Mapping > Administration > Credentials.
2. Click New.
3. Click the credential type for the credential you want to create:
• For hosts with UNIX, Linux, IBM AIX, HP-UX, or Solaris OS, select SSH credentials.
• For hosts with Windows OS, select Windows credentials.
• For applications requiring credentials separate from those of their host, select Applicative Credentials.
• For querying network devices using SNMP, select SNMP Community Credentials (Password
Only).
4. For the SSH credentials, fill in the information as described in SSH credentials.
5. For the Windows credentials, fill in the information as described in Windows credentials.
6. For the applicative credentials, fill in the information as described in Applicative credentials for Service
Mapping on page 542.
7. For the SNMP credentials, create SNMP community credentials if you use SNMP v1/v2 or SNMPv3
credentials if you use SNMP v3. For more information, see SNMP credentials.
8. Click Submit.
9. If necessary, repeat the procedure to create more credentials.
If you use applicative credentials for any CIs in your infrastructure, you must configure these credentials
in Service Mapping so that it can access such CIs.
Note: If your organization does not use applicative credentials, it is enough to configure credentials
necessary for accessing hosts.
Some commands are mandatory. The discovery process fails if Service Mapping cannot run these
commands with elevated rights. Configure servers in your organization to allow Service Mapping to run
these commands with elevated rights.
Attention: If you do not configure credentials for a mandatory command, a process or pattern using it
fails, leading to discovery failure.
Some of these commands do not require elevated rights, unless directories that Service Mapping must
access are protected.
netstat (netstat -anu) or lsof: For Solaris 11.2 and later, it is mandatory to use either the lsof or netstat
command. Lists the open ports. Required for Solaris version 11.2 or later.
ifconfig (ifconfig -a): Mandatory. Shows interface information (sudo is needed to get the MAC
addresses).
pwdx (process_id): Mandatory. Gets the process information.
pargs (-e process_id): Mandatory. Gets the process information.
ps (-eo user, pid, ppid, comm, args): Mandatory. Gets the process list.
inetadm (-l or without parameters): Optional. Handles the case of an application using the inet daemon.
ls (ls -1HF, ls -w 1, ls -1): Lists directory contents.
Table 194: IBM Customer Information Control System (CICS) (on Unix)
Table 196: IBM CICS Transaction Gateway CTG (on Unix or Windows)
Table 230: Oracle WebLogic On-demand Router Load Balancer (on Unix)
You do not run these commands directly. Service Mapping uses commands requiring elevated rights as
part of the following processes:
• host detection
• process identification on port
• discovering CIs using patterns
Some commands are mandatory. The discovery process fails if Service Mapping cannot run these
commands.
Operating systems
Query: Description
Select * from Win32_ComputerSystem: Gets the server's basic information, such as the serial number.
Select LastModified from CIM_LogicalFile Where Name=...: Gets the last modification time of a file.
Select AddressWidth from Win32_Processor: Gets the computer architecture (32-bit or 64-bit).
Select * From Win32_Process where (ProcessId = ?): Gets the process list running on the computer.
Select Name from CIM_LogicalDisk: Gets the file systems on the computer, such as C: or D:.
Select FileName, Extension from CIM_Directory where Drive='drive' and Path='path': Gets the list of files
in a specific directory.
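To verify one of these queries manually, you can run it from a PowerShell console on the MID Server
host, in the same style as the troubleshooting commands earlier in this section (the IP address and user
name are placeholders):

   gwmi -Query "Select AddressWidth from Win32_Processor" -ComputerName <ip> -Credential '<username>'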
Applications
Table 254: ABAP SAP Central Services (ASCS) (on Unix or Windows)
Table 267: Cisco ACE Command Line Interface (on Cisco CSM)
Table 283: IBM CICS Transaction Gateway CTG (on Unix or Windows)
Table 311: Microsoft Internet Information Services (IIS) Virtual Directory (on Windows)
connection_strings_browser.exe [encrypted_configs[*].Name]: Optional. This command is used only for
creating encrypted database connections. The file is necessary for performing the put file operation on
the ConnectionStringsBrowser alias. Decrypts the database connections.
Table 341: Oracle WebLogic On-demand Router Load Balancer (on Unix or Windows)
Table 349: SAP Evaluated Receipt Settlement (ERS) (on Unix or Windows)
Table 354: SQL Server Integration Services (SSIS) Job (on Windows)
Table 361: Tibco Enterprise Message Service (EMS) (on Unix and Windows)
SNMP-based queries
Service Mapping accesses network infrastructure devices like load balancers and routers using Simple
Network Management Protocol (SNMP) v1/v2c/v3. Configure SNMP community credentials to enable this
type of access.
SNMP-based queries are object identifiers (OIDs), which take the form of dot-separated strings of
integers.
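To confirm manually that a device answers one of these OIDs, you can walk it with a standard SNMP
client from the MID Server host (the community string and IP address are illustrative):

   snmpwalk -v 2c -c public 10.0.0.1 1.3.6.1.4.1.9.9.46.1.3.1.1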
• 1.3.6.1.4.1.9.9.46.1.3.1.1
• 1.3.6.1.4.1.9.9.68.1.2.1.1
• 1.3.6.1.2.1.17.1.4.1
• 1.3.6.1.2.1.17.4.3.1
• 1.3.6.1.2.1.17.7.1.2.2.1
• 1.3.6.1.2.1.17.2.15.1
• 1.3.6.1.4.1.9.9.23.1.2.1.1
• 1.3.6.1.4.1.45.1.6.13.2.1.1
• 1.0.8802.1.1.2.1.4.2.1
• 1.0.8802.1.1.2.1.4.1.1
• 1.3.6.1.4.1.1872.2.5.1.1.1.11
• 1.3.6.1.4.1.1872.2.5.3.1.3.2.1
• 1.3.6.1.4.1.1872.2.5.3.1.2.4.1
• 1.3.6.1.4.1.1872.2.5.1.3.1.10
• 1.3.6.1.4.1.1872.2.5.4.1.1.4.2.1
• 1.3.6.1.4.1.1872.2.5.3.1.6.3.1
• 1.3.6.1.4.1.22610.2.4.1.1.1
• 1.3.6.1.4.1.22610.2.4.1.6.2
• 1.3.6.1.4.1.22610.2.4.3.4.1.2.1
• 1.3.6.1.4.1.22610.2.4.3.2.1.2.1
• 1.3.6.1.4.1.22610.2.4.3.3.1.2.1
• 1.3.6.1.4.1.22610.2.4.3.4.3.1.1
• 1.3.6.1.4.1.22610.2.4.3.3.3.1.1
• 1.3.6.1.4.1.2467.1.16.4.1
• 1.3.6.1.4.1.2467.1.31.3
• 1.3.6.1.2.1.1.2
• 1.3.6.1.4.1.2467.4.1
• 1.3.6.1.4.1.2467.4.2
• 1.3.6.1.4.1.2467.4.3
• 1.3.6.1.4.1.2467.4.4
• 1.3.6.1.2.1.1.1
• 1.3.6.1.4.1.2467.1.15.2.1
• 1.3.6.1.4.1.2467.1.16.4.1
• 1.3.6.1.4.1.2467.1.18.2.1
Citrix Netscaler
• 1.3.6.1.4.1.5951.4.1.1.14
• 1.3.6.1.4.1.5951.4.1.1.1
• 1.3.6.1.4.1.5951.4.1.1.26.1
• 1.3.6.1.4.1.5951.4.1.3.1.1
• 1.3.6.1.4.1.5951.4.1.1.23.2
• 1.3.6.1.4.1.5951.4.1.1.23.3
• 1.3.6.1.4.1.5951.4.1.1.7
• 1.3.6.1.4.1.5951.4.1.3.1.1
• 1.3.6.1.4.1.5951.4.1.2.1.1
• 1.3.6.1.4.1.5951.4.1.2.7.1
• 1.3.6.1.4.1.5951.4.1.3.2.1
• 1.3.6.1.4.1.5951.1.5.2.1
• 1.3.6.1.4.1.5951.4.1.3.3.1
• 1.3.6.1.4.1.5951.4.1.3.11.1
• 1.3.6.1.4.1.5951.4.1.2.1.1
• 1.3.6.1.4.1.5951.4.1.2.7.1
• 1.3.6.1.4.1.5951.4.1.3.2.1
• 1.3.6.1.4.1.5951.1.5.2.1
• 1.3.6.1.4.1.5951.4.1.3.3.1
• 1.3.6.1.4.1.5951.4.1.3.11.1
BIG-IP Local Traffic Manager (LTM) F5 (on F5 BIG-IP) and BIG-IP Global Traffic Manager (GTM) F5
• 1.3.6.1.4.1.3375.2.1.6.3
• 1.3.6.1.4.1.3375.2.1.4.1
• 1.3.6.1.4.1.3375.2.1.3.3.3
• 1.3.6.1.4.1.3375.2.1.6.2
• 1.3.6.1.4.1.3375.2.1.2.1.1.2.1
• 1.3.6.1.4.1.3375.2.2.10.1.2.1
• 1.3.6.1.4.1.3375.2.3.12.4.2.1
• 1.3.6.1.4.1.3375.2.3.12.1.2.1
• 1.3.6.1.4.1.3375.2.1.4.1
• 1.3.6.1.4.1.3375.2.1.6.2
• 1.3.6.1.4.1.3375.2.1.4.2
• 1.3.6.1.4.1.3375.2.2.10.1.2.1
• 1.3.6.1.4.1.3375.2.2.5.6.2.1
• 1.3.6.1.4.1.3375.2.2.10.8.2.1
• 1.3.6.1.4.1.3375.2.2.5.6.2.1
• 1.3.6.1.4.1.3375.2.2.10.1.2.1
• 1.3.6.1.4.1.3375.2.1.4.1
• 1.3.6.1.4.1.3375.2.1.4.2
• 1.3.6.1.4.1.3375.2.1.6.2
• 1.3.6.1.4.1.3375.2.3.11.1.2.1
• 1.3.6.1.4.1.3375.2.3.12.3.2.1
• 1.3.6.1.4.1.3375.2.3.12.4.2.1
• 1.3.6.1.4.1.3375.2.2.5.6.2.1
• 1.3.6.1.4.1.3375.2.3.12.3.2.1
• 1.3.6.1.4.1.3375.2.3.12.4.2.1
• 1.3.6.1.4.1.3375.2.3.12.5.2.1
• 1.3.6.1.4.1.3375.2.3.6.7.2.1
• 1.3.6.1.4.1.3375.2.3.11.4.2.1
• 1.3.6.1.4.1.3375.2.3.3.1.2.1
• 1.3.6.1.4.1.3375.2.3.3.3.2.1
• 1.3.6.1.4.1.3375.2.3.9.1.2.1
• 1.3.6.1.4.1.3375.2.3.9.3.2.1
• 1.3.6.1.2.1.1.1
• 1.3.6.1.2.1.1.5
• 1.3.6.1.4.1.14685.3.1.11.1
Juniper Junos
• 1.3.6.1.4.1.2636.3.1.3
• 1.3.6.1.4.1.789.1.1.2
• 1.3.6.1.4.1.789.1.1.9
Radware Alteon NG
• 1.3.6.1.4.1.1872.2.5.1.1.1.11
• 1.3.6.1.4.1.1872.2.5.3.1.3.2.1
• 1.3.6.1.4.1.1872.2.5.3.1.2.4.1
• 1.3.6.1.4.1.1872.2.5.1.3.1.10
• 1.3.6.1.4.1.1872.2.5.4.1.1.4.2.1
• 1.3.6.1.4.1.1872.2.5.3.1.6.3.1
• 1.3.6.1.4.1.1872.2.5.4.1.1.4.2.1
• 1.3.6.1.2.1.1.1
• 1.3.6.1.4.1.1872.2.5.4.1.1.4.5.1
• 1.3.6.1.4.1.1872.2.5.4.1.1.3.3.1
• 1.3.6.1.4.1.1872.2.5.4.1.1.2.2.1
• 1.3.6.1.2.1.4.24.4.1
• 1.3.6.1.4.1.89.35.1.13.1.2
• 1.3.6.1.4.1.89.2.12
• 1.3.6.1.4.1.89.2.4
• 1.3.6.1.4.1.89.35.1.40.52.1.1
• 1.3.6.1.4.1.89.35.1.13.1
• 1.3.6.1.4.1.89.35.1.30.1
• 1.3.6.1.2.1.68.1.3.1
• 1.3.6.1.4.1.89.35.1.13.1
• 1.3.6.1.4.1.89.35.1.11.1
• 1.3.6.1.4.1.89.35.1.40.52.1.1
Table 364: Applications and network devices requiring special rights and permissions
BIG-IP Local Traffic Manager (LTM) F5 (on F5 BIG-IP) and BIG-IP Global Traffic Manager (GTM) F5:
Provide a user with permissions necessary to run:
• bigpipe commands (for BIG-IP LTM F5 or BIG-IP GTM F5 version 9)
• bigpipe and Traffic Management Shell (TMSH) commands (for BIG-IP LTM F5 or BIG-IP GTM F5
version 10)
• Traffic Management Shell (TMSH) commands (for BIG-IP LTM F5 or BIG-IP GTM F5 version 11)
• Traffic Management Shell (TMSH) advanced commands (for BIG-IP LTM F5 or BIG-IP GTM F5
versions 10, 11, and 12)
Citrix XenApp, Citrix Presentation Server: Provide a user who has permissions:
• To run Citrix services
• To read and query the Citrix repository
IBM DB2 (on Linux): (For storage discovery only) Provide a DB2 OS user who has permissions to run
DB2 services.
IBM DB2 (on Windows): (For storage discovery only) Provide a DB2 OS user who has permissions to run
DB2 services.
IBM WebSphere Message Broker Flow (on Unix or Windows), IBM WebSphere Message Broker (on Unix
or Windows): Provide an IBM WebSphere Message Broker OS user with permissions to run the
WebSphere Message Broker service.
IBM WebSphere MQ (on Unix or Windows), IBM WebSphere MQ Queue: Provide an IBM WebSphere
MQ OS user with permissions to run the WebSphere MQ services.
IBM WebSphere DataPower: Configure SNMP credentials. Configure applicative credentials that allow
querying devices using SOAP commands.
SQL Server Reporting Server (SSRS): Provide an SSRS OS user with permissions to run the SSRS
Service.
You do not have to complete planning phases in Service Mapping in any particular order; you can
simultaneously make progress on testing and reviewing business services belonging to different phases.
2. Phase population on page 616
To begin the actual work on creating business services, you must add planned business services to
phases. Doing so creates an association between business services and their phases, which helps you
manage the creation of business services. In our example, you add all business services used by the
EMEA office to the EMEA phase.
3. Creation of actual business services during planning on page 621
You create each business service individually even if you simultaneously added multiple business
services to the phase. During this stage, an administrator responsible for planning business
services collects data and runs the mapping process. Then a staff member who is familiar with the
infrastructure and applications reviews mapping results for correctness and approves the newly
created business service.
Going through these actions may take a long time. As a rule, a phase contains planned business
services at different stages depending on how much progress you were able to make.
4. Click Submit.
The new phase appears in the list of phases.
Phase population
You associate planned business services to phases.
To begin the actual work on creating business services, you must add planned business services to
phases. Doing so creates an association between business services and their phases, which helps you
manage the creation of business services. In our example, you add all business services used by the
EMEA office to the EMEA phase.
You can associate business services using different methods, depending on how you have managed
your business services until now and what information you have gathered about them.
Import from the CSV file: This method suits you if your organization has performed cross-organization
mapping and analysis and collected some information about planned business services. If so, you can
organize the collected information in a specific order and save it as a CSV file. Service Mapping extracts
information from this file and associates multiple business services to a phase in one go.
Advantages: You associate multiple business services at a time, and there is a relatively low chance of
mistakes.
Disadvantages: You must prepare all the necessary information in a very specific format before you can
perform the import.
Import from a load balancer: If there is no information available about planned business services, you
can use your load balancers as the source of information. Service Mapping can extract entries directly
from load balancers and convert them into planned business services.
Advantages: You associate multiple business services at a time, and there is no preparation.
Disadvantages: The result of the import is not precise. There may be many false or duplicate business
services created by mistake because the data from the load balancer is raw.
Add business services manually: Use this method if you do not have any organized data on your
business services or you do not want to use other methods.
Advantages: There is no preparation, and there is a relatively low chance of mistakes.
Disadvantages: If you have many business services for a phase, the process may take a long time.
Warning: If information on business services is not formatted in the .xlsx file according to the
template, Service Mapping fails to create planned services or creates them with errors.
3. Save the file, with the .csv extension, to a drive that you can access during the import.
This method suits you if your organization has performed cross-organization mapping and analysis
and collected some information about planned business services. If so, you can organize the collected
information in a specific order and save it as a CSV file. Service Mapping extracts information from this file
and associates multiple business services to a phase in one go.
1. Navigate to Service Mapping > Service Map Planner > Phases.
2. Click the relevant phase.
3. On the Phase page, click Import from CSV.
4. In the Select CSV file to import window, click Choose file.
5. Navigate to the CSV file to use for the import and click Choose.
6. Click Import.
The newly associated business services appear under Business Service Planning.
7. If necessary, troubleshoot import-related errors.
a) Click the Import Errors tab.
b) Review information on business services that Service Mapping failed to import.
c) Create a separate CSV file to these business services.
Note: Make sure that the information and its format are correct.
d) Import business services from the second CSV file by repeating steps 3 on page 619 through 6
on page 619.
443 HTTPS
80 HTTP
5. Click Submit.
Notice that although the service deployment owner fixes the actual business service, the application
owner provides feedback on and approves the draft version of it: the planned business service. This
approach keeps all the information related to planning and creating the business service in one central
place.
Creating an actual business service during planning has the following flow:
Collect data
A service deployment owner or an application owner collects data for the planned business service.
Role required: sm_admin or sm_app_owner
At this stage, you are planning your business service and collecting information that Service Mapping
later uses to create, review, and approve this business service.
You provide information on people responsible for creating and reviewing your business service.
Make sure that your planned business service has some entry points assigned to it. An entry point is a
property of a connection to a configuration item (CI). Service Mapping starts the mapping process from
this point. You cannot create an actual business service, review, and approve the planned business
service unless it has at least one entry point. Even if this planned business service has some entry points
assigned to it during import, you can add additional entry points if necessary.
Some information, such as information about components, is not used to create a business service, but to
review it at a later stage.
1. Navigate to Service Mapping > Service Map Planner > Phases.
2. Select the phase which contains the business service.
3. Click the name of the business service.
4. (Mandatory) Define the application owner for this business service:
The application owner is familiar with the infrastructure and applications making up the service. This
user provides information necessary for successful mapping of a business service. Once a service
is mapped, this user reviews the results and either approves it or suggests changes. The application
owner cannot create or modify business services.
a) Click the Add me icon next to Application Owner to define yourself as the application owner.
Or
b) Click the Edit Application Owner (padlock) icon next to Application Owner and enter one or
more names. If necessary, enter an email.
5. Define the service deployment owner for this business service in the Service Deployment Owner
field.
The service deployment owner is responsible for creating and fixing business services during
planning.
Select the name from the list. Separate multiple names with a semicolon (;).
6. If there is no entry point, add it:
An entry point is a property of a connection to a configuration item (CI). Service Mapping starts the
mapping process from this point. For example, to map your electronic mailing business service, define
an email address.
a) Click New on the Planned Entry Points tab.
b) Select the relevant type for the entry point from Entry point type.
If there is no entry point type you need, enter its description on the Other Entry Points tab.
c) Define other entry point attributes that vary depending on the type you selected.
d) Click Submit.
7. Provide information for components that are part of this business service, and that you know of:
a) Click the Components tab.
b) Click New.
c) Enter a description of the business service component.
d) Click OK.
The number of hops performed by the actual test depends on the ability of Service Mapping to connect
to hosts in the next level. If you use Netflow to collect data, you can see CIs on all levels even if
Service Mapping cannot connect to them.
8. Click OK.
The Connectivity Test page opens.
9. Verify that the status of the test is Completed.
10. Review the result of the test:
• If the status and the message state that the IP address is found, Service Mapping can connect to
this host.
• If the status is Info, there is a possibility that there is a connectivity problem or that this host is not
in the CMDB.
• If the status is Not found, either there is a connectivity problem or this host is not in the CMDB.
11. Click the IP address to read the full message. Use this information to troubleshoot connectivity issues.
To troubleshoot, you may need to:
• Run discovery on a problematic CI either from Discovery or Service Mapping.
• Define missing or wrong credentials for this CI.
12. Perform this procedure again on the same entry points to verify that Service Mapping no longer
experiences the same connectivity issues.
Ideally, only the service deployment owner should create the actual business service; however, any user
with the sm_admin role can perform this task.
When Service Mapping creates an actual business service during planning, it performs the following
validation:
• Checks that the entry point is not used for another business service.
• Checks that the business service name is unique and does not exist in the CMDB.
7. (Mandatory) Enter the name of the application owner. The application owner is familiar with the
infrastructure and applications making up the service. This user provides information necessary for
successful mapping of a business service. Once a service is mapped, this user reviews the results and
either approves it or suggests changes.
8. Click Send for Review.
At this stage, you let the application owner review mapping results and provide feedback in the form of
notes on the planned business service page.
the business service. Typically, it takes several iterations to arrive at the desired result. Once the service
deployment owner fixes all issues, you can approve the business services.
Ideally, only the application owner should approve the planned business service; however, any user with
the sm_app_owner role can perform this task.
1. Navigate to the planned business service:
a) Navigate to Service Mapping > Best Practice > Planned Business Services.
b) Sort the list of planned business services by status and scroll to see services in In Review status.
Or
Use the filter to narrow the list. For example, filter by your name if you are the application owner.
c) Click the required business service.
the domain to which the business service belongs from the domain picker.
It must be a domain without any child domains.
• Pattern-based mapping: Service Mapping discovers and maps configuration items (CIs) using
patterns. A pattern is a sequence of algorithms that establish the parameters of a CI as well as its
outbound connections. Service Mapping always uses the pattern-based method.
• Traffic-based mapping: Service Mapping discovers and maps CIs by detecting their inbound and
outbound traffic. You can use this method in addition to the pattern-based mapping. This method is
especially useful during the initial stages of mapping.
• Entry point: Whichever method you choose for mapping, specify an entry point.
An entry point is a property of a connection to a configuration item (CI). Service Mapping starts the
mapping process from this point. For example, to map your electronic mailing business service, define
an email address.
Entry points vary depending on the nature of the business service. A printer, URL, or phone
number can all serve as entry points to your services. Service Mapping comes with a wide range of
preconfigured entry point types that cover most commonly used applications.
After Service Mapping discovers the CIs belonging to your business service for the first time, it runs
discovery on these CIs again to find changes and updates. Discovery schedules determine which CIs
Service Mapping rediscovers and how often.
There are some attributes which you configure differently depending on what entry point they relate to, as
described in the table below:
Table 367: Entry point attributes specific for some entry points
Groups can make your service lists much shorter and cleaner. It is especially important for large
organizations that have hundreds of business services.
You can also have a hierarchy of service groups with some groups embedded within others. If users have
access to a parent business service group, they automatically have access to all its child groups.
You can assign business services to groups and reorganize groups at any time.
When you delete a business service group, business services in the group are not deleted.
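To make the access inheritance concrete, here is a minimal sketch of the lookup a platform might perform. It is illustrative only; the group names, user names, and data layout are hypothetical, not ServiceNow code:

# Illustrative sketch: access to a parent business service group implies
# access to all of its child groups. All names here are hypothetical.
groups = {
    # child group -> parent group (None for top-level groups)
    "Trading": None,
    "Trading EMEA": "Trading",
    "Trading APAC": "Trading",
}
user_group_access = {"alice": {"Trading"}}  # direct grants

def has_access(user, group):
    """A user can access a group if they were granted the group itself
    or any of its ancestors."""
    current = group
    while current is not None:
        if current in user_group_access.get(user, set()):
            return True
        current = groups.get(current)
    return False

print(has_access("alice", "Trading EMEA"))  # True: inherited from the parent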
1. Create a business service group.
a) Navigate to Service Mapping > Services > Service Groups.
b) Click New.
c) Enter the name of the new business service group in the Name field.
d) To embed this group in another group, enter the name of the other group in the Parent Group
field.
Make sure that you have created service groups as described in Organize business services into groups
on page 635.
Role required: admin
Users inherit permissions from roles that are assigned to them. Service Mapping provides these
preconfigured roles:
sm_admin Sets up the Service Mapping application. Maps,
fixes, and maintains business services. Also
performs advanced configuration and customization
of the product. Assign this role to application
administrators.
Typically, enterprises have hundreds of services, which makes it impractical to manage them individually.
How you organize business, manual, or technical services depends on the user and service provisioning
policies of your enterprise. The relation between business services in groups is purely logical, and the same
business service can be part of more than one group. For more information about service groups, see
Organize business services into groups on page 635.
You can also have a hierarchy of service groups with some groups embedded within others. If users have
access to a parent business service group, they automatically have access to all its child groups.
By assigning a Service Mapping role to a service group, you allow all users with this role to access all
services belonging to this group.
In the base system, all services are assigned to the All service group that lets all users view and manage
the services. When you assign a role to a service group, the users with this role can access only IT
services in this service group. Users cannot access any IT services, which are not part of that service
group.
You can give access to services to existing or new users. You can also add users to groups to assign roles
simultaneously to multiple users.
You can assign some roles directly to business service groups. However, most enterprises choose to
organize their roles as a hierarchy. It helps to manage roles across multiple ServiceNow applications. For
example, the Service Mapping administrator [sm_admin] can be part of a broader administrator role like
administrator [admin]. By assigning roles to user groups, you give permissions and rights of this role to all
users belonging to the same group.
When you delete a business service group, business services in the group are not deleted. The connections
between the role and the services making up this group (responsibilities) are removed.
1. If using Service Mapping, navigate to Service Mapping > Services > Service Group
Responsibilities.
2. If using Event Management, navigate to Event Management > Services > Service Group
Responsibilities.
3. Click New.
4. In the Business Service Group field, enter the name of the group.
5. In the Role field, enter the name of the role.
For example, evt_mgmt_admin.
6. Click Submit.
When you define a new business service, Service Mapping performs discovery of all CIs that participate
in this business service and creates its map. After the initial mapping is complete, Service Mapping
regenerates business service maps regularly by rediscovering CIs making up a business service.
In the base system, Service Mapping is preconfigured with a generic schedule for discovering all CIs of
the application type. CIs of the application type derive from the cmdb_ci_appl table. The generic schedule
triggers discovery of all applications in your organization once a day. In addition to the generic discovery
schedule for applications, there is a preconfigured schedule for discovering load balancers deriving
from the cmdb_ci_lb_service table. Typically, these two preconfigured schedules are enough to update
information for business services.
Part of discovering an application CI is identifying its host. Service Mapping checks if the device hosting
this application CI exists in the CMDB. If not, Service Mapping triggers Discovery to perform host detection.
The information on hosts is updated in the CMDB when Discovery runs horizontal discovery. You can
manage the schedules that trigger horizontal discovery as described in Schedule discovery on page 28.
Some CI types are prone to more frequent changes and updates than others, so you can manage the load
by adjusting the discovery schedule to match the nature of each CI type. Having customized discovery
schedules also allows you to avoid redundant stress on your infrastructure. For example, if you modify
Apache Web server settings more often than you update your applications, create discovery schedules
that rediscover Apache Web servers more frequently than other CIs.
If necessary, you can create the following discovery schedules for application CIs:
• For CI types
Service Mapping discovers all CIs belonging to this type
• For specific CIs
Service Mapping discovers only one CI that you specified for this schedule
When you define discovery schedules based on CI types, several schedules may apply to the same CI.
To avoid discovering the same CIs more than once, the most specific schedule always takes precedence.
For example, if there are two CI types one of which is a parent and the other is its child, and you create
discovery schedules for both of them, CIs belonging to the child CI type are discovered only using the
schedule for the child CI type, not the parent CI type. At the same time, if there is no schedule for a child CI
type, the parent CI type schedule is used to discover the child CIs.
If you define a discovery schedule for a specific CI as well as a schedule for the CI type to which this CI
belongs, Service Mapping uses the schedule for this specific CI, and not the generic schedule for its CI
type.
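The precedence rules above can be summarized in a small sketch. This is an illustrative model only; the table names and schedule records are examples, and the actual logic is internal to Service Mapping:

# Illustrative sketch of schedule precedence: a schedule for a specific CI
# wins over any CI type schedule, and a child type schedule wins over its
# parent's. All records below are hypothetical examples.
ci_type_parent = {
    "cmdb_ci_apache_web_server": "cmdb_ci_web_server",
    "cmdb_ci_web_server": "cmdb_ci_appl",
    "cmdb_ci_appl": None,
}
schedules_by_ci = {"apache_prod_01": "hourly schedule for this specific CI"}
schedules_by_type = {"cmdb_ci_appl": "generic daily schedule for applications"}

def pick_schedule(ci_name, ci_type):
    # 1. A schedule defined for the specific CI always wins.
    if ci_name in schedules_by_ci:
        return schedules_by_ci[ci_name]
    # 2. Otherwise walk up the class hierarchy: the most specific CI type
    #    that has a schedule takes precedence over its parents.
    current = ci_type
    while current is not None:
        if current in schedules_by_type:
            return schedules_by_type[current]
        current = ci_type_parent.get(current)
    return None

print(pick_schedule("apache_prod_01", "cmdb_ci_apache_web_server"))
# -> hourly schedule for this specific CI
print(pick_schedule("apache_prod_02", "cmdb_ci_apache_web_server"))
# -> generic daily schedule for applications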
1. Navigate to Service Mapping > Administration > Discovery Schedules.
2. To create a specific schedule, click New.
Or
To customize the generic schedule for all application CIs, click All Applications:
The Discovery Schedule page opens.
Max run time: Set a time limit for running this schedule. When the configured time elapses, the remaining
discovery tasks are canceled, even if the scan is not complete. Use this field to limit system load to a
desirable time window. If no value is entered in this field, the schedule runs until complete.
Run and related fields: Determines the run schedule of the discovery. Configure the frequency in the Run
field and use the other fields that appear to specify an exact time.
Daily: Runs every day. Use the Time field to select the time of day this discovery runs.
Weekly: Runs on one designated day of each week. Use the Time field to select the time of day this
discovery runs.
Monthly: Runs on one designated day of each month. Use the Day field to select the day of the month
and the Time field to select the time of day this discovery runs. If the designated day does not occur in a
month, the schedule does not run that month. For example, if you designate day 30, the schedule does
not run in February.
Periodically: Runs every designated period of time. Use the Repeat Interval field to define the period in
days, hours, minutes, and seconds. The first discovery runs at the point in time defined in the Starting
field. Subsequent discoveries run after each Repeat Interval period passes.
Once: Runs one time, as designated by the date and time defined in the Starting field.
On Demand: Does not run on a schedule. Click the Discover now link to run this discovery.
Weekdays: Runs every Monday, Tuesday, Wednesday, Thursday, and Friday. Use the Time field to select
the time of day.
Weekends: Runs every Saturday and Sunday. Use the Time field to select the time of day.
Month Last Day: Runs on the last day of every month. Use the Time field to select the time of day.
Calendar Quarter End: Runs on March 31, June 30, September 30, and December 31. Use the Time field
to select the time of day. To change these dates, modify the script include DiscoveryScheduleRunType.
After Discovery: Allows you to stagger schedules sequentially. Use this option to run this schedule after
the Discovery designated in the Run after field finishes. Check the Even if canceled check box to
designate that this discovery should run even if the Run after Discovery is canceled before it finishes.
• This option is not valid when the Discovery is started via DiscoverNow, or when using the Discover CI
feature.
• You cannot designate an inactive Discovery schedule.
• You cannot create a loop by designating the run after Discovery to be the same Discovery.
• This Discovery does not run if the Run after discovery does not finish, unless the Even if canceled
check box is selected and that Discovery is canceled.
5. Click Submit.
If your ServiceNow instance uses domain separation and you have access to the global domain, select the
domain to which the business service belongs from the domain picker. It must
be a domain without any child domains.
Role required: sm_admin
• Ensure that the traffic-based discovery is enabled at the Service Mapping product level: navigate to
Service Mapping > Administration > Properties and verify that the Traffic based discovery check
box is selected.
• Enable traffic-based discovery for a specific business service as described in Define a business service
on page 628.
You may choose to use traffic-based discovery and mapping in addition to the pattern-based mapping.
You can create rules to configure Service Mapping to use traffic-based discovery for certain CIs. You can
configure traffic-based discovery rules to monitor specific CIs or types of CIs.
Rules for specific CIs take precedence over rules for CI types. For example, if you do not want to use
traffic-based discovery on any Tomcat servers, you can define a CI type rule disabling the traffic-based
discovery on the Tomcat table. At the same time, you can create a discovery rule enabling the traffic-based
discovery for a specific Tomcat server. In that case, Service Mapping uses the traffic-based discovery only
for this specific Tomcat server out of all Tomcat servers.
If you used both pattern-based and traffic-based mapping for a business service, its map displays CIs and
hosts that Service Mapping discovered using both methods. While detecting CI inbound and outbound
traffic creates a very inclusive map, it may also result in mapping many redundant CIs that do not influence
the business service operation.
If your instance uses domain separation, you can create traffic-based rules for specific domains. Rules in
the base system are assigned to the global domain and apply to all domains of all levels.
When you create a rule for a specific domain, the new rule is used only for this domain and does not exist
in any other domains. If you customize an existing rule in the global domain and assign it to a specific
domain, you create a copy of the global rule, which is still used in all other domains except the domain that
has the customized version of this rule. Likewise, if you customize a rule in the global domain, the change
affects all domains except the one that uses a customized copy of this rule.
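Taken together, the two precedence rules (a rule for a specific CI over a rule for its CI type, and a domain-specific copy of a rule over its global counterpart) can be sketched as follows. This is an illustrative model with hypothetical rule records, not the actual implementation:

# Illustrative sketch of traffic-based discovery rule resolution.
rules = [
    # (domain, target_kind, target, traffic_based_enabled)
    ("global", "type", "cmdb_ci_app_server_tomcat", False),
    ("global", "ci", "tomcat_billing_01", True),
    ("finance", "type", "cmdb_ci_app_server_tomcat", True),
]

def traffic_discovery_enabled(ci, ci_type, domain):
    def lookup(kind, target):
        # A rule in the CI's own domain wins over the global rule.
        for dom in (domain, "global"):
            for r_dom, r_kind, r_target, enabled in rules:
                if (r_dom, r_kind, r_target) == (dom, kind, target):
                    return enabled
        return None

    # A rule for the specific CI takes precedence over a CI type rule.
    for kind, target in (("ci", ci), ("type", ci_type)):
        result = lookup(kind, target)
        if result is not None:
            return result
    return False  # assumed default when no rule matches

# The specific-CI rule overrides the global type rule:
print(traffic_discovery_enabled("tomcat_billing_01",
                                "cmdb_ci_app_server_tomcat", "global"))  # True
# The finance domain uses its own copy of the type rule:
print(traffic_discovery_enabled("tomcat_hr_02",
                                "cmdb_ci_app_server_tomcat", "finance"))  # True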
1. Navigate to Service Mapping > Administration > Traffic Based Discovery.
2. Click New.
3. Define the rule parameters as follows:
4. Click Submit.
5. To fine-tune traffic based discovery, define advanced parameters as follows:
Discovery performs discovery through a MID Server located in the enterprise private network. A behavior
determines:
• What MID Server Discovery is using.
• What probes and patterns the chosen MID Server is running.
Run and related fields: Determines the run schedule of the discovery. Configure the frequency in the Run
field and use the other fields that appear to specify an exact time.
Important: The IP range for the schedule must correlate with the IP range of the MID Server
that you assigned to the behavior. The MID Server must also have the appropriate application
or the ALL application.
4. Click Submit.
1. Navigate to Service Mapping > Services > Business Services.
2. Click View map next to the business service that you want to view.
3. Ensure that the map opens in Edit mode.
4. Check that all essential CIs are discovered and mapped correctly, and correct any issues as
necessary.
To add a connection:
1. Right-click the CI, and select Add manual connection.
2. Enter connection properties as described in Entry point attributes on page 630.
3. Click Submit.
Remove a CI that does not belong in the business service: Right-click the connector leading to the CI,
and select Mark boundary. The CI is marked as a boundary on the map.
Include a CI that was mistakenly removed from the business service: Right-click the connector leading to
the CI, and select Unmark boundary.
Check that the connections between CIs are correct:
1. Click a CI to see all its connections.
2. Click the connection you want to check. The selected connection appears highlighted.
3. Review the connection details in the Properties pane.
Check that clusters are reflected correctly: Click the plus (+) icon next to a CI. There are two types of
clusters. Application clusters can have a cluster within a cluster, but never more than two levels.
Check that inclusions are reflected correctly: Click the plus (+) icon next to a CI. In an inclusion, a server
hosts applications that are treated as independent objects. For example, IIS Virtual Directory can run on
a Windows Server as its host.
5. Check a list of CI traffic-based connections to identify CIs that Service Mapping failed to discover or
did not identify correctly.
Note: You can see traffic-based connections for a CI, even if you disabled the traffic-based
discovery for a business service the CI is part of.
Note: There may be cases where you can see traffic-based connections on the map, but
the Traffic Based Connections list does not display them. This happens if the connections
were removed from the cmdb_tcp table less than three days ago.
d) If the Host view is on, switch it off by clicking More Options and clicking the Display in Host
View toggle.
4. To transfer the segment into an existing business service map and connect to it:
a) Select Connect to existing service.
b) In the Add To Existing Discovered Service window, select the business service to which you want
to transfer the segment.
c) Click Choose.
The map displays the connected business service icon instead of the transferred segment. The
business service map to which you transferred the segment, shows it under the newly added
entry point. For example, the Tomcat business service map contains the Trade connected service
CI. The Trade business service map shows the segment transferred from the Tomcat business
service.
5. To transfer the segment into a new business service map and connect to it:
a) Select Create new service from here.
b) In the Create New Discovered Service window, define attributes for the business service you want
to create for this segment:
c) Click Create.
The map displays the connected business service icon instead of the transferred segment. The
new business service appears in the list of business services and contains the segment of the
map. For example, the Tomcat business service includes the TP rooms connected service CI.
The TP rooms business service map contains the segment transferred from the Tomcat business
service.
4. Click anywhere in the map outside CIs and connections to select the entire business service.
5. Review discovery and mapping errors:
Find a CI correlating with an error: Click the error on the Discovery Messages tab. In the map, the
related CI or CI connection is marked in yellow.
View more details related to an error: Right-click the error and select Show discovery log. If you identify
a pattern-related error in the Discovery Log window, fix the pattern as described in Troubleshoot
pattern-related mapping errors on page 661.
6. Resolve errors as follows:
Symptom: The business service map displays the warning icon on top of or instead of the configuration
item, and the following error message displays for the configuration item: Failed to execute command
using sudo on host <host IP address>.
Resolution: KB0621669: Resolving failure to execute commands using sudo during Discovery in Service
Mapping

Symptom: The following error message displays for the configuration item: Access is denied.
Resolution: KB0564283: Resolving an issue of denied access to a Windows Server

Symptom: The business service map displays the warning icon on top of or instead of the Windows
Server, and the following error message displays for the Windows Server: Failed to execute WMI
command on host.
Resolution: KB0564296: Resolving failure to run commands on Windows Servers during discovery in
Service Mapping

Symptom: The business service map displays the warning icon on top of or instead of the Windows
Server, and the following error message displays for the Windows Server: RPC Server unavailable.
Resolution: KB0564282: Resolving issue of RPC Server unavailable during Windows Server discovery in
Service Mapping

Symptom: The business service map displays the warning icon instead of the load balancer CI, and the
following error message is displayed: Service Mapping triggered the horizontal discovery to find the host
x.x.x.x, because this host was not in the CMDB. The horizontal discovery failed. See discovery status for
more info.
Resolution: KB0610414: Resolving failure to discover operating system for a load balancer

Symptom: Instead of the map, the following error message displays: Cannot display the map. Topology
too large.
Resolution: KB0596671: Resolving a large topology issue in business service maps

Symptom: A Virtual IP (VIP) for a load balancer service CI is sometimes updated with the IP addresses
of another load balancer.
Resolution: KB0610412: Resolving load balancers merging into one CI in Service Mapping

Symptom: The map displays incorrect virtual IP names for load balancers.
Resolution: KB0610413: Resolving incorrect virtual IP names of load balancers in Service Mapping

Symptom: The business service map consists of different CIs every time you run the mapping process
on the same entry points to discover the same business service.
Resolution: KB0595227: Troubleshooting inconsistent mapping results

Symptom: A cluster CI appears detached from all CIs in map tiers above or below it.
Resolution: KB0621531: Resolving disconnected cluster CIs on the business service map

Symptom: The map displays either the load balancer configuration item (CI) with a warning icon or just
the warning icon, and the following error message appears for the CI that is expected to be the load
balancer in the business service: Failed to recognize application. See the discovery log for more details.
Resolution: KB0621576: Resolving failure to discover a VIP for a load balancer

Symptom: The business service map shows a different load balancer from the one you expected to see
in this business service.
Resolution: KB0621616: Business service map displays a wrong load balancer in Service Mapping

Symptom: The business service map displays the warning icon on top of or instead of the configuration
item, and the following error message displays for the configuration item: SSH command timed out on
host.
Resolution: KB0621670: Resolving the issue of SSH command time out during discovery in Service
Mapping
Mapping errors appear on the business service map as warning icons. Typically, mapping errors happen
when Service Mapping fails to connect to a CI or fails to recognize it due to an inaccurate pattern.
You can identify problematic steps in your pattern and fix them without reviewing all steps and operations
contained in the pattern. It allows you to troubleshoot pattern-related errors quickly and effectively.
4. Right-click the CI with the warning icon and select Show discovery log.
The Discovery Log window opens showing mapping patterns and their details.
5. Expand the failed pattern, which appears above the result marked in red.
7. Click Debug.
The Pattern Designer window opens showing the pattern with its steps in the left pane.
8. Click the first step and wait for Pattern Designer to run it. Repeat this action for other steps following
the order until you reach the faulty step that causes the error.
In the example below, the port number is 8081 instead of 8080, which causes the problem.
9. Click OK.
10. Fix the pattern attribute that causes the problem.
Planning the migration of an entire business service or its segments: Use the map to see which
configuration items (CIs) need relocating or which CIs become redundant.
Planning the upgrade or replacement of a CI: Check the map to see what other CIs are affected when
the CI is non-operational during maintenance. You can assess and plan the downtime or make
provisions to avoid the downtime. Because the same CI may be used for multiple business services,
check the CI using the Dependency Views application. With this application, you can see all business
services that a CI belongs to.
Analyzing a business service for high availability and continuity: Use the map to identify CIs crucial for
the service performance. Decide if you want to fortify these CIs by creating clusters.
Troubleshooting business services: Use the map to understand the impact of an issue and determine
which CI is causing the problem.
Map window
Beside the map itself, the map window also displays the Properties pane and tabs with additional
information.
The kind of information and messages displayed at the bottom of the screen depends on the mode as
well as on the way you customize the map view. Service Mapping users see maps in the view mode,
while administrators use the edit mode that shows discovery messages as explained in Check a business
service map on page 646. For more information on map views, see Modify view for business service map
on page 696.
You can navigate to a different business service map directly from this window by selecting it from the list.
By default, the entire map is shown in the center of the visible map area of the window. You can zoom in
and out, and position the map depending on which segment you need to see, using the map controls.
You can also click anywhere in the map area and drag the required segment of the map into the visible
area.
You can view changes made to a business service as a whole and to individual CIs belonging to a service
by choosing a time range. For more information, see View change history of business services on page
686.
Every map consists of icons representing CIs and arrows that represent the connections between them.
Some map elements can contain other elements such as in the case of application clusters. On the map,
application clusters appear as a stack of CIs with the plus (+) symbol next to it. The cluster label shows the
number of CIs in this cluster with the multiplication sign.
When the CI has inclusions, the CI icon has the plus (+) symbol next to it and shows the contained
application when you expand it.
An OS cluster appears as a CI with the plus (+) symbol and the number of CIs in this cluster.
When a map is loaded and no elements are selected, the Properties pane shows the details of this
business service.
A selected device, application, or connector displays in blue and is highlighted. Information about the
selected map element is displayed in the Properties pane on the right of the map. The Properties pane
contains the links to the Detailed Properties page.
The properties for the server hosting applications and applications themselves are shown separately inside
the Properties pane.
If there is any information related to the selected CI or connection, it is highlighted on the tab at the bottom
of the window.
When you select information on a tab at the bottom of the map window, the related CI displays in yellow.
The View mode allows Service Mapping users to view business services without modifying them.
If you are logged in as a Service Mapping administrator, a business service map opens in the Edit mode
that shows discovery messages and warning icons that indicate discovery errors on maps.
In the Edit mode, you can modify the business service by manually adding or removing CIs, resolving
connections, and so on. For more information about modifying business services in the Edit mode, see
Troubleshoot maps in Service Mapping on page 657.
Each CI type has different properties. For example, the Linux Server type has different properties than the
SQL Instance type.
Properties available for viewing also depend on the Service Mapping setup.
1. Navigate to Service Mapping > Services > Business Services.
2. Click View map next to the business service including the CIs that you want to view.
3. To see the full name of a CI that has been shortened in the map, point to the CI.
A tooltip displays the full CI name.
4. Click the CI to see details about it in the Properties pane.
The properties of applications and the servers that host them are shown separately.
5. To view more detailed properties for the CI, click Detailed properties at the bottom of the Properties
pane.
By default, Service Mapping merges connection lines for the same CI to declutter a business service map.
It helps to make the map more readable.
Each connection type has different properties. For a merged connection line, all underlying connections are
listed.
Properties available for viewing also depend on the Service Mapping setup.
1. Navigate to Service Mapping > Services > Business Services.
2. Click View map next to the business service including the connections you want to view.
3. View connection properties as follows:
To view the source and target CIs for a merged connection: Right-click the merged connection line with
a number and select the relevant connection. Or, if a segment of a connection line is shared by more
than one merged connection (there is no indication that it is a merged line), right-click the connection
line coming out of or going into a CI, and select the relevant connection.
To view the source and target CIs for a connection if the spanning tree view is enabled (the spanning
tree view simplifies a business service map by concealing most of the connection lines):
1. Click the CI whose connections you want to view. All concealed connections for this CI appear on
the map.
2. Click the relevant connection line.
To view detailed properties of a connection segment shared by more than one merged connection in the
Properties pane: Click the connection line coming out of or going into a CI, select the relevant
connection, and then choose Select edge.
If necessary, you can mark times on the history scale by creating baselines. It lets you quickly return to the
marked view.
1. Navigate to Service Mapping > Business Services > Discovered Services.
2. Click View map for the business service for which you want to see change history.
The Changes tab shows the change records that Service Mapping created for this business service.
3. On the history timeline, set the time range of changes that you want to view.
To set the time range of the history timeline: Click the hour, day, week, or month icons.
To increase or decrease the time range: Click the zoom in and zoom out icons.
To change the upper limit on your history range: Click on the history scale.
Note:
You cannot set the lower limit on your
history range to a time before this
business service was created. This time
is marked with the Business Service
Created event on the history scale.
The map shows the history view of the business service for the time you selected.
Note: The Changes tab displays all change records, even the ones that are filtered out of
the history view.
b) Navigate to the time you want to mark as a baseline on the history scale.
c) Click Set baseline.
To see the CI responsible for a change record: Select a change record on the Changes tab. The related
CI is marked yellow in the map.
To see only change records related to a CI: Select the required CI or connection on the map. The
Changes tab displays only change records related to the selected CI or connection.
To see the network at a selected moment in the past:
1. Set the time on the history scale.
2. Right-click the connection and select Show network path.
A new tab opens displaying the network or storage path map for the time you selected.
6. To exit the history view and see the current status of the business service, click the current icon.
b) If necessary, increase or decrease the time range by clicking the zoom in and zoom out icons.
c) To change the upper limit on your history range, click the history scale.
The upper limit appears above the history scale.
You cannot move the lower limit on your history range to before the Business Service Created
event.
The map shows the history view of the business service for the time you selected.
Note: The Change tab at the bottom of the screen displays all change records even if you
limit the range on the history scale.
5. Define the time before the changes in the Compare point 1 field and the time after the changes in the
Compare point 2 field.
Or
Drag the pointers on the history scale to set corresponding time points.
If the history scale does not include the time set as a compare point, its corresponding pointer appears
next to the compare point in yellow:
6. Click Compare.
The comparison view opens in a separate tab.
7. Select a marked CI to see the relevant change record on the Changes tab.
Service Mapping discovers and displays network devices located on your organization's local network, not
on the WAN or the public cloud. When Service Mapping encounters a connection between two applications,
it uses traceroute on the source host to figure out the path to the target host. Then Service Mapping uses
SNMP credentials to discover the devices it found on the path.
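As a rough illustration of this mechanism (not Service Mapping code), the following sketch runs the system traceroute utility and collects the hop addresses that would then be queried over SNMP. The target address and the traceroute output format are assumptions for the example:

# Illustrative sketch: resolve the hops toward a target host with traceroute.
import re
import subprocess

def traceroute_hops(target_ip):
    """Return the hop IP addresses on the path to the target host."""
    out = subprocess.run(["traceroute", "-n", target_ip],
                         capture_output=True, text=True).stdout
    # Each traceroute line starts with a hop number followed by an address.
    return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, re.MULTILINE)

for hop in traceroute_hops("10.0.0.42"):
    # In the real flow, each discovered hop would then be queried with
    # SNMP credentials to identify the network device; here we only print it.
    print("hop to query over SNMP:", hop)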
Service Mapping can show devices for storage paths using the following protocols:
• Fiber Channel: Storage Area Network (SAN)
• IP Ethernet: Internet Small Computer Systems Interface (iSCSI) or Network Attached Storage (NAS)
Note: For storage paths using Fiber Channel, Service Mapping does not display Fiber Channel
switches.
Between a host CI and a storage CI there may be two separate storage paths using different protocols:
Fiber Channel and IP Ethernet. If so, the business service map displays one connection line for both
protocols.
You can see the change history for network and storage paths for business services. Checking path history
for manual or technical services is not supported.
1. Navigate to Service Mapping > Services > Business Services and click View map next to the
relevant business service.
Or
2. Navigate to Event Management > Dashboard and double-click a business service.
3. On the map, right-click the connection for the network or storage path you want to see.
4. Select Show network path or Show storage path.
5. If necessary, select the type of protocol and click Choose in the Choose Protocol Type dialog box.
The view showing the network or storage path opens in a new tab.
If Service Mapping cannot discover all objects in a connection, the connector line on the network or
storage path appears dotted as shown above.
6. For storage paths, use the Storage Mapping tab at the bottom of the screen to view correlation
between file systems on the host and the storage volume or the shared folder on the storage device.
7. To view the change history for network paths, perform steps in View change history of business
services on page 686.
8. If you access the network or storage path map from Event Management, the list of alerts
related to this path appears at the bottom of the screen.
Properties you show or hide when you customize maps are not removed permanently and can be
displayed or hidden again as you need. Service Mapping saves the changes to the map view and displays
the last map view when you reopen a business service.
1. Navigate to Service Mapping > Business Services > Discovered Services.
2. Click View map next to the relevant business service.
3. Click the More options menu.
Display in Host View: Switch between the CI view that hides hosts and the host view that hides CIs. The
host view can help determine that Service Mapping discovered and mapped all hosts correctly, as well
as help locate connections to each host. The host view also simplifies the map, especially when a large
number of applications are running on a small number of hosts. When the option is on, the business
service map shows hosts; when it is off, the map shows CIs.
Remove Topology Cycles: Hide connections returning to the same configuration item (CI) that originated
them to reduce map cluttering. In most cases these returning connections are not important for analysis
or troubleshooting of business services. When the option is on, the business service map hides topology
cycles; when it is off, the map shows topology cycles.
Show Traffic Based CIs: Display configuration items (CIs) that were found using traffic-based mapping.
While detecting CI inbound and outbound traffic creates an inclusive map, it may also result in mapping
many redundant CIs that do not influence the business service operation. When the option is on, the
business service map displays CIs and hosts that Service Mapping discovered using both patterns and
traffic-based discovery; when it is off, the map shows only CIs and hosts discovered using patterns.
Map indicators: Display records from tables that extend the main Task table and that are related to the
CIs. The base system provides map indicators for the following record types:
• Affected CIs
• Open alerts
• Current, planned, or recent change requests
• Open incidents
• Current, planned, or recent outages
• Open problems
When the option is on, a corresponding tab appears underneath the map for each record type that is set
to display. For example, if alerts are set to display, then an Alerts tab appears. Click any tab to display
the respective records. When the option is off, the business service map shows CIs without related
indicators.
Spanning tree view: Simplify the map by organizing CIs into a tree structure and concealing some
connection lines. This option is especially relevant for very large maps. When the option is on, the map
displays a business service as a tree; when it is off, the map reflects the actual structure of a business
service.
Check CI dependencies
You can see if a particular configuration item (CI) is part of other business services and check if it depends
on other CIs.
Display a business service map by navigating to Service Mapping > Services > Business Services and
selecting View map for the relevant business service.
Role required: admin, sm_admin or sm_user
While Service Mapping shows the position of a CI in a particular business service, your organization may
also use the same CI in other business services. If you are planning to perform changes to this CI, make
sure that the changes do not affect other business services.
1. In the map, right-click the relevant CI and select Open dependency views.
The Dependency Views page opens in the new tab showing the selected CI in the center.
2. Click Details.
In deployments where domain separation is enabled and domains are configured to form a hierarchy, MID
Servers must be placed in the lowest domain level, a "leaf domain."
Once MID Servers are installed, configure them to work with Service Mapping for the best discovery
results.
• IP range - limits operation of this MID Server to this IP range. Service Mapping does not choose this
MID Server for a discovery request whose endpoint is outside this IP range.
Important: These selection criteria are mandatory for the new MID Server algorithm in fresh
installs. Configure them in every MID Server you are planning to use with Service Mapping.
Note: In upgraded deployments, Service Mapping uses the legacy algorithm for selecting MID
Servers. For more information, see MID Server configuration for Service Mapping in upgraded
deployments on page 705.
In addition to selection criteria, you can configure one of the MID Servers as the default server that Service
Mapping uses.
In fresh installs of the ServiceNow deployment, Service Mapping selects a MID Server using a new
improved algorithm:
• Service Mapping chooses the MID Server whose selection criteria best match the parameters of the
discovery request.
• If there are no MID Servers with matching selection criteria, Service Mapping chooses the default MID
Server.
• If there are no MID Servers with matching selection criteria or default MID Server, Service Mapping
cannot start the discovery process.
While by default Service Mapping uses a new algorithm in fresh installs, it can support both new and
legacy algorithms for selecting a MID Server. For more information, see Choose MID Server selection
algorithm on page 706.
In addition to the IP range, you can configure one of the MID Servers as the default server that Service
Mapping uses. For operational information, see Configure a default MID Server for Service Mapping for
upgraded deployments on page 706.
In upgraded deployments, Service Mapping selects a MID Server using a legacy algorithm:
• Service Mapping chooses the MID Server whose application criteria is set to ServiceMapping or to ALL.
• By default, Service Mapping selects the MID Server whose IP range matches the IP in the discovery
request.
• If Service Mapping cannot find a MID Server with the matching IP range and matching application, it
assigns the discovery request to the default MID Server.
• If there is no default MID Server and none of the MID Servers have IP ranges configured for them,
Service Mapping uses a random MID Server.
• If there is no default MID Server and no IP range matches the IP in the discovery request, Service
Mapping cannot start the discovery process.
The new algorithm for selecting MID Servers uses an additional selection criterion: capability. It allows
Service Mapping to take the network protocol into consideration. For information about the updated
selection criteria and the new MID Server selection algorithm, see MID Server configuration for Service
Mapping in fresh installs on page 704.
While by default Service Mapping uses the legacy algorithm in upgraded deployments, it can support
both the new and legacy algorithms for selecting a MID Server. If necessary, you can enable the new MID
Server selection algorithm for your upgraded deployment. For more information, see Choose MID Server
selection algorithm on page 706.
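For illustration, the legacy selection order described above might be sketched as follows. The MID Server records are hypothetical examples, and the real selection is internal to the platform:

# Illustrative sketch of the legacy MID Server selection order.
import ipaddress
import random

mid_servers = [
    {"name": "mid-dc1", "application": "ServiceMapping",
     "ip_range": "10.0.0.0/16", "default": False},
    {"name": "mid-default", "application": "ALL",
     "ip_range": None, "default": True},
]

def select_mid_server_legacy(request_ip):
    # Only MID Servers whose application is ServiceMapping or ALL qualify.
    candidates = [m for m in mid_servers
                  if m["application"] in ("ServiceMapping", "ALL")]
    # Prefer a MID Server whose IP range covers the request endpoint.
    for m in candidates:
        if m["ip_range"] and (ipaddress.ip_address(request_ip)
                              in ipaddress.ip_network(m["ip_range"])):
            return m["name"]
    # Otherwise fall back to the default MID Server, if one is configured.
    for m in candidates:
        if m["default"]:
            return m["name"]
    # With no default and no IP ranges configured at all, a random one is used.
    if candidates and not any(m["ip_range"] for m in candidates):
        return random.choice(candidates)["name"]
    return None  # no match and no default: discovery cannot start

print(select_mid_server_legacy("10.0.5.7"))     # -> mid-dc1
print(select_mid_server_legacy("192.168.1.9"))  # -> mid-default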
Configure a default MID Server for Service Mapping for upgraded deployments
Service Mapping uses the default MID Server when it cannot find a MID Server with the matching IP range.
Configuring a default MID Server improves the discovery process.
Role required: sm_admin
Perform this procedure only for upgraded deployments. For fresh installs, perform Set a default MID Server
for an application.
Ensure you know the name of the MID Server you want to configure as the default MID Server for Service
Mapping.
In upgraded deployments, Service Mapping selects a MID Server using a legacy algorithm:
• Service Mapping chooses the MID Server whose application criteria is set to ServiceMapping or to ALL.
• By default, Service Mapping selects the MID Server whose IP range matches the IP in the discovery
request.
• If Service Mapping cannot find a MID Server with the matching IP range and matching application, it
assigns the discovery request to the default MID Server.
• If there is no default MID Server and none of the MID Servers have IP ranges configured for them,
Service Mapping uses a random MID Server.
• If there is no default MID Server and no IP range matches the IP in the discovery request, Service
Mapping cannot start the discovery process.
4. Click Update.
By default, Service Mapping uses the new algorithm in all fresh installs.
In upgraded deployments, Service Mapping selects a MID Server using a legacy algorithm:
• Service Mapping chooses the MID Server whose application criteria is set to ServiceMapping or to ALL.
• By default, Service Mapping selects the MID Server whose IP range matches the IP in the discovery
request.
• If Service Mapping cannot find a MID Server with the matching IP range and matching application, it
assigns the discovery request to the default MID Server.
• If there is no default MID Server and none of the MID Servers have IP ranges configured for them,
Service Mapping uses a random MID Server.
• If there is no default MID Server and no IP range matches the IP in the discovery request, Service
Mapping cannot start the discovery process.
By default, Service Mapping uses the legacy algorithm in all upgraded deployments.
1. Navigate to Service Mapping > Administration > Properties.
2. To enable the new algorithm for upgraded deployments:
a) Clear the Enable Legacy MID selection algorithm check box.
b) Click Save.
c) Configure MID Server selection criteria as described in MID Server configuration for Service
Mapping in fresh installs on page 704.
You must upload an rctrlx.exe file for every MID Server in your deployment.
1. Download the rctrlx.exe file onto your computer from the Internet.
For example, https://github.com/leonsodhi/rctrlx/releases offers the latest versions of the rctrlx.exe
file:
2. Place the rctrlx.exe file into the folder for Windows 32-bit servers:
<MID Server installation directory>\agent\bin\sw_wmi\bin\32
For example, C:\SN_MID\SN_MID_Dev\agent\bin\sw_wmi\bin\32.
3. Place the rctrlx.exe file into the folder for Windows 64-bit servers:
<MID Server installation directory>\agent\bin\sw_wmi\bin\64
For example, C:\SN_MID\SN_MID_Dev\agent\bin\sw_wmi\bin\64.
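If you prefer to script the copy instead of placing the file by hand, a minimal sketch could look like the following. The installation and download paths are examples only; adjust them to your environment:

# Illustrative sketch: copy a downloaded rctrlx.exe into both the 32-bit and
# 64-bit target folders of a MID Server installation, per the layout above.
import shutil
from pathlib import Path

MID_HOME = Path(r"C:\SN_MID\SN_MID_Dev")   # example installation directory
SOURCE = Path(r"C:\Downloads\rctrlx.exe")  # file downloaded from GitHub

for arch in ("32", "64"):
    target_dir = MID_HOME / "agent" / "bin" / "sw_wmi" / "bin" / arch
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(SOURCE, target_dir / "rctrlx.exe")
    print("copied rctrlx.exe to", target_dir)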
• libintl3.dll
• pcre3.dll
• regex2.dll
3. In testing setups where the Netflow collector is not located on the same server as the MID Server,
you may need to compress the nfdump output into the gzip format. Then you must manually copy the
raw data in the nfdump output file onto the MID Server.
4. The MID Server processes the raw data in the nfdump output file and places the processed
information onto the ECC queue.
5. A sensor retrieves the processed data from the ECC queue and writes it into the Flow Connection
[sa_flow_connection] table.
6. Every time Service Mapping checks the ECC queue and receives information on a CI discovered
using patterns, it also checks if there is any data on outbound connections related to this CI. This
information is in tables holding data discovered using traffic-based discovery: [cmdb_tcp] and
[sa_flow_connection]. If these two tables contain unique data that patterns did not discover, Service
Mapping enriches the information about the CI connections and adds them to the map.
2. Configure the Netflow collector to save the nfdump file in a certain directory.
For operational information, refer to https://sourceforge.net/projects/nfdump/.
3. Configure the Netflow collector to save data for one day:
a) Open the command-line window on the server hosting the Netflow collector.
b) Create a cron job by using the following command:
crontab -e
c) Enter the following command, using the correct paths:
*/10 * * * * /usr/local/bin/nfexpire -e /data/nfdump -t 1d
4. Create a file with the nfdump data. For example, use the following command:
nfdump -q -m -R /data/nfdump/ -o extended -t 2016/07/06.07:00:00-2016/07/06.07:10:00 'inet and proto tcp' >> /tmp/my_file
5. If the file is very large, you can compress it using the gzip format. Use the following command:
gzip /tmp/my_file
6. Copy the nfdump data file to the MID Server.
7. Configure Service Mapping to receive data collected by the Netflow collector:
a) Navigate to Service Mapping > Administration > Flow Connectors.
b) Click New.
c) Click nfdump file.
d) On the nfdump file page, configure parameters as follows:
e) Click Submit.
If you are satisfied with the results of the test, configure Netflow-based data collection as described in
Configure data collection using Netflow on page 716.
2. Configure the Netflow collector to save the nfdump file in a certain directory.
a) Open the /etc/init.d/nfdump file.
b) Modify the parameter responsible for saving this file in the required location.
For example, on an Ubuntu server, specify the location using the DAEMON_ARGS parameter:
DATA_BASE_DIR="/var/cache/nfdump"
DAEMON_ARGS="-D -l $DATA_BASE_DIR -P $PIDFILE"
5. Verify that the Netflow collector is configured correctly and receives the correct data from the network
resources.
a) Run the following command:
b) In the command output, verify that marked fields contain real data:
e) Click Submit.
1. Amazon EC2 instances collect their individual logs into log streams and forward them to the central
flow log group.
2. The MID Server collects the data from the flow log and processes it.
3. The MID Server places the processed information onto the ECC queue.
4. A sensor retrieves the processed data from the ECC queue and writes it into the Flow Connection
[sa_flow_connection] table.
5. Every time Service Mapping checks the ECC queue and receives information on a CI discovered
using patterns, it also checks if there is any data on outbound connections related to this CI. This
information is in tables holding data discovered using traffic-based discovery: [cmdb_tcp] and
[sa_flow_connection]. If these two tables contain unique data that patterns did not discover, Service
Mapping enriches the information about the CI connections and adds them to the map.
e) Click Submit.
5. Verify that Service Mapping collects data using VPC Flow Logs:
a) On the AWS VPC flow logs form, select the newly configured connector and click Run now to
start the data collection flow and populate the Flow Connection [sa_flow_connection] table.
b) Navigate to System Definitions > Tables.
c) Click the Flow Connection [sa_flow_connection] table.
d) Under Related Links, click Show List.
The view that controls the property display in Service Mapping is called sa_map_properties. You can
assign it to new CI types or modify it for CI types which already have it. Use the CI type hierarchy to
configure the sa_map_properties view:
• The parent CI type at the top of the hierarchy is cmdb.
• If you do not define the view for a child CI type, Service Mapping displays properties configured for its
parent with the addition of all properties of this child CI type.
• If you define the view for a child CI type, it includes only the properties that you specifically added.
Service Mapping does not display any of its parent properties automatically.
• If neither a CI type nor its parent is added to the view, Service Mapping uses the view of the closest
ancestor CI type that is added to the view, plus the properties of all CI types in the hierarchy between
this CI type and that ancestor.
For example, if you define the sa_map_properties view for the CI type for Windows Servers, the same
properties are displayed for all Windows Servers.
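A simplified sketch of this view resolution follows. It is illustrative only: the class names and property sets are examples, and for brevity it folds the properties of intermediate CI types into the child's own property list:

# Illustrative sketch of sa_map_properties view resolution along the
# CI type hierarchy. Simplified: intermediate ancestors' properties are
# assumed to be included in own_properties.
ci_type_parent = {
    "cmdb_ci_win_server": "cmdb_ci_server",
    "cmdb_ci_server": "cmdb",
    "cmdb": None,
}

# CI types for which the sa_map_properties view was explicitly defined.
view_properties = {"cmdb": ["name", "operational_status"]}

def resolve_view(ci_type, own_properties):
    """If the view is defined for the type, only its listed properties show.
    Otherwise the nearest ancestor's view applies, plus the type's own
    properties."""
    if ci_type in view_properties:
        return list(view_properties[ci_type])
    current = ci_type_parent.get(ci_type)
    while current is not None:
        if current in view_properties:
            return view_properties[current] + own_properties
        current = ci_type_parent.get(current)
    return own_properties

print(resolve_view("cmdb_ci_win_server", ["os_version", "cpu_count"]))
# -> ['name', 'operational_status', 'os_version', 'cpu_count']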
5. In the CI Detailed Properties page, click the Additional actions icon and select Configure > Form
Layout.
6. In the View name list under For view and section, define which view you want to modify:
7. To add properties to the view, from the Available pane, select the properties that you want to display
and click the Add button.
8. To remove properties from the view, from the Selected pane, select properties that you want to hide
and click the Remove button.
9. Use the Move up and Move down buttons to define the order in which the properties appear.
10. Click Save.
11. Click Update.
The Properties pane displays the properties that you selected.
12. If necessary, repeat the procedure for other CI types.
To apply the change to all MID Servers in your organization: Leave the MID Server field blank.
To apply the change to a specific MID Server: Select the relevant MID Server from the list.
6. Click Submit.
The default configuration includes map indicators for the following record types. By default, the display of
all map indicators is disabled:
• Open incident
• Open alert
• Unplanned current outage
• Planned current outage
• Open problem
• Current, planned, or recent change request
You can create a map indicator to define additional record types, such as trouble sources for business
service CIs. You can also modify an existing map indicator, for example to use a different color scheme
or to alter the priority of a task. Affected CIs are based on the cmdb_ci field in the Task table and its
extensions. Therefore, if you use custom tables to store CIs for incidents, problems and changes, the
calculation of affected CIs is affected.
For more information, see Event Management Map Indicators (Video).
1. Navigate to Dependency Views > Map Indicators.
2. Click New to create a new map indicator, or click the name of an indicator from the Table column to
modify an existing map indicator.
3. Fill in the fields on the form, as appropriate.
4. Click Submit to enter a new map indicator. Click Update to modify an existing map indicator.
Pattern customization
Service Mapping and Discovery use patterns in their discovery process. The base system contains a wide
range of patterns that cover most industry standard network devices and applications. You can customize
these patterns and create new ones.
In Service Mapping and Discovery, devices and applications are referred to as configuration items (CIs).
A pattern is a sequence of operations whose purpose is to establish attributes of a CI and its outbound
connections. Service Mapping and Discovery share a set of preconfigured patterns that cover most of
the commonly used devices and applications. Patterns can be of the infrastructure or application type.
Infrastructure patterns are used only by Discovery for creating lists of devices. Application patterns serve
both Service Mapping and Discovery that use the same application patterns for their purposes. For
example, Discovery runs the horizontal discovery with the Apache Web Server pattern to find and list all
Apache Web Servers in your organization. Service Mapping runs the top-down discovery using the same
pattern to discover a specific Apache Web Server and place it on a business service map.
For discovering devices that act as hosts for applications, Service Mapping relies on Discovery. As part of
the top-down discovery process, Service Mapping triggers Discovery to perform its horizontal discovery
behind the scenes. Service Mapping then uses information on hosts provided by the horizontal discovery to
create its business service maps.
Discovery uses a combination of probes and patterns. For more information, see Horizontal discovery
process flow on page 8.
You can customize the pattern set in the following cases:
• If your organization uses proprietary devices and applications, create patterns for these items to enable
Discovery and Service Mapping to discover them.
• If you modify key attributes of CI types that had corresponding patterns, modify the relevant patterns to
reflect the change.
Patterns are assigned to the CI types that they serve to discover. If necessary, you may assign more
than one CI type per pattern. In that case, you define one main CI type and multiple related CI types. For
example, a pattern for discovering BIG-IP Global Traffic Manager (GTM) F5 has BIG-IP Global Traffic
Manager (GTM) F5 as its main CI type and related CI types for the DNS name, network adapter and other
components.
For top-down discovery performed by Service Mapping, each application pattern serves to discover only
the main CI type.
At the same time, Service Mapping usually uses more than one pattern to discover the same CI type, since
a CI type can use different protocols, operating systems, entry points, and so on.
Unlike top-down discovery, the process of horizontal discovery uses each pattern to discover a main CI
type with all related CI types.
Every time you modify and save a pattern, you create a version of this pattern. By default, the latest
version is used for discovery, but you can choose any other version to use.
A CI type (or class) contains several important definitions that apply to all CIs belonging to it:
• CI attributes are added as fields to the CMDB tables.
• Identifiers help Service Mapping and Discovery differentiate between new and existing CIs. For
example, if there is an Apache Web Server CI type defined in the CMDB, and Service Mapping and
Discovery both discover an Apache Web Server CI, the system processes it using identifiers. It then
recognizes the discovered Apache Web Server CI as an updated version that already exists in the
system, not a new Apache Web Server CI. For information about the attributes the system uses to
identify CIs for different classes, see the identifier rules in Discovery identifiers on page 114.
• There are reconciliation rules that help the ServiceNow platform to consolidate CI properties received
from different applications correctly. These rules are necessary for organizations where more than one
application participates in the discovery process. Reconciliation rules define how properties of the same
CI discovered by different discovery sources are merged. For example, Service Mapping discovers
the version and home directory attributes of an Apache Web Server CI, while Discovery discovers
the version and patch level attributes for the same Apache Web Server CI. The ServiceNow platform
applies the reconciliation rule and as a result Service Mapping does not overwrite the attributes found
by Discovery.
• Preconfigured CI types that come with the ServiceNow platform form a hierarchy where some CI
types serve as parents to others who automatically receive their parent's attributes in addition to
parameters you configure specifically for them. CI type hierarchy is used widely for configuring CI
behavior, relationships, and display. Child CI types derive properties from their parents. In this example,
the Apache Web Server CI is a child of the Web Server CI and derives many attributes from its parent,
such as name, version, and model ID.
In addition to these CI type definitions, the horizontal discovery process uses a CI classification to define
to which CI type a CI belongs. Create a device CI classification if you create a CI type for devices using
SNMP and a process CI classification for an application CI type.
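To make the role of identifiers (described in the list above) concrete, here is an illustrative sketch of the insert-versus-update decision. The identifier attributes and records are hypothetical examples, not actual CMDB identification rules:

# Illustrative sketch: identifier attributes decide whether a discovered CI
# updates an existing record or is inserted as a new one.
existing_cis = [
    {"sys_id": "a1", "class": "cmdb_ci_apache_web_server",
     "host": "web01", "install_path": "/opt/apache", "version": "2.4.41"},
]

# Hypothetical identifier rule: an Apache web server is identified by its
# host plus installation path.
IDENTIFIERS = ("host", "install_path")

def process_discovered(ci):
    key = tuple(ci[a] for a in IDENTIFIERS)
    for existing in existing_cis:
        if tuple(existing[a] for a in IDENTIFIERS) == key:
            existing.update(ci)   # recognized: update the existing record
            return "updated", existing["sys_id"]
    existing_cis.append(ci)       # no match: insert as a brand new CI
    return "inserted", None

print(process_discovered({"class": "cmdb_ci_apache_web_server",
                          "host": "web01", "install_path": "/opt/apache",
                          "version": "2.4.46"}))
# -> ('updated', 'a1'): the same server was rediscovered, not duplicated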
1. Navigate to Configuration > CI Class Manager.
2. To use an existing CI type as a parent for the new CI type, in the Class Hierarchy pane, navigate to
the required CI type, right-click it and select Extend.
3. Create a table to store the CI type attributes:
4. Configure how the instance determines whether a discovered CI is an updated version of a CI that
already exists in the instance or a brand new CI. See Create or edit a CI identification rule.
Warning: If there is no CI identification rule for a CI type, Service Mapping discovers CIs
belonging to this type, but cannot interpret the results of the discovery process. In this case,
the ServiceNow platform rejects the discovery results for these CIs and their information is not
updated.
5. (Optional) Configure the instance to consolidate CI properties received from different data sources
correctly. See Create or edit a CI reconciliation rule.
Configure the following Service Mapping-related parameters correctly:
Field Description
Note: If you do not create a CI reconciliation rule, data discovered by patterns is used to
update CI properties.
Note: There is no need to create CI classifications for hosts because these classifications are
included in the base system.
7. For CI types that represent inclusions, define the hierarchy for the new CI type. Clear the Reverse
Relationship Direction check box while performing this configuration.
See Create or edit containment service rule metadata.
8. If necessary, customize icons that represent CIs in maps.
See Create or modify map icons.
If your ServiceNow instance uses domain separation and you have access to the global domain, select the
domain to which the business service belongs from the domain picker. It must be a domain without any
child domains.
Role required: sm_admin or admin
Service Mapping starts the discovery and mapping process for every business service from the entry point
you define for it. In addition to this, Service Mapping patterns use entry points to discover CI outbound
connections.
Service Mapping includes a wide range of preconfigured entry point types that cover most commonly
used applications. If your organization uses a less known or proprietary application that does not have a
corresponding entry point type in Service Mapping, you must create it.
Entry points are modeled in the ServiceNow CMDB as CIs of endpoint type.
Like any other CI type, an entry point contains several important definitions that apply to all CIs belonging
to it:
• CI attributes are added as fields to the CMDB tables.
• Identifiers help Service Mapping and Discovery differentiate between new and existing CIs. For
example, if there is an Active Directory Forest endpoint CI type defined in the CMDB, and Service
Mapping discovers an Active Directory Forest CI, it processes it using identifiers and recognizes it as an
updated version of the Active Directory Forest CI that exists in the system, not a new Active Directory
Forest CI. Unlike with regular CI types, identifiers for new endpoint CI types are created automatically.
• CI type hierarchy. Preconfigured CI types that come with the ServiceNow platform form a hierarchy
where some CI types serve as parents to others, which automatically receive their parent's attributes
in addition to parameters you configure specifically for them. The CI type hierarchy is used widely for
configuring CI behavior, relationships, and display. Child CI types derive properties from their parents.
Create standard entry points as child CIs for the endpoint CI type, which creates an extension for the
cmdb_ci_endpoint table. For entry points of inclusion type, create child CIs for the inclusion endpoint CI
type, extending the cmdb_ci_endpoint_inclusion table. In an inclusion, a server hosts applications that
are treated as independent objects.
Field Description
5. Add entry point properties on the Columns tab at the bottom of the page.
By default, the new entry point derives properties from its parent CI, but you can modify the properties
as necessary.
6. Click Submit.
Create a pattern
Create a pattern and define its basic attributes.
Role required: admin, discovery_admin, or sm_admin
Make sure that the application for which you want to create a pattern has a corresponding configuration
item (CI) type and a CI classification. If the CI type you require is not in the list, create it as described in
Create a CI type for Service Mapping and Discovery on page 732.
If your ServiceNow instance uses domain separation and you have access to the global domain, select the
domain to which the business service belongs from the domain picker. It must be a domain without any
child domains.
Patterns can be of infrastructure and application type. Infrastructure patterns are used only by Discovery
for creating lists of devices. Application patterns serve both Service Mapping and Discovery, which use
the same application patterns for their respective purposes.
Field Description
4. Click Submit.
5. Click the pattern you created in the list.
6. Define advanced attributes:
• For an infrastructure pattern, click Edit for Discovery.
• For an application pattern used for horizontal discovery by Discovery, click Edit for Discovery.
• For an application pattern used for top-down discovery by Service Mapping, click Edit for
Mapping.
Field Description
Run Order: [Application patterns only] For an application pattern used by Service Mapping, select the
order in which this pattern always runs:
• Before
• After
7. Define a set of identification steps for every incoming connection of a configuration item (CI).
a) Under Identification Section, click the plus icon to create a new identification set of steps.
b) Configure the following parameters:
Name: A unique name for the identification section.
Entry Point Types: [Application patterns only] Select all relevant entry point types. Every CI has
incoming connections that are referred to in Pattern Designer as entry points. You base your CI
identification process on the CI entry points, creating steps and defining step operations and variables
for every entry point separately. For application patterns used by Discovery, enter either TCP or All.
Find Process Strategy: [Application patterns only] Select the strategy for finding the process that
populates the Process variable in the Temporary Variables table.
• LISTENING_PORT: The entry point is the listening port.
• TARGET_PORT_AND_IP: The entry point port communicates with another server.
• NONE: The process variable is not populated.
Order: For an application pattern used by Service Mapping, select a number that determines the order
in which Service Mapping uses identification sections. The section with the lowest order number is
used first.
10. When creating an application pattern, make sure you define both modes: for Discovery and for Service
Mapping.
• For application type patterns, continue with Define the connection section on page 787.
1. On the pattern form, double-click the relevant entry in the Identification Sections or Connectivity
Sections pane. The Connectivity Sections pane is for Service Mapping only.
If no discovery steps have been identified for this pattern, an _Untitled_Step_ node appears in the
Operational Step tree in the left pane of the window.
2. In the tree, right-click a step in the left pane and select Add Step Before or Add Step After.
3. Select an operation from the list and then fill in the fields that appear for the operation.
Change User: Use operating system credentials instead of the default administrative credentials.
Field Description
5. To add comments to any step, click the comment icon and add the text in the comment field.
6. To delete a step from the section, select the step and click the trash can icon.
7. After you define all steps, click Save.
8. On the pattern record, click Activate to make that pattern available for use.
Click Debug to access additional actions, such as browsing to and opening information source files
rather than looking them up separately.
Useful shortcuts in Pattern Designer
There are several shortcuts that make entering values in Pattern Designer easier.
• To enter a constant value (a string), enclose the value in quote marks ("). For example, enter the path
with quote marks (") at the beginning and at the end of the string.
• To enter a value that can change (a variable), start the variable name with the dollar sign ($).
• To enter a value from a variable that contains several variables, use the format
$<container_variable>.<variable>. For example, enter the dollar sign ($), followed by process for the
container variable name, a period (.), and then pid for the name of the string variable.
• To enter a value from a specific field in a table, use the format
$tabular_variable[row number].column_name. For example, to use the value from the second row in
the instanceID column of the IfTable variable, enter $IfTable[2].InstanceID.
• To enter values from a specific column in a table sequentially, starting from the current row, use the
format $tabular_variable[].column_name. For example, to use the value from the current row in the
instanceID column of the IfTable variable, enter $IfTable[].InstanceID.
• To enter values from a specific column in a table sequentially, starting from the first row, use the
format $tabular_variable[*].column_name. For example, to use the value from the first row in the
instanceID column of the IfTable variable, enter $IfTable[*].InstanceID.
• To copy a value or a variable into a field, select it in the CI Attributes pane and drag it into the target
field. For example, drag the company variable from the CI Attributes pane into the Values field.
• To enter a variable using auto-complete, type the dollar character ($) and the first letters of the
variable name, then select the relevant value from the list of all currently available values that match
the characters you entered. For example, typing $P in a field displays a list of possible values
beginning with "P".
• To specify complex (concatenated) values in fields, enter a value, add a plus sign (+), and then enter
another value. For example, to specify a path, connect the install_directory variable and an extract of
the actual path with a plus sign (+).
Note: You can also drag values and variables to create complex values.
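For example, using the variable names from the shortcuts above (a process container variable with a pid member, an IfTable tabular variable with an InstanceID column, and an install_directory variable), the following are all valid values in Pattern Designer fields:
"httpd" (a constant string)
$process.pid (a member of a container variable)
$IfTable[2].InstanceID (the value from the second row of a tabular variable)
$IfTable[*].InstanceID (values from all rows, starting from the first)
$install_directory+"/conf/httpd.conf" (a concatenation of a variable and a constant string)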
Service Mapping and Discovery share patterns, but execute them differently. Discovery runs infrastructure
and application patterns for horizontal discovery, while Service Mapping runs application patterns for
the top-down discovery. When you activate debug mode, you choose how pattern steps are run: as if by
Discovery or as if by Service Mapping.
Service Mapping and Discovery share a set of preconfigured patterns that cover most of the commonly
used devices and applications. Patterns can be of infrastructure and application type. Infrastructure
patterns are used only by Discovery for creating lists of devices. Application patterns serve both Service
Mapping and Discovery, which use the same application patterns for their respective purposes.
When you activate the debug mode, the following additional actions become available:
Action Description
If you are not in debug mode, actions are not performed and values are not displayed in the tables.
1. In the Pattern Designer, click Debug > Debug for Discovery or Debug for Service Mapping
depending on how you need the pattern executed.
The Debug Identification Section window is displayed.
2. Fill in the required details for the entry point type:
Field Description
The debug mode is activated and a green check mark appears on the Debug button.
4. To deactivate the debug mode, click the Debug button in the Pattern Designer window.
3. In the first condition field, enter the required value. For example, a variable name.
If you select Is Empty, the second field is rendered irrelevant and disappears.
5. To add more conditions, click the plus icon next to the Set Precondition check box icon and define
the criteria.
6. If you define multiple conditions, set the logical relationship between them by clicking the AND or the
OR icon.
7. Define if the criteria must be satisfied or not for the step operations to run: from the If precondition is
list, select True or False.
Pattern variables
You use variables in discovery patterns to refer to parameters or attributes that the pattern needs to
discover.
There are several kinds of variables used in discovery: global variables, CI attribute variables, and
temporary variables.
Pattern Designer displays different kinds of variables in different areas of its interface.
Always prefix variables with the dollar sign ($). The dollar sign indicates a variable but is not actually
part of the variable name. For example, if you specify $Abc as the variable name, the actual name of the
variable is Abc.
Change credentials for discovery
You can define a Change user operation to make Service Mapping or Discovery use SSH or Windows
credentials instead of the default administrative level credentials. This operation is relevant only for
configuration items (CIs) hosted on Windows or Unix operating systems.
Ensure that the operating system (OS) credentials you want to use instead of the default credentials are
correctly configured. Configure the tag attribute on the OS CI type whose credentials you want to use.
For information on how to create credentials, see SSH credentials and Windows credentials.
When you define a Change user operation, the appropriate credentials for the same CI type, or a different
CI type on the same or different host are used for the discovery process.
1. Select Change user from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
Use different host: Select the check box to use a specific host, and then fill in the host name.
Use Different CI Type: Select the check box and select the CI type from the list.
Hierarchy: Applications > Business Integration Software
CI Type: IBM WebSphere MQ Queue
Pattern: WMQ Queue Unix Pattern
Section: Identification for MQ Queue entry point types
Step number and Name: 5. Change user credentials
Select Entry Point: Select the entry point type from the list.
Target CI type: Select the target CI type from the list.
Is Artificial: Select the check box if the connection should be hidden (that is, not shown in the user
interface) but is used for continuing the discovery flow.
Is Traffic Based: Select the check box to use the traffic-based discovery method for this connection.
Hierarchy: Applications > Business Integration Software
CI Type: IBM WebSphere MQ Queue
Pattern: WMQ Queue Unix Pattern
Section: Alias queues connectivity
Step number and Name: 6. Create outgoing connection to alias queues
Merge tables
Use the Merge table operation to merge content from two source tables into a target table.
You can specify the following types of criteria:
• Field equality: If there are matching values in the specified fields, the source tables are merged.
• Condition: If both source tables satisfy the criteria, they are merged.
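As an illustration (the table and column names here are hypothetical), suppose one source table holds rows with name and port columns, and the other holds rows with name and version columns. Merging on field equality of the name column produces a target table in which each row whose name value appears in both sources carries the name, port, and version values together.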
1. Select Merge table from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
Hierarchy: Applications > Application Servers
CI Type: WebSphere EAR
Pattern: J2EE EAR On Linux
Section: DB2 JDBC connectivity
Step number and Name: 21. merge the scanned_result with outgoing_conns
Filter a table
You can use the Filter table operation to search a source table for a specified value. If found, values are
logged in a specified target table.
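For example (hypothetical values), filtering a table of processes for rows whose command line contains httpd writes only the matching rows to the specified target table, which later steps can then work with.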
1. Select Filter table from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
Field Description
Note: If the specified target table is already populated, the new fields and values from the
source table overwrite the existing fields and values of the target table.
Hierarchy: Applications > Application Servers
CI Type: WebSphere EAR
Pattern: J2EE EAR On Linux
Section: DB2 JDBC connectivity
Step number and Name: 8. Filter DB2 JDBCProviders
Get a process
Use the Get process operation to search for a specific process to store in a tabular variable.
Make sure that you are in debug mode as described in Activate pattern debug mode on page 748.
You can manually specify the filtering criteria, or you can select a process from the list of all processes
on the system. The values of the selected process are used to populate the filtering fields. Modify these
criteria as needed (for example, to delete irrelevant criteria).
Processes that satisfy the specified filtering criteria are placed in a tabular variable whose name you
specify. This tabular variable appears in the Temporary Variables table.
1. Select Get process from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
Hierarchy: Applications > Business Integration Software
CI Type: Web Servers - IIS
Pattern: IIS
Section: Identification for HTTP(S) entry point types for IIS6 second logic
Step number and Name: 40. Get IIS process
Using the browse function, if in Debug mode:
1. Click Browse and select the registry key. The selected key path is placed in the Registry key path
field. A form opens and displays a list of keys in a tree.
2. Select the key to show the attributes.
3. Select the attributes and specify a name in the variable column of the table that stores the value of
the key in the Results table. You can use variables. You can also enter a value from the specific field
in a tabular variable as described in Useful shortcuts in Pattern Designer on page 743.
Manually, if not in Debug mode:
1. In the Registry key path field, specify the registry key path.
2. Ensure that the By using all keys from the registry directory option is selected.
3. Click Test and verify that the table is populated with variable names and values.
3. If discovery must terminate when no results are found for the operation, select the Terminate check
box.
The following configuration is an example of the Get registry key operation.
Hierarchy: Applications > Portals
CI Type: SharePoint
Pattern: Microsoft SharePoint
Section: Identification for SharePoint
Step number and Name: 1. Find SharePoint version from registry
Field Description
3. In the Variable Table, click Add and define the table name and column name to hold the query result.
For multiple results, click Add for each additional column that is needed.
Hierarchy: Applications > Mail Services
CI Type: ExchangeBackEndServer
Pattern: ExchangeBackEndServer On Windows Pattern
Section: Storage connectivity
Step number and Name: 2. Query ldap - msExchStorageGroup.msExchESEParamSystemPath
Match a condition
Use the Match operation to specify conditions that must be met in order for the discovery process to
continue. If these conditions are not met, the discovery process terminates.
1. Select Match from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
2. You can use an actual string or a variable. You can also use values from temporary tabular variables:
from a specific field or a specific column in a table sequentially, starting from the first row. For more
information, see Useful shortcuts in Pattern Designer on page 743.
3. To add more conditions, click the plus icon.
Hierarchy: Applications > Business Integration Software
CI Type: IBM WMB HTTP Listener
Pattern: WMB HTTP Listener On Unix Pattern
Section: Identification for HTTP
Step number and Name: 1. Check process name to match http lstnr
During discovery of an IBM WMB HTTP Listener, use the Match operation to check the
process name.
Important: Avoid entering a specific path to a location or file because it can be different on
different operating systems. It is recommended to use variables for paths.
You can use variables. You can also enter a value from the specific field in a tabular variable as
described in Useful shortcuts in Pattern Designer on page 743.
3. Fill in the fields, as appropriate.
Field Description
4. To change the execution mode or credentials, click Advanced and fill in the fields, as appropriate.
Field Description
Execution mode: Select the mode from the list. The mode determines how communication with the
remote system is invoked.
CI Type: Select the CI type from the list.
To insert variables for the credentials ($$username$$, $$password$$):
1. Position the cursor in the appropriate location in the Command field.
2. Click the plus icon.
Hierarchy: Business Integration Software
CI Type: IBM WebSphere MQ WebSphere EAR
Pattern: WMQ On Unix Pattern
Section: Identification for WMB_DEB entry point type
Step number and Name: 5. Handle default q manager case
Parse a file
Use the Parse file operation to extract information from a file to be stored in a variable table.
1. Select Parse file from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
File path: Specify the file path and click Retrieve File.
Hierarchy: Applications > Application Servers
CI Type: BizTalk Orchestration
Pattern: BizTalk Orchestration pattern
Section: Identification for BizTalk Orchestration
Step number and Name: 7. get all send ports
Parse a URL
Use the Parse URL operation to break down a URL to the component level.
1. On the Identification or Connectivity Sections form, select Parse URL from the Operation list.
2. Fill in the fields, as appropriate.
Field Description
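For example, parsing the hypothetical URL http://appserver.example.com:8080/orders/list would yield components such as the protocol (http), the host (appserver.example.com), the port (8080), and the path (/orders/list).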
Hierarchy: Applications > Application Servers
CI Type: WebSphere EAR
Pattern: J2EE EAR on Linux
Section: Web Services Connectivity
Step number and Name: 3. parse the wsdlfile into table
Parse a variable
Use the Parse variable operation to extract information from a variable to be stored in a variable table.
1. Select Parse variable from the Operation list in one of the following locations:
You can use regular variables. You can also use values from a temporary tabular variable: from
a specific field or a specific column in a table sequentially, starting from the first row. For more
information, see Useful shortcuts in Pattern Designer on page 743.
2. Fill in the fields, as appropriate.
Field Description
Hierarchy: Applications > Application Servers
CI Type: WebSphere EAR
Pattern: J2EE EAR on Linux
Section: EAR to MQ Connectivity
Step number and Name: 6. Calc base_was_dir location
Parsing strategies
In discovery patterns, you can use parsing strategies to extract text from any text file and to specify a
variable to store the extracted content.
File parsing strategy (except vertical): Retrieve text from various file types, including:
• .ora file (used by various Oracle products)
• .properties file (common for Java)
• .xml file
• .ini file
Keyword, command, and positional type parsing strategy:
• Retrieve text directly following a specific keyword
• Retrieve the value of a command-line parameter using Java-style parameters
• Retrieve the value of a command-line parameter using standard Unix/Windows parameters
• Retrieve text specified by its position from the end of the line
• Retrieve text specified by its position from the beginning of the line
Attention: Do not use this parsing strategy for non-text files such as binary files.
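For example, with the Command line Unix style strategy and the hypothetical command line httpd -d /usr/local/apache2 -f conf/httpd.conf, specifying the -d parameter extracts /usr/local/apache2 into the variable you assign.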
You can define multiple extracts and variables. When identifying text for extraction into variables, what you
are really doing is identifying the text location within a context.
You can use one of the following methods:
• In Debug mode, you can select the relevant string from the file contents in the text box. For each string
you select, its position and delimiters relative to its context are stored. It enables the same definitions
to apply to other files with the same structure even though the text varies. However, it selects the entire
text within a context.
For example, if you try to select only 456 in the text box of an XML file with the following line, the entire
string between the keywords is selected.
<ciTypeID>123-456-7890000000</ciTypeID>
• On the Advanced Parsing Options form (outside of Debug mode), you can specify a delimiter and
position to identify the text string. You can also use this form to make a more refined selection than from
within the text box.
For example, you could specify a delimiter (-) and the number of positions to extract after the delimiter
(3) to extract the string (456).
1. Select one of the parsing operations from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
• Ini file
• Oracle file
• Properties file
• XML file
3. Define the string to be parsed from within Debug mode, or on the Advanced Parsing Options form
(outside of Debug mode).
In Debug mode:
1. Select the string in the text box. All matching strings in the same context are automatically
selected.
2. Assign the string to a variable on the Define Variable Name form. Provide a unique and
meaningful name and click OK.
3. To identify additional strings and variables, click the plus icon.
Outside of Debug mode (Advanced Parsing Options form):
1. Click Advanced and specify the root path. The root path is the section (hierarchical branch in the
file structure) where parsing takes place.
2. Click the plus icon for each string and variable to be added and fill in the fields, as appropriate.
• Name: Specify the column name.
• XPath query: Specify the XPath query for the string. For example, appcmd/APP/@APP.NAME.
• Delimiter: Specify the delimiter for the string.
• Position: Specify the position of the string.
4. To end the discovery process if no results are found, select the If not found check box.
5. Click Close Advanced.
Field Description
4. To end the discovery process if no results are found, select the If not found check box.
1. Select one of the parsing operations from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
2. Select one of the keyword, command, or positional types from the Parsing strategy list.
3. Define the string to be parsed from within Debug mode, or outside of Debug mode.
Outside of Debug mode:
1. Specify the value and delimiter for the Keyword, Position from Start, or Position from End.
2. To add more strings and variables, click the plus icon.
4. To end the discovery process if no results are found, select the If not found check box.
Use default: Select this check box to use the default line separator.
3. Define the string to be parsed from within Debug mode, or outside of Debug mode.
Option Description
4. To end the discovery process if no results are found, select the If not found check box.
Field Description
Set a variable
Use the Set variable operation to apply a value to a variable.
1. On the Identification or Connectivity Sections form, select Set variable from the Operation list.
2. Fill in the fields, as appropriate.
Field Description
Hierarchy: Applications > Application Servers
CI Type: WebSphere EAR
Pattern: J2EE EAR on Linux
Section: Identification for J2EE EAR entry point
Step number and Name: 1. sets the ear name
Transfer a file
You can transfer a file from the MID Server to a computer in your organization.
Role required: admin or sm_admin
1. Upload the file to the MID Server:
a) Navigate to Service Mapping > Administration > Uploaded Files.
b) Enter a meaningful name in the Name field.
c) Click the Operating Systems tab.
d) Click OS Types and select the relevant type from the list.
e) Click OS Architectures and select the relevant option from the list.
Note: You can select both the 32-bit and 64-bit options for the same file if necessary.
3. Click Refresh.
The name of the uploaded file appears in the File name field.
4. Specify the variable that refers to the MID Server path on the remote computer in the Full Path Target
field.
Hierarchy: Applications > Web servers
CI Type: Website
Pattern: IIS Website
Section: XML Web Services Connectivity
Step number and Name: 9. Upload strings utility
2. To browse for and select the variables and SNMP OIDs for the query, follow these steps.
a) Activate the debug mode.
b) Click Browse and select the MIB in the list or perform a search based on MIB values.
c) Select either a single scalar value or multiple columns in the MIB tree. Multiple columns must be
in the same table.
d) Click Get Data to display the data in the SNMP Browser form and then click OK.
3. To manually define variables and SNMP OIDs for the query, choose either Scalar or Table variable
type and fill in the fields, as appropriate.
Field Description
Hierarchy: Network > Load Balancer
CI Type: Radware Load Balancer
Pattern: Radware Appdirector SNMP
Section: create outgoing connections
Step number and Name: 1. get farm table
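For example, querying the standard SNMP interfaces table (ifTable, OID 1.3.6.1.2.1.2.2) with the Table variable type populates a tabular variable such as $IfTable, whose values later steps can reference with expressions like $IfTable[2].InstanceID, as described in Useful shortcuts in Pattern Designer on page 743.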
3. Click the plus icon to add each target field and fill in the fields, as appropriate.
Field Description
Hierarchy: Applications > Web Servers
CI Type: IIS
Pattern: IIS
Section: IIS7 and IIS8 Virtual Folder connectivity
Step number and Name: 6. remove website name from vpath
Unchange user
Use the Unchange user operation to switch back to default administrative credentials if you previously
performed a Change user operation.
Select Unchange user from the Operation list in one of the following locations:
• the Identification Sections or Connectivity Sections for Service Mapping
• the Step window for Discovery
Hierarchy: Applications > Business Integration Software
CI Type: IBM WebSphere MQ Queue
Pattern: WMQ Queue Unix Pattern
Section: Local queue connectivity
Step number and Name: 11. exit change user credentials
Hierarchy: Applications > Application Servers
CI Type: WebSphere EAR
Pattern: J2EE EAR on Linux
Section: Oracle JDBC connectivity
Step number and Name: 5. Union JDBCProviders with JDBCProviders2
3. To populate the WMI method with the methods relevant for the table you selected in Source table,
click Get methods.
This step is relevant only for the debug mode.
4. Fill in the fields, as appropriate:
Field Description
Hierarchy: Applications > Directory Services
CI Type: IIFP
Pattern: IIFP On Windows Pattern
Section: AD Home Forest connectivity stage-wmi
Step number and Name: 2. Invoke WMI Method Run Details ()
4. If you are working in the manual mode, manually configure the following parameters:
Field Description
5. Define parameters and operators for the condition in the Condition clause.
6. Enter the variable for the table in which the retrieved data is saved in the Target table field.
You can enter a value from the specific field in a table as described in Useful shortcuts in Pattern
Designer on page 743.
7. Select the Terminate check box to terminate discovery if no results are found.
Hierarchy: Applications > Directory Services
CI Type: IIFP
Pattern: IIFP On Windows Pattern
Section: AD Home Forest connectivity stage-wmi
Step number and Name: 1. Get all agents details via WMI step
This step uses the WMI Query operation to extract data on agents from the
MicrosoftIdentityIntegrationAgent namespace and save this data in the variable table
named $ManagementAgents.
Note: The Connectivity Section is grayed out if the pattern cannot be edited for Service
Mapping.
Select Entry Point: Select the entry point type from the list.
Target CI type: Select the target CI type from the list.
Is Artificial: Select the check box if the connection should be hidden (that is, not shown in the user
interface) but is used for continuing the discovery flow.
Is Traffic Based: Select the check box to use the traffic-based discovery method for this connection.
Hierarchy: Applications > Business Integration Software
CI Type: IBM WebSphere MQ Queue
Pattern: WMQ Queue Unix Pattern
Section: Alias queues connectivity
Step number and Name: 6. Create outgoing connection to alias queues
Customize a pattern
Customize an existing pattern to reflect any changes or peculiarities in the configuration item (CI)
implementation or setup.
If your ServiceNow instance uses domain separation and you have access to the global domain, select the
domain to which the business service belongs from the domain picker. It must be a domain without any
child domains.
When you customize a pattern, you actually create a copy of the original preconfigured pattern. While
Service Mapping or Discovery uses the customized version, the original version is not deleted. When you
upgrade your ServiceNow platform, it updates the original pattern, not the customized copy.
If at some point you want to abandon the customized pattern and start using the updated original pattern,
you can revert to the original pattern as described in Choose the pattern version on page 790.
1. Navigate to the appropriate module: Pattern Designer > Discovery Patterns.
2. Click the pattern you want to modify.
3. To modify the pattern to perform horizontal discovery using Discovery, click Edit for Discovery.
4. To modify the pattern to perform top-down discovery using Service Mapping, click Edit for Mapping.
5. In the Pattern Designer window, modify the relevant sections of the pattern.
This pattern version becomes the active version that Service Mapping uses and its state changes to
Current.
Finalize a pattern
After you finish defining your pattern, make it ready for use by Service Mapping and Discovery.
Role required: admin, discovery_admin, or sm_admin
There are several ways to finalize a pattern after creating or modifying it:
• Saving – Save all the information you defined for the pattern, creating a draft. You can continue working
on the pattern draft at any time. Service Mapping and Discovery do not use the pattern draft for
discovery and mapping.
• Checking – Run the pattern on the relevant configuration item (CI) to check that the operations
that you defined for it are correct. Check your pattern before activating it. This option is available only
for application patterns used by Service Mapping.
• Activating – Make Service Mapping and Discovery use this pattern for discovery and mapping. A new
version of the pattern is created. The state of the pattern version used previously changes to History.
1. On the pattern definition form, complete your pattern definition by performing one or more of the
following actions:
To accomplish this Do this
4. Click Submit.
5. Find the pattern in the list and click it.
8. Click Save.
9. In the Pattern Designer Pattern window, click the plus icon in the Identification Sections pane.
10. Define the identification section as follows:
Name: Enter Identification for HTTP(S) entry point type(s).
Entry Point Types: Select TCP, HTTP(S) from the list.
Find Process Strategy: Select LISTENING_PORT for the strategy.
11. Click Create New.
The new identification section appears in the Identification Sections pane.
12. Double-click the created identification section.
The Identification Section window opens.
Field Description
14. Check that the process name on the CI is Apache Web Server:
a) Rename the first step to Check process name to match Apache.
b) Select Match from the Operation list.
c) Enter $process.executable in the process name field.
d) Select Contains from the conditional operator list.
e) Enter httpd in the string field.
f) Click the plus icon to add another condition row.
g) Enter $process.executable in the process name field.
h) Select Contains from the conditional operator list.
i) Enter apache in the string field.
j) Click OR for the logical operator.
k) Click Test and verify that you get the following message: No changes were made during this test.
Note: Make sure to enter the value with the quote marks (").
g) Click OK.
a) In the step tree, right-click the last step and select Add Step After.
b) Rename the new step to Get home dir.
c) Select Parse variable from the Operation list.
This operation extracts the value after the -d in the content box.
d) Expand the process variable in the Temporary Variables pane.
e) Drag the commandLine variable from the Temporary Variables pane into the variable field under
operation.
Note: For more information on using the drag-and-drop feature, see Useful shortcuts in
Pattern Designer on page 743.
f) Select Command line Unix style from the Parsing strategy list.
g) In the Variables pane, add the new install_directory variable.
h) Click Test.
17. Obtain the home directory attribute from the HTTP daemon.
If the previous step populated the home directory attribute, skip this step. In this example, it is
necessary to perform it.
a) In the step tree, right-click the last step and select Add Step After.
b) Rename the new step to Condition - check that home dir was set, if not
extract it from httpd -V.
c) Select Parse command output from the Operation list.
This operation extracts the value after the -d in the content box.
d) Click Set Precondition.
Note: For more information on using the drag and drop feature, see Useful shortcuts in
Pattern Designer on page 743.
o) If the new conf_file variable is not added automatically, create it in the Variables pane.
p) Click Test.
The Debug Results window shows the configuration file attribute populated with a value.
q) Click OK.
20. If the configuration file attribute is still not populated, perform this step:
a) In the step tree, right-click the last step and select Add Step After.
b) Rename the new step to default location of conf file.
c) Select Set variable from the Operation list.
d) Click Set Precondition.
e) Enter $conf_file in the condition value field.
f) Select Is Empty from the conditional operator list.
g) Enter $home_dir+"/conf/httpd.conf" in the Value field.
h) Enter $conf_file in the Parameter field.
i) Click Test and verify that the configuration file attribute is populated.
Note: The version number is not moved into the CI Attributes pane because confirmation
of its value in the Content box and the Debug Results dialog box is sufficient.
Notice that at this stage you have successfully identified the Apache Web Server and populated
its various attributes except the version attribute that is left blank on purpose.
Event Management
The ServiceNow® Event Management application helps you identify health issues across the datacenter on
a single management console.
Event Management is available as a separate subscription from the rest of the ServiceNow platform. You
must request activation from ServiceNow personnel.
Service Analytics is automatically activated with Event Management and provides alert aggregation and
root cause analysis (RCA) for discovered business services. To enhance Event Management and Service
Analytics, you can also activate Operational Metrics. Operational Metrics learns from historical metric data,
and builds standard statistical models to project expected metric values along with upper and lower control
bounds.
Event Management can manage external events and configure alerts for discovered business services,
manual services, technical services, and alert groups.
Architecture
Event Management receives and processes events via the MID Server.
As events occur on various systems, the MID Server connector instance sends the events to the
ServiceNow instance. Event Management generates alerts, applies alert rules, and prioritizes alerts for
remediation and root cause analysis. You can use a browser to view this information on dashboards, the
Alert Console, or from a service map.
Process flow
Event Management either pulls events from supported external event sources using a MID Server or
pushes events from external event sources using JavaScript code.
Inbound events are collected in the Event [em_event] table and then processed in batches. For events
that meet the defined criteria in alert rules, alerts are created or updated in the Alert [em_alert] table. If
an alert does not exist for the event, a new alert is created. If the alert exists, the existing alert is updated
appropriately.
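As a minimal sketch of the push option, the following server-side script inserts a record directly into the Event [em_event] table. The field values are illustrative; in production, events usually arrive through a MID Server connector, an email, an SNMP trap, or the web service API.

var event = new GlideRecord('em_event');
event.initialize();
event.source = 'SCOM';                        // monitoring tool that reported the event (illustrative)
event.node = 'web01.example.com';             // host the event relates to; used for CI binding
event.type = 'High CPU';                      // event type (illustrative)
event.resource = 'CPU-1';                     // affected resource (illustrative)
event.metric_name = 'cpu_pct_used';           // metric name (illustrative)
event.severity = 3;                           // event severity
event.additional_info = '{"threshold": 90}';  // extra fields go in additional_info, not custom columns
event.insert();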
As part of the alert life cycle, you can manage alerts in the following ways:
• Acknowledge alerts.
• Create a task such as an incident, problem, or change.
• If automatic remediation tasks apply to the alert, begin automatic remediation to start a workflow.
• Complete all tasks or remediation activities.
• Close alerts for resolved issues.
• Add additional information, such as a knowledge article for future reference.
Event Management uses discovered services from Service Mapping and automated alert groups with root
cause analysis from Service Analytics to expedite alert resolution.
When an event from an external source arrives from the MID Server, script, or web service API (not
pictured), Event Management locates CI information for alert generation and CI remediation. CI information
is stored into the CMDB from sources such as Service Mapping, Discovery, third-party sources, and
manual population. If Service Analytics is enabled, additional correlated alert group and root cause
analysis information is available to resolve the issue.
Supported browsers
For UI16:
• Firefox version 31 ESR and version 38 or later
• Chrome version 43 or later
• Microsoft Internet Explorer (IE) 9 or later
• Safari version 6.1 or later
Specify the date and time you would like this plugin to be enabled: The date and time must be at least
2 business days from the current time.
3. Click Submit.
10. Create an event rule to populate alert attributes that are used for analysis, and are essential for
effective alert aggregation. For details, see Create an event rule for automatic alert aggregation on
page 1018.
11. If you upgraded from the Fuji release, Import a business service as a manual service.
12. Configure any other general tasks that appear in this section as appropriate.
Note: If you have existing business services from the Fuji release, you must import them as
manual services. For more details, see Import a business service as a manual service.
Field Description
View CIs that are present in the manual service: Review the Service Configuration Item Associations
section.
View CMDB relationships between CIs: In the form header, click View map.
Use a CI class to add corresponding CI instances and relationships to the manual service: In the
Service Configuration Item Associations section:
1. Click New.
2. Select a Configuration Item class from the CMDB.
3. Click Submit.
Copy all suggested CI relationships from the Related Item section to the manual service and BSM
map: In the Service Configuration Item Associations section:
1. Click Populate.
2. Select the number of related item layers to add or import.
3. Click OK.
Delete a CI from the manual service: In the Service Configuration Item Associations section:
1. Click the CI name.
2. In the Actions on selected rows dialog box, select Delete.
3. In the confirmation dialog box, click Delete. Event Management removes the information from the
manual service. However, the CMDB information remains intact.
Restore deleted CIs on the Service Configuration Item Associations list: In the Service Configuration
Item Associations section, click Populate.
Delete the manual service: In the form header:
1. Click Delete.
2. In the dialog box, click Delete. Event Management removes only the manual service. However, the
CMDB information remains intact.
Field Description
4. Select a source table and specify the criteria for the technical service query.
Field Description
Note: Alert groups that are created in this procedure do not display in the Alerts console.
1. Navigate to Event Management > Services > Alert Groups.
2. Click New.
3. Fill in the fields, as appropriate.
Field Description
4. Click Update.
Configure a service group to enable users to manage business services, manual services, or alert groups.
Role required: evt_mgmt_admin
1. Navigate to Event Management > Services > Service Group Responsibilities.
2. Click New.
3. In the Business Service Group field, select the name of the service group.
4. In the Role field, select an Event Management role:
• evt_mgmt_admin
• evt_mgmt_integration
• evt_mgmt_operator
• evt_mgmt_user
5. Click Submit.
6. To find users who are assigned to the role, navigate to User Administration > Users > Roles and
search for the role.
Make sure that you have created service groups as described in Organize business services into groups
on page 635.
Role required: admin
Users inherit permissions from roles that are assigned to them. Service Mapping provides these
preconfigured roles:
Typically, enterprises have hundreds of services, which makes it impractical to manage them individually.
How you organize business, manual, or technical services depends on the user and service provisioning
policies of your enterprise. The relationship between business services in groups is purely logical, and the
same business service can be part of more than one group. For more information about service groups,
see Organize business services into groups on page 635.
You can also have a hierarchy of service groups with some groups embedded within others. If users have
access to a parent business service group, they automatically have access to all its child groups.
By assigning a Service Mapping role to a service group, you allow all users with this role to access all
services belonging to this group.
In the base system, all services are assigned to the All service group, which lets all users view and
manage the services. When you assign a role to a service group, the users with this role can access only
IT services in this service group. Users cannot access any IT services that are not part of that service
group.
You can give access to services to existing or new users. You can also add users to groups to assign roles
simultaneously to multiple users.
You can assign some roles directly to business service groups. However, most enterprises choose to
organize their roles as a hierarchy. It helps to manage roles across multiple ServiceNow applications. For
example, the Service Mapping administrator [sm_admin] can be part of a broader administrator role like
administrator [admin]. By assigning roles to user groups, you give permissions and rights of this role to all
users belonging to the same group.
When you delete a business service group, the business services in the group are not deleted. The
connections between the role and the services making up this group (responsibilities) are removed.
1. If using Service Mapping, navigate to Service Mapping > Services > Service Group
Responsibilities.
2. If using Event Management, navigate to Event Management > Services > Service Group
Responsibilities.
3. Click New.
4. In the Business Service Group field, enter the name of the group.
5. In the Role field, enter the name of the role.
For example, evt_mgmt_admin.
6. Click Submit.
Feature Support
Remediation: Supported. While editing alert rules, users can only apply relevant workflows.
1. Activate the Domain Support – Domain Extension Installer plugin if it is not already active.
2. Configure a connector instance to use a MID Server from the same domain as Event Management.
Note: Events are purged after 7 days. For information about backing up events to a custom table,
see Event Management considerations on page 849.
Event Management SLA [em_ci_severity_task]: Event Management SLA tasks for CIs and business
services.
Table Description
Impact Maintenance CIs [em_impact_maint_ci]: CIs that are in maintenance and therefore are excluded
from impact calculation.
Service Analytics Metric Type Registration [sa_metric_registration]: Source registration details for
processing raw data.
Event Management adds the following tables that are shared with Service Analytics.
Table Description
Event Management adds these properties. Be cautious when changing Event Management property
values, as these settings can greatly affect overall system performance.
Note: To open the System Property [sys_properties] table, enter sys_properties.list in the
navigation filter.
Property Description
evt_mgmt.query_based_service_graph_handler.page_count
Page size in a single fetch of CIs while calculating the Impact Tree for a technical service.
• Type: integer
• Default value: 100
• Location: Event Management > Properties
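If you prefer scripts to the Properties page, a minimal sketch of reading and changing this property from a background script (the new value of 200 is only an example):

// Read the current page size; 100 is the documented default
var pageCount = gs.getProperty('evt_mgmt.query_based_service_graph_handler.page_count', '100');
// Raise the page size: larger pages mean fewer fetches but more memory per fetch
gs.setProperty('evt_mgmt.query_based_service_graph_handler.page_count', '200');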
Property Description
evt_mgmt.update_alert_restricted_fields_elapsed_time
Minimum time in seconds to wait before updating an alert for identical events.
• Type: integer
• Default value: 60
• Location: Event Management > Settings > Properties
• Learn More: Configure automatic actions for alerts and incidents on page 952
Property Description
Disable default rule edit (Impact Rule [em_impact_rule]): Reverts changes in the default impact rule.
Event rule grouping calculation (Event rule calculation [em_event_rule_calculation]): Calculates
suggested grouping for events.
Handle deleted event rule (Event Rule [em_match_rule]): Removes deleted event rule.
Handle SLA configuration delete (SLA Configuration [em_sla_configuration]): When SLA configuration
filters are deleted, deletes CIs in the em_ci_severity_task table.
Name and pattern cannot be empty (Event Type [em_event_type]): Verifies that the Name and Pattern
fields have values.
Verify overwrite template table (Alert Rule [em_alert_rule]): Verifies that the overwrite template of the
alert rule is defined in the Alert table.
Verify template is on Incident (Alert Rule [em_alert_rule]): Verifies that the incident template of the alert
rule is defined in the Incident table.
Event Management - Connector execution job: Compares the current time with the time when active
connector instances were last run and sets relevant connectors to execute. Runs every 10 seconds.
Event Management - Delete Work Notes: Trims the content of alert work notes. When the maximum
number of work notes (default is 50) is reached, further work notes are purged from the alert. Modify
the default using the evt_mgmt.max_worknotes_on_alert property. Runs every hour.
Event Management - Impact Calculator Trigger: Triggers the impact calculation. Runs every 19
seconds.
Event Management - Update stuck connectors: Releases connector instances that are stuck. Runs
every 2 minutes.
Event Management - auto close alerts: Closes alerts that are idle longer than 7 days (default
time period). Modify the default using the evt_mgmt.alert_auto_close_interval property. Runs every
10 minutes.
Event Management - close flapping alerts: Closes flapping alerts. Runs every 5 minutes.
Event Management - close threshold alerts: Closes threshold alerts. Runs every 2 minutes.
Event Management - create/resolved incidents by alerts: Job to:
• Create incidents for alerts according to alert action rules.
• Update incidents according to alert state.
Note: You can activate Performance Analytics content packs and in-form analytics on instances
that have not licensed Performance Analytics Premium to evaluate the functionality. However, to
start collecting data you must license Performance Analytics Premium.
Content packs
The Performance Analytics widgets on the dashboard visualize data over time. These visualizations allow
you to analyze your business processes and identify areas of improvement. With content packs, you can
get value from Performance Analytics for your application right away, with minimal setup.
Note: Content packs include some dashboards that are inactive by default. You can activate these
dashboards to make them visible to end users according to your business needs.
To enable the content pack for Event Management, an admin can navigate to Performance Analytics >
Guided Setup. Click Get Started then scroll to the section for Event Management. The guided setup takes
you through the entire setup and configuration process.
You can create SLA definitions only for tables that extend the Task table. The Event Management
application provides a table named Event Management SLA [em_ci_severity_task], which extends the
Task table. Use this table in your SLA definitions to specify the severity level that should trigger and stop
the SLA. During alert impact calculation, changes in the severity level of business services and CIs are
automatically updated in the Event Management SLA table. Scheduled jobs keep the information in this
table up to date.
The Event Management SLA table is populated differently for business services and CIs:
• For business services, the system automatically populates the Event Management SLA table when a
business service is created or when its max severity is changed.
• For CIs, you must first identify which CIs can be made available for SLAs by creating an SLA
configuration record. The system then automatically populates the Event Management SLA table when
the CI max severity is changed.
Note: Duplicate CIs are not added to the Event Management SLA table even if the same CI
matches more than one SLA configuration filter.
Field Description
Create an SLA definition on the CIs that match this SLA configuration.
Topic Details
Planning
• Organize event source configuration of filters, modules, and so on, into multiple parallel efforts, rather
than in serial.
• Validate processed event formats to ensure that the data that is parsed is aligned with desired results.
• Test production events in a non-production environment. Integrate with non-production element
managers and ServiceNow instances. If non-production element managers are not available, send
events from element managers to both production and non-production environments.
Event integration
SNMP traps
• It is preferable to use a monitoring tool to send SNMP traps rather than sending them directly from
devices.
• Upload MIBs prior to defining event rules to avoid having to rewrite the Event Rules.
CloudWatch
• Use dedicated credentials for integrating CloudWatch with ServiceNow.
Email
• Use email only if the source has a low volume and other options, such as running a script or forwarding
an SNMP trap, are not available.
Event rules
• Write Event Rules to apply to the broadest number of events possible. More specific rules can then be
created as necessary. More specific rules should use a lower-order value.
• Avoid writing Event Rules that apply only to a certain subset of events if a more general rule can
achieve the same outcome.
• When Event Rules are applied to events, no changes are made to the original event. All processing
occurs in memory, so use the Processing Notes field and/or use the Check Process of Event UI
action link to troubleshoot.
• If you change a rule/transform that has existing mapping rules, you should review and retest with events
that are either real or simulated.
• Establish a consistent naming convention. A common convention is: <customer acronym>.<Event
Source>.<Description>. For example, ACME.OEM.Normalize
• Use the Order field to control which Event Rule runs if two Event Rules have similar conditions set.
Effective de-duplication and enabling efficient parallel event processing: Populate the Source, Node,
Type, Resource, and Metric Name fields.
CI binding:
• Bind to host by populating the Node field and, optionally, CI identifiers.
• Bind to application and/or device by populating the CI type field and the Additional Info fields.
Alert Correlation, using Service Analytics: Populate the Resource and Metric Name fields.
Note: If the CI is also bound, Alert Correlation is improved.
• Ensure that the From field value exactly matches a string in the JSON in the additional_info field; an
example follows this list. This matching happens when a rule has been configured based on information
in a MIB file. However, if the MIB file is not uploaded, the JSON for the SNMP trap shows varbinds
(variable bindings) with dotted names instead of the translated names from the MIB, and the event field
mapping rule then fails to be applied.
• Establish a consistent naming convention. A common convention is: <Customer acronym>.<Event
Source>.<Mapping target>. For example, ACME.OEM.Severity.
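For example (illustrative values), with the MIB uploaded, additional_info might contain the translated varbind name, such as {"ifDescr.3": "eth0"}, so a mapping rule whose From field is ifDescr.3 matches. Without the MIB, the same varbind appears under its dotted OID, such as {"1.3.6.1.2.1.2.2.1.2.3": "eth0"}, and the rule fails to match.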
De-Duplication
• The message_key field is used for De-Duplication. If reliable message key values are not provided with
the source event, it is important to have a well-defined plan for constructing these identifiers.
• If the message key is not defined, then the message key is <Source + Node + Type + Resource +
Metric Name> .
• The guideline is to have the event source populate the above five fields out-of-the-box (OOB), and
populate the message key. This enables a better distribution of event processing among instance
workers and nodes.
• If the source event does not have values for these fields, make sure to populate them using transform
rules. This does not affect event processing, but is used for de-duplication.
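For illustration only, here is a minimal JavaScript sketch of composing such a message key from the five
fields. The '&' separator follows the default key format noted later in this section; the event object and
the field values are made up:

// Build a de-duplication key from the five identifying event fields.
function buildMessageKey(evt) {
    return [evt.source, evt.node, evt.type,
            evt.resource, evt.metric_name].join('&');
}

var key = buildMessageKey({
    source: 'SCOM',
    node: 'name.of.node.com',
    type: 'Disk space',
    resource: 'D:',
    metric_name: 'Percentage Logical Disk Free Space'
});
// key: 'SCOM&name.of.node.com&Disk space&D:&Percentage Logical Disk Free Space'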
CI Binding
Note: This has to be populated from the source before the event is inserted.
• The primary binding strategy is to use the Node field. If the Node field is not pre-populated in the event,
it can be populated using Event Rules.
Alert Rules
• A scheduled job applies Alert Rules to new Alerts every 11 seconds. If an Alert Rule does not
immediately start, allow 10–15 seconds before you start troubleshooting.
• Use the Order field to control which Alert Rule runs if two Alert Rules have similar conditions set.
Task templates
• Create a user called Event Management (or a similar name) so that the Created by field in a task
template (for example, Incident) can be set to indicate that Event Management was the source of the
task.
• To perform any dynamic value assignment or to override OOB dynamic value assignment, use the
EvtMgmtCustomIncidentPopulator script include.
Remediation
• Always set orchestration workflow properties to the Remediation Task [em_remediation_task] table.
• Use ECC Queue and Workflow > Live Workflow > All Contexts to find more detailed information on
remediation activities.
• Use Service Groups to group business services into logical groups to reduce the number of services
displayed on the Service Health dashboard.
• Import manually built service maps. For a description of the conversion process, see Import a business
service as a manual service.
Performance
Enable multi-node in production environments and set values based on the size of the deployment and
expected event rate.
The Operational Metric collector logs and files are located under the path $(MID_SERVER_DIR)/agent.
Use these logs and files for troubleshooting and monitoring purposes.
The performance of Operational Metrics can be checked in the MID Server log file when the mid.log.level
MID Server parameter is set to debug.
Operational Metrics performance numbers are available in the sa_performance_statistics table. To display
the performance numbers, filter the Performance Statistics list for Metric Collector.
Additional fields should be included in the Additional info field of the event. Additional fields should not be
added to an event by adding a custom field to the event table [em_event]. For more information about how
to include additional fields in events, see Custom alert fields on page 931.
Administer events
An event is a notification from one or more monitoring tools that indicates something of interest has
occurred, such as a log message, warning, or error.
Event Management receives external events and generates alerts based on event and alert rules. Events
can be sent directly to your instance using an email server, script, SNMP trap, or a web service API. The
corresponding alerts appear on dashboards for tracking and remediation purposes.
As the computer, software, or service generates events, the MID Server polls the external event tracking
tool. The MID Server, which maintains a connection to Event Management, sends the information to your
instance for storage, processing, and remediation.
Event fields uniquely identify each event. Event Management uses this information to determine whether to
create a new alert or update an existing one. By default, each event is uniquely identified by the Message
Key. If the Message Key is not populated, a concatenation of the Source, Type, Node, Resource, and
Metric Name fields are used and these fields populate the Message Key. If identifiers are not supplied in
the event, you can add them with event rules.
The instance stores events in the Event [em_event] table and attempts to generate alerts based on pre-
defined rules and event mappings. Regardless of whether an alert is generated, the original event is always
available for review and remediation. Alerts are then generated according to the following process flow.
1. Find the best matching event rule for the event. A rule matches if the source of the event matches the
source specified in the rule, the event matches the optional rule Filter, and the event additional_info
value matches the rule Additional Information filter. A rule without any filter, for example one with no
source filter or no Additional Information filter, is ignored. If multiple rules are defined for the same
type of event, the rule Order determines the order of rule application.
• If the rule Ignore check box is selected, no alert generates. However, the event is still available for
review and remediation.
• If the Transform check box is selected, apply the transforms. If transform compose parameters are
also set, apply additional content to display to the user in the alert.
• When the Threshold check box is selected, accumulate all events until the threshold is met.
Generate a single alert for the events.
2. Search for an event field mapping even if there was no event rule. If an event field mapping is found,
apply the mapping information. If the event has no severity after the event transformations, retain the
event for reference purposes and do not generate an alert.
3. Search the Alert [em_alert] table for a matching message key. If a matching message key exists,
update the alert according to the event information. Otherwise, create an alert. Later, if another event
has the same matching key, associate the events under a single alert. If possible, bind the alert to a
specific CI for root cause analysis.
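To illustrate step 3, the following server-side sketch shows a message-key lookup against the Alert
[em_alert] table using GlideRecord. It demonstrates only the de-duplication logic; it is not the actual
Event Management implementation:

// Look up an existing alert by message key. If one exists, the new event
// would be associated with it; otherwise a new alert would be created.
var messageKey = 'SCOM&name.of.node.com&Disk space&D:&Percentage Logical Disk Free Space';
var alert = new GlideRecord('em_alert');
alert.addQuery('message_key', messageKey);
alert.query();
if (alert.next()) {
    gs.log('Event joins existing alert ' + alert.getValue('number'));
} else {
    gs.log('No matching alert: a new alert would be created');
}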
View events
Event Management tracks individual events to manage external systems. An event is a notification
from one or more monitoring tools that indicates that something of interest has occurred, such as a log
message, warning, or error. Event Management receives or pulls events from one or more external event
sources and stores them in the Event [em_event] table. Event Management provides a list of raw incoming
events.
Role required: evt_mgmt_admin, evt_mgmt_operator, evt_mgmt_user, or evt_mgmt_integration
The event monitoring tool generates the values of the source and resource fields. Event Management
implementers can define event types and register nodes to help uniquely identify incoming events and
create alerts for the specific needs of the enterprise. Event Management uses this information to determine
whether to create a new alert or update an existing one.
An event source may generate duplicate events with the same identifying information. For events with the
same identifying information, Event Management uses the time interval between events to determine if
events represent an existing issue or new issue.
Additional fields should be included in the Additional info field of the event. Do not add additional fields to
an event by adding a custom field to the event table [em_event]. For more information about how to include
additional fields in events, see Custom alert fields on page 931.
Note: Business rules that are written for alert tables [em_alert] must be highly efficient or they may
result in performance degradation.
Time of event: The time that the event occurred, in the time zone of the network node. Populated by the
external event monitoring tool.
Source: The event monitoring software that generated the event, such as SolarWinds or SCOM. This field
has a maximum length of 100. It was formerly known as event_class. Populated by the external event
monitoring tool.
Description: The reason for event generation, with extra details about the issue, for example, a server
stack trace or details from a monitoring tool. This field has a maximum length of 4000. Populated by the
external event monitoring tool.
Connector instances
Connector instances use script instructions that enable the MID Server to connect to external servers to
obtain event information. Each connector instance is specific to an event source vendor.
Configure a connector instance
Configure a connector instance to schedule the frequency of event collection.
Before starting this procedure:
• Locate or define a connector definition
• Create the credentials to connect to the event source.
• If the connector instance is to be used for the collection of Operational Metrics, the ITOA Metric
Management plugin must be activated.
You can use a connector instance to control the location and manner in which events arrive from external
sources. You can optionally select to collect Operational Metrics information from the external sources that
you are connecting to. The Operational Metrics feature is for the investigation of alert root causes, analysis
of trends, threshold definitions, and anomaly predictions. You can run a test to confirm that connector
instance parameters let the MID Server receive events from a supported external event source to the
Event [em_event] table. The test also confirms:
• The MID Server is running.
• The host is running on the IP address that is in the Host IP field.
• Both the MID Server and the host are running in the same domain.
• The connector instance values fields are valid.
• A connection can be made to SCOM, using API, to retrieve events.
Note:
• The metrics connector supports working against the SCOM database (MSSQL) that is configured to
work with SSL.
• This feature is not supported on a MID Server that is installed in a directory that has a space in its
path.
Connector definitions
Each connector definition is specific to an event source vendor. The connector definition specifies the MID
Server script include that pulls events from the external event source. In addition, the connector definition
specifies what connector instance value parameters are needed to connect to the external event source
host.
Create a connector definition
Create a connector definition to support a new external event source from which to receive events.
Write custom JavaScript code that accomplishes these actions:
• Connect to an event monitoring tool.
• Retrieve events from an event monitoring tool.
• Send events to the Event [em_event] table using the web service API.
b) Specify the information required for the execute function. This function retrieves the information
from the external source.
8. Click Submit.
• Send events to the Event [em_event] table using a web service API.
1. Create a custom MID Server script include. This example uses Groovy, which is deprecated; using
JavaScript instead of a Groovy script include is recommended.
a) Implement these methods:
• @Override OperationStatus testConnection()
• @Override OperationStatus execute()
c) Import platform classes for event creation, sending, logging, and third-party connector base
classes.
package com.service_now.mid.probe.tpcon.test
import com.glide.util.Log
import com.service_now.mid.MIDServer
import com.service_now.mid.probe.event.IEventSender
import com.service_now.mid.probe.event.SNEventSender
import com.service_now.mid.probe.tpcon.OperationStatus
import com.service_now.mid.probe.tpcon.OperationStatusType
import com.service_now.mid.probe.tpcon.ThirdPartyConnector
import com.snc.commons.eventmgmt.Event
d) Connect to the data collector, such as SCOM or VMware vRealize Hyperic (Hyperic), with the API
provided by the collector. Use the context parameters to set the event field values that came from
a connector instance.
e) Implement the execute function in the script. It reads events from the connector using its API,
creates Event objects, and sends them to the ServiceNow instance.
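The original example is not reproduced here; the following is a minimal sketch of the same pattern in
JavaScript, the recommended language. The fetchExternalEvents helper and the monitor name are
placeholders, and sendEvent is assumed to be the sender's dispatch method, following the SolarWinds
sample later in this section:

execute: function() {
    var retVal = {};
    // Placeholder: read new events from the external monitor's API,
    // starting from the last signature stored by the framework.
    var externalEvents = this.fetchExternalEvents(
        this.probe.getParameter("last_event"));
    var sender = SNEventSenderProvider.getEventSender();
    for (var i = 0; i < externalEvents.length; i++) {
        var event = Event();
        event.setSource("MyMonitor"); // placeholder monitor name
        event.setEmsSystem(this.probe.getParameter("connector_name"));
        event.setText(externalEvents[i].message);
        event.setHostAddress(externalEvents[i].host);
        event.setResolutionState("New");
        sender.sendEvent(event); // assumed dispatch method
    }
    retVal['status'] = SUCCESS.toString();
    return retVal;
},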
2. Navigate to MID Server > MID Server Script Files and create a script. Specify the Parent field as
Groovy, complete the form as appropriate, and then click Submit.
3. Navigate to Event Management > Connector Definitions and create a connector definition.
4. In the Groovy script to run field, select the MID Server script file and complete the form as
appropriate. In addition to username and host, you can add any other parameter, for example, port.
Then click Submit.
5. Navigate to Event Management > Connector Instances and create a connector instance.
6. In the Connector definition field, select the connector definition, complete the form as appropriate,
and then click Submit.
7. To confirm or debug the script, use debug printouts from Groovy to the MID Server log.
8. To monitor incoming events using the custom connector instance, navigate to ECC > Queue and filter
on ConnectorProbe.
Note: You can use the MetricCollector MID Server script include that is provided in the base
instance as an example for these items.
• Be written in JavaScript.
• In the connector definition, be specified in the JavaScript to run field.
• Include the retrieveKpi() method. The connector framework uses the retrieveKpi() method to
collect metrics. In this method, include the logic for reading data from the external source system and
transforming it into the RawMetric object (in ServiceNow format).
• Read the required connector instance parameters defined in the probe object.
• Collect raw metric data from the metric source.
• Prepare a RawMetric record. The RawMetric object can be created by passing these attributes in the
constructor:
• Use the handleMetric function to add the raw metric information from the RawMetric record to be
processed in the MID Server. To get access to the MetricHandle object, make the following call to
the MetricCollector:
var metricHandler = MetricFactory.getMetricHandler()
Then for each RawMetric object execute:
metricHandler.handleMetric(rawMetric)
• Write error logs to the MID Server log using ms.log. You can use the debugLog() function in the
MetricCollector script as an example to write debug messages.
Test connector requirements - Implement the testConnection() function. The function returns either
Success or Failure. If Failure, check the writeError() function for a failure message. You can also
check the sa_performance_statistics table to see that metrics are being received. For an example of test
connector script, see the SolarWindsJS Mid Server script include.
Metric types that the connector collects:
• Use set active: true/false to define the types you want to collect.
• In the metric collector script, use HandleTypes.getActiveTypesForSql("Your Source")
to get a comma-separated list of supported types. If a type is not active, the type is not returned. If the
metric source is in registration mode, the active-type filter is not applied: in registration mode, all types
are collected so that any missing types can be added.
• You can collect all metrics types. The metricHandler.handleMetric() function filters the
types that are collected. However, using only active types in the query commands should improve
performance.
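Putting these requirements together, here is a skeletal sketch of a retrieveKpi() implementation. The
querySource helper and the RawMetric constructor arguments are hypothetical placeholders, since both
depend on the metric source; the handler calls follow the pattern described above:

retrieveKpi: function() {
    var metricHandler = MetricFactory.getMetricHandler();
    // Query only the active types for this source (comma-separated list).
    var activeTypes = HandleTypes.getActiveTypesForSql("Your Source");
    var samples = this.querySource(activeTypes); // placeholder helper
    for (var i = 0; i < samples.length; i++) {
        // Hypothetical constructor arguments; use the attributes listed
        // for the RawMetric object.
        var rawMetric = new RawMetric(samples[i].name, samples[i].value,
            samples[i].timestamp);
        metricHandler.handleMetric(rawMetric);
    }
},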
1. Create a custom MID Server script include. As an example, use the MetricCollector script that is
provided with the base instance.
2. Navigate to MID Server > Script Includes and click New.
3. In the Name field, specify the name.
4. Select Active.
5. In the Script field, enter the custom metric collector script. Use JavaScript to compose this script.
6. Complete the form as appropriate, and then click Submit.
7. Configure a dedicated MID Server. For more information, see Configure a MID Server for Operational
Metrics on page 1034.
8. Navigate to Event Management > Event Connectors (Pull) > Connector Definitions and click New.
a) In the Name field, specify the name.
b) Select the Collect metrics option.
c) Right-click the form header and select Save.
d) In the JavaScript to run field, select the newly created custom MID Server script file.
9. In the Connector Parameters area, specify the parameters for all additional information needed for
the connector, for example, topics that you want to subscribe to.
10. In the Connector Definition to MID Server Capabilities area, specify the details of the MID Server.
Use a dedicated MID Server with ServiceAnalytics as a supported application, and with the ITOA
Metrics capability. For more information, see Configure a MID Server for Operational Metrics.
11. Complete the form as required, then click Submit.
12. Navigate to Event Management > Event Connectors (Pull) > Connector Instances and click New.
d) In the Metrics collection schedule field, specify the number of seconds for the metrics collection
schedule job to wait before it repeats.
e) Right-click the form header and select Save.
f) Complete the form as required, then click Save.
13. Click Test Connector to verify the connection between the MID Server and the custom metric
connector.
14. Click Update.
• To confirm or debug the custom metric connector script, use debug printouts to the MID Server log.
• To monitor incoming events using the custom metric connector instance, navigate to ECC > Queue and
filter on ConnectorProbe.
Note: The sample connector code is not updated automatically. Check the connector code on the
instance from time to time, as it might be updated.
You can use the following code sample to create a JavaScript connector for receiving events from a
SolarWinds external source.
var SUCCESS =
 Packages.com.service_now.mid.probe.tpcon.OperationStatusType.SUCCESS;
var FAILURE =
 Packages.com.service_now.mid.probe.tpcon.OperationStatusType.FAILURE;
var Event = Packages.com.snc.commons.eventmgmt.Event;
var SNEventSenderProvider =
 Packages.com.service_now.mid.probe.event.SNEventSenderProvider;
var HTTPRequest = Packages.com.glide.communications.HTTPRequest;

// This sample is abridged: sections elided from the original script are
// marked with "// ..." comments.
SolarWindsJS.prototype = Object.extendsObject(AProbe, {

    // Verify that the MID Server can reach the SolarWinds server.
    testConnection: function() {
        var retVal = {};
        try {
            // ... (send a test query to the SolarWinds server)
            if (response.getStatusCode() == 200)
                retVal['status'] = SUCCESS.toString();
        } catch (e) {
            ms.log("Failed to connect to Solarwinds");
            ms.log(e);
            retVal['status'] = FAILURE.toString();
        }
        return retVal;
    },

    // Pull new events and send them to the instance.
    execute: function() {
        // ... (retrieve events since the last signature)
        retVal['status'] = SUCCESS.toString();
        if (events.length > 0) {
            // the result is sorted, but the sort order can differ. Therefore
            // the last signature is either on the first or the last event
            var firstEventSignature = events[0].getField("swEventId");
            var lastEventSignature =
                events[events.length - 1].getField("swEventId");
            // ... (store the newer signature for the next pull)
        }
        return retVal;
    },

    // Send a query to SolarWinds and validate the HTTP response.
    getResponse: function(query) {
        // ... (build and send the HTTP request)
        if (response == null) {
            ms.log("SolarWindJS Connector: Failed to bring data. Null response");
            return null;
        }
        if (response.getStatusCode() != 200) {
            ms.log("SolarWindJS Connector Error Code: " +
                response.getStatusCode());
            return null;
        }
        // ... (return the response body)
    },

    // Query events and convert each result row to an Event object.
    getEvents: function(latestTimestamp) {
        // ... (compose the base query)
        if (latestTimestamp != null) {
            query = query + " AND EventID > " + latestTimestamp +
                " order by EventID asc ";
        } else {
            // if this is the first time just take maxEventsCount events
            // from the end
            query = query + " order by EventID desc ";
        }
        // ... (run the query and parse the JSON result)
        if (resultJson == null)
            return null;

        var i = 0;
        for (; i < resultJson.results.length; i++) {
            var event = Event();
            if (resultJson.results[i].EventTime != null) {
                // ... (convert EventTime to GMT and set it on the event)
            }
            // ... (sanitize the message text)
            event.setText(sanitizedMessage);
            if (nodes[resultJson.results[i].NetworkNode] != null) {
                var node = nodes[resultJson.results[i].NetworkNode];
                event.setHostAddress(node[0]);
                event.setField("hostname", node[1]);
            }
            event.setField("eventType", resultJson.results[i].EventType);
            if (eventTypes[resultJson.results[i].EventType] != null)
                event.setField("icon",
                    eventTypes[resultJson.results[i].EventType]);
            event.setField("netObjectId", resultJson.results[i].NetObjectID);
            event.setField("swEventId", resultJson.results[i].EventID);
            event.setField("networkNodeId",
                resultJson.results[i].NetworkNode);
            event.setSource(SOLAR_WINDS);
            event.setEmsSystem(emsName);
            event.setResolutionState("New");
            // ... (append the event to the events array)
        }
        return events;
    },

    // Map each SolarWinds event type to its icon.
    getEventTypes: function() {
        // ... (query the event types and parse the JSON result)
        if (resultJson == null)
            return null;
        var i = 0;
        for (; i < resultJson.results.length; i++)
            resultMap[resultJson.results[i].EventType] =
                resultJson.results[i].Icon;
        return resultMap;
    },

    getApplications: function() {
        // ... (query the applications and parse the JSON result)
        if (resultJson == null)
            return null;
        return resultMap;
    },

    getComponents: function() {
        var resultJson =
            this.getJSON("SELECT ComponentID, Name, applicationID FROM " +
                "Orion.Apm.Component");
        if (resultJson == null)
            return null;
        return resultMap;
    },

    // Map each node ID to its [IP address, DNS name] pair.
    getNodes: function() {
        // ... (query the nodes and parse the JSON result)
        if (resultJson == null)
            return null;
        var i = 0;
        for (; i < resultJson.results.length; i++)
            resultMap[resultJson.results[i].NodeID] =
                [resultJson.results[i].IPAddress,
                 resultJson.results[i].DNS];
        return resultMap;
    },

    // ... (additional helper methods are elided from this excerpt)

    type: "SolarWindsJS"
});
You can improve the configuration settings of a custom connector that connects to an external source by
considering the following tips.
Alert information collection — Collect the required information about the alerts:
• Node – Identifier for the host (IP/FQDN, MAC address, name).
• Resource – For example, disk or CPU.
• Message key – The default is source&node&type&resource&metricName.
• Severity – Whether one or more fields in the target monitor must be mapped to ServiceNow
severities.
• Resolution State – If the source monitor has closed alerts, the alerts are closed in the ServiceNow
instance.
• Type – Event type.
• Metric name – The name and description of the metric to which the alert applies.
• Additional information – Any other information that might be used to bind the alert to the CI or used
for alert rules, the metric value, or the alert ID in the target monitor.
Create ServiceNow events — If an event must be created, ensure that you have the required details to
populate node, source, event class (EMS system), resource, message key, severity, state, type, metric
name, additional information, and description:
• Source – the name of the monitor.
• Event class = EMS system = the name of the connector instance =
this.probe.getParameter("connector_name").
• Consider where you can cache information to avoid unnecessary API calls.
Execution:
• Differentiate between the first pull of events and subsequent pulls. The first pull of events:
• Takes events that already exist.
• Must be limited either in time or in the number of events that are pulled.
• Must only pull open alerts.
• Use the main API to get all event data since the last signature, and limit the number of events.
• Create ServiceNow events by parsing.
• Send all events using: var sender = SNEventSenderProvider.getEventSender()
• Update the lastSignature timestamp.
• Fill the resultant record with the status.
Last signature — Each time that events are pulled, all events whose time of occurrence is greater than
or equal to the last signature timestamp are pulled. If the last event arrives exactly at the last signature
time, it is pulled repeatedly. To prevent this, 'remember' the last event ID. Thereafter, when events are
next pulled, exclude this event ID if it is in the group of pulled events and its time of event remains
unchanged.
Parameters — Determine which parameters are available and how to use them:
• For context parameters, such as connector name, user name, password, host, and last event
signature, for example: this.probe.getParameter("connector_name").
• For connector instance values, for example: this.probe.getAdditionalParameter("port").
Time of event — The Time of event is the time that the event occurred. Convert the event time to GMT
using the yyyy-MM-dd HH:mm:ss format.
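Tying these tips together, here is a sketch of creating and sending a single event inside a connector
script. The field setters follow the SolarWinds sample shown earlier; the values and the sendEvent
dispatch call are illustrative assumptions:

var sender = SNEventSenderProvider.getEventSender();
var event = Event();
event.setSource("MyMonitor");                          // name of the monitor
event.setEmsSystem(this.probe.getParameter("connector_name"));
event.setHostAddress("10.0.0.1");                      // node identifier
event.setField("resource", "CPU");
event.setField("metric_name", "cpu_utilization");
event.setText("CPU utilization exceeds 90%");
event.setResolutionState("New");
// Convert the time of event to GMT (yyyy-MM-dd HH:mm:ss) before setting it.
sender.sendEvent(event);                               // assumed dispatch method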
Note: You can reset the default MID Server by clearing the mid.server.connector_default
property or by updating the value of this property to a different MID Server. This property is not
automatically reset in all scenarios.
Description: Type a description for the use of the HPOM event collection instance.
Connector definition: The vendor and protocol used to gather events from the external event source.
Select the HPOM connector definition.
Last error message: The last error message is automatically updated.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
MID Servers (MID Server for Connectors section): Specify a MID Server that is up and valid. You can
configure several MID Servers. If the first is down, the next MID Server is used; if that MID Server is
not available, the next is selected, and so on. MID Servers are sorted according to the order in which
their details were entered in the MID Server for Connectors section. If no MID Server is specified, an
available MID Server that has a matching IP range is used.
• url. Specify a URL. The JDBC protocol uses a connection string to establish the authentication and
other parameters to establish a connection between the client and the database. Each database
has its own connection string format. In the following URLs, replace the variables, for example,
<username> and <password>, with your values.
• For an IBM DB2 Universal URL, specify: [jdbc:db2://sysmvs1.stl.ibm.com:<port number>/
<database name>:user=<username>;password=<password>]
For example: jdbc:db2://sysmvs1.stl.ibm.com:5021/regionsdb:user=example;password=wlrs21
• For an Oracle URL, specify: [jdbc:oracle:thin:@<IP address>:<port number>:<database name>]
For example: jdbc:oracle:thin:@172.31.255.255:4028:OracleMain
• For a Microsoft SQL URL, specify: [jdbc:sqlserver://<IP
address>;user=<username>;password=<password>]
For example: jdbc:sqlserver://localhost;user=example;password=w3lrs2
• For a MySQL URL, specify: [jdbc:mysql://<IP address>/database?
user=<username>;password=<password>]
For example: jdbc:mysql://localhost/database?user=example;password=65wlrs
• For a Sybase URL, specify: [jdbc:sybase:Tds:<IP address>:<port number>/<database name>?
user=<username>;password=<password>]
5. Use a network tool, such as ping, to verify that the HPOM server is running and the IP address
matches the value in the Host IP field.
6. Click Test connector to verify the connection between the MID Server and the HPOM connector.
7. If the test fails, follow the instructions on the page to correct the problem and then run another test.
Tip: Use a network tool, such as ping, to verify that the HPOM server is running and the IP
address matches the value in the Host IP field.
8. After a successful test, select the Active check box and then click Update.
Name: Specify a unique name for the Hyperic connector instance. Mandatory.
Host IP: Specify the Hyperic IP address. Mandatory.
Credential: In the Credentials form, click New and create the required credentials. Save the credentials
using a unique and recognizable name, for example, HypericOPS. Mandatory.
Schedule (seconds): The frequency in seconds at which the system checks for new events from Hyperic.
Description: Type a description for the use of the Hyperic event collection instance.
Connector definition: The vendor and protocol used to gather events from the external event source.
Select the Hyperic connector definition. Mandatory.
Last error message: The last error message is automatically updated.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
MID Servers (MID Server for Connectors section): Specify the MID Server that must run the Hyperic
connector instance. If not specified, an available MID Server that has a matching IP range is used.
3. Right-click the form header and select Save.
4. In the Connector Parameters section, specify the values of the mandatory Hyperic parameters.
a) port — Default value: 7080.
b) protocol — Default value: http.
5. Click Test connector to verify the connection between the MID Server and the Hyperic connector.
6. If the test fails, follow the instructions on the page to correct the problem and then run another test.
Tip: Use a network tool, such as ping, to verify that the Hyperic server is running and the IP
address matches the value in the Host IP field.
7. After a successful test, select the Active check box and then click Update.
Note: If the OperationsManagerDW database has been renamed, also change the database
name in the SCOMConnector.groovy MID Server Script as well as in the MetricCollector
script include.
• If you upgraded from an earlier release and a SCOM connector was defined:
1. Configure the Operational Metrics extension and assign it to the MID Server that is used to collect
events and Operational Metrics. See Configure the Operational Metric extension on page 1036.
2. Define the Log On user on the MID Server service, ensuring that This account is selected and
specifies a user in the Windows domain with read access to the SCOM database.
If you want to receive SCOM alerts, you can obtain the redistributable SCOM files from your SCOM
application. Add the files to the MID Server, and then configure a SCOM connector instance to collect the
alerts and Operational Metric raw data.
1. On the SCOM server, download the following files to a local computer.
Note: To work with both SCOM 2012 and SCOM 2007 in your instance, before
uploading the following files to your instance, append .2012 to the end of the
filename Microsoft.EnterpriseManagement.OperationsManager.dll
that is found in the 2012 path and append .2007 to the end of the filename
Microsoft.EnterpriseManagement.OperationsManager.dll that
is found in the 2007 path. This enables the MID Server to load the applicable
dll when both SCOM versions are deployed. Do not append 2007 to the
Microsoft.EnterpriseManagement.OperationsManager.Common.dll file (for SCOM 2007).
4. Repeat step 3, creating a separate record for each of the remaining DLL files. Ensure that you have a
unique identifier after the SCOM version for each file that you attach, for example 2012 2.
Name: Specify a unique name for the SCOM connector instance.
Host IP: Specify the SCOM IP address.
Credential: In the Credentials form, click New and create the required credentials. Save the credentials
using a unique and recognizable name, for example, SCOMOPS.
Schedule (seconds): The frequency in seconds at which the system checks for new events from SCOM
Operations.
Description: Type a description for the use of the SCOM event collection instance.
Connector definition: The vendor and protocol used to gather events from the external event source.
Select the SCOM connector definition.
Last error message: The last error message is automatically updated.
Active: Select this option only after running a successful test.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
Bi-directional: Select to invoke the bi-directional option. This option enables the bi-directional exchange
of values to and from the external event source. The Last bi-directional status option displays only when
this option is selected.
Note:
• The metrics connector supports working with the SCOM database (MSSQL) that is configured to
work with SSL.
• This feature is not supported on a MID Server that is installed in a directory that has a space in its
path.
Metrics collection schedule (seconds): The time, in seconds, to repeat the Operational Metrics collection
scheduled job. This option displays only when the Metrics collection option is selected.
Note: If you are working with SCOM 2016, specify 2012 for the scom_version
parameter.
Tip: Use a network tool, such as ping, to verify that the SCOM server is running and the IP
address matches the value in the Host IP field.
12. After a successful test, select the Active check box and then click Update.
Note:
The default binding rules that use SCOM as the external source, and that apply to IT alerts
and Operational Metric raw data, are defined in SCOM Management Packs.
Only alerts from the specified SCOM group arrive at the instance.
Configure event collection from Netcool
The Netcool connector requires configuration prior to receiving events from IBM Netcool/OMNIbus Object
Servers and Impact Servers.
Download the jconn3.jar from http://www.java2s.com/Code/Jar/j/Downloadjconn3jar.htm.
Role required: evt_mgmt_admin
To use the Netcool connector, configure a connector instance for the MID Server and add the
jconn3-1.0.jar file that contains the necessary Netcool Sybase jar file.
1. Navigate to MID Server > JAR Files.
2. Click New.
3. Fill in the Name and Description fields.
4. Attach the jar file that you downloaded in the Before you begin section:
a) Click the attachments icon.
5. Click Submit.
6. Navigate to Event Management > Connectors > Connector Instances.
7. Click New and create a connector instance with the following details:
Name: Specify a unique name for the Netcool connector instance.
Host IP: Specify the Netcool IP address.
Credential: In the Credentials form, click New and create the required credentials. Save the credentials
using a unique and recognizable name, for example, NetcoolOPS.
Schedule (seconds): The frequency in seconds at which the system checks for new events from Netcool.
Description: Type a description for the use of the Netcool event collection instance.
Connector definition: The vendor and protocol used to gather events from the external event source.
Select the IBM Netcool connector definition.
Last error message: The last error message is automatically updated.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
MID Servers (MID Server for Connectors section): Specify the MID Server that must run the Netcool
connector instance. If not specified, an available MID Server that has a matching IP range is used.
8. Right-click the form header and click Save.
9. In the Connector Parameters section, specify the value of the mandatory Netcool parameter, url. Build
a URL using the provided format.
For example: jdbc:sybase:Tds:10.11.15.118:4100/NCOMS
Note: After updating the Host IP field, you must update the URL parameter, as it does not
update automatically.
10. Use a network tool, such as ping, to verify that the Netcool server is running and the IP address
matches the value in the Host IP field.
11. Right-click the form header and click Save.
12. Click Test connector to verify the connection between the MID Server and the Netcool connector.
13. If the test fails, follow the instructions on the page to correct the problem and then run another test.
Tip: Use a network tool, such as ping, to verify that the Netcool server is running and the IP
address matches the value in the Host IP field.
14. After a successful test, select the Active check box and then click Update.
Name: Type a descriptive name for the SolarWinds connector. Mandatory.
Host IP: Specify the SolarWinds IP address. Mandatory.
Credential: Select a credential that uses Basic Auth. The format for the User name is domain\username.
Mandatory.
Schedule (seconds): The frequency in seconds at which the system checks for new events from the
SolarWinds server.
Description: Type a description for the use of the SolarWinds event collection instance.
Connector Definition: Select Solarwinds. Mandatory.
Last error message: The last error message is automatically updated.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
MID Servers (MID Server for Connectors section): For cases in which the connector cannot be
determined from the IP address range defined on the MID Servers, such as in a multi-domain setup,
select a MID Server that is up and valid to run the SolarWinds connector instance. The user account
that the MID Server uses to collect events from SolarWinds requires the evt_mgmt_integration role.
3. Right-click the form header and select Save.
4. In the Connector Parameters section, specify the value of the port parameter. Default value 17778.
5. Click Test connector to verify the connection between the MID Server and the SolarWinds connector.
6. If the test fails, follow the instructions on the page to correct the problem and then run another test.
Tip: Use a network tool, such as ping, to verify that the SolarWinds server is running and the
IP address matches the value in the Host IP field.
7. After a successful test, select the Active check box and then click Update.
Name: Specify a unique name for the vRealize Operations connector instance. Mandatory.
Host IP: Specify the vRealize Operations IP address. Mandatory.
Credential: In the Credentials form, click New and create the required credentials. Save the credentials
using a unique and recognizable name, for example, vRealizeOPS. Mandatory.
Schedule (seconds): The frequency in seconds at which the system checks for new events from vRealize
Operations.
Description: Type a description for the use of the vRealize Operations connector.
Connector definition: The vendor and protocol used to gather events from the external event source.
Select the vRealize connector definition. Mandatory.
Last error message: The last error message is automatically updated.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
MID Servers (MID Server for Connectors section): Specify a MID Server that is up and valid. You can
configure several MID Servers. If the first is down, the next MID Server is used; if that MID Server is
not available, the next is selected, and so on. MID Servers are sorted according to the order in which
their details were entered in the MID Server for Connectors section. If no MID Server is specified, an
available MID Server that has a matching IP range is used.
3. Right-click the form header and select Save.
The connector instance values are added to the form and the parameters that are relevant to the
connector appear.
4. In the Connector Parameters section, specify the values of the mandatory vRealize Operations
parameters.
a) port — Default value: 443.
b) protocol — Default value: https.
5. Click Test connector to verify the connection between the MID Server and the vRealize Operations
connector.
6. If the test fails, follow the instructions on the page to correct the problem and then run another test.
Tip: Use a network tool, such as ping, to verify that the vRealize Operations server is running
and the IP address matches the value in the Host IP field.
Name: Type a descriptive name for the connector.
Host IP: Specify the Zabbix IP address.
Credential: Select a credential that uses Basic Auth. The format for the User name is domain\username.
Read-only permission is sufficient.
Schedule (seconds): The frequency in seconds at which the system checks for new events from the
Zabbix server.
Description: Type a description for the use of the Zabbix event collection instance.
Connector Definition: Select Zabbix.
Last error message: The last error message is automatically updated.
Last run time: The last run time value is automatically updated.
Last run status: The last run status is automatically updated.
MID Servers (MID Server for Connectors section): For cases in which the connector cannot be
determined from the IP address range defined on the MID Servers, such as in a multi-domain setup,
select the MID Server to run the Zabbix connector instance.
3. Right-click the form header and select Save.
The parameters that are relevant to the Zabbix connector appear.
4. In the Connector Instance Values section, specify the Zabbix values.
a) days from — Specify the number of days in the past for which events must be collected. Default
value: 2 days.
Note: The maximum number of past events that can be collected is 3,000. The first time that this
connector runs, past events are counted back from the current date. In subsequent runs, the
3,000-event maximum is counted from the last event that was collected.
b) port — Default value: 80.
7. If the test fails, follow the instructions on the page to correct the problem and then run another test.
Tip: Use a network tool, such as ping, to verify that the Zabbix server is running and the IP
address matches the value in the Host IP field.
8. After a successful test, select the Active check box and then click Update.
Do not add additional fields to an event by adding a custom field to the event table [em_event]. Instead,
include additional fields in the Additional information field of the event. For more information about how
to include additional fields in events, see Custom alert fields on page 931.
1. Send the request with these headers:
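For a JSON payload sent with basic authentication, the request headers are typically the following (exact
requirements depend on your integration):
• Content-Type: application/json
• Accept: application/json
• Authorization: basic authentication credentials for a user with the evt_mgmt_integration role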
2. One or more events in JSON format can be sent as the payload of the web service call. Event fields
that should be populated are:
As shown in the examples that follow, commonly populated fields include source, event_class, node,
resource, metric_name, type, severity, description, and additional_info.
3. To create multiple records with a single call, trigger the event web service using the following url,
where the <instance name> variable is replaced with the name of the required instance:
https://<instance name>.service-now.com/em_event.do?
JSONv2&sysparm_action=insertMultiple
Example showing the payload for two events that are sent in a single web service call:
{ "records":
[
{
"source":"SCOM",
"event_class":"SCOM 2012 on scom.server.com",
"resource":"D:",
"node":"name.of.node.com",
"metric_name":"Percentage Logical Disk Free Space",
"type":"Disk space",
"severity":"4",
"description":"The disk D: on computer V-W2K8-abc.abc.com is running
out of disk space. The value that exceeded the threshold is 38% free
space.",
"additional_info":"{
'scom-severity':'Medium',
'metric-value':'38',
'os_type':'Windows.Server.2008'
}"
},
{
"source":"SCOM",
"event_class":"SCOM 2012 on scom.server.com",
"resource":"MSSQL-database-name",
"node":"other.node.com",
"metric_name":"DB Allocated Size (MB)",
"type":"Database Storage",
"severity":"3",
"description":"High number of active connections for MSSQL-database-
name running on name.of.node.com. Active connections exceed 5000.",
"additional_info":"{
'scom-severity':'High',
'metric-value':'5833',
'os_type':'Windows.Server.2008'
}"
}
]
}"
4. To create one record with a single call, trigger the event web service using the following URL, where
the <instance name> variable is replaced with the name of the required instance:
https://<instancename>.service-now.com/api/now/table/em_event
Note: Use of this URL is limited in the rate of events that it can support.
Example showing the payload for one event that is sent in a single web service call:
{ "record":
[
{
"source":"SCOM",
"event_class":"SCOM 2007 on scom.server.com",
"resource":"C:",
"node":"name.of.node.com",
"metric_name":"Percentage Logical Disk Free Space",
"type":"Disk space",
"severity":"4",
"description":"The disk C: on computer V-W2K8-dfg.dfg.com is running
out of disk space. The value that exceeded the threshold is 41% free
space.",
"additional_info":"{
'scom-severity':'Medium',
'metric-value':'41',
'os_type':'Windows.Server.2008'
}"
}
]
}"
You can also send events with a curl command.
1. Compose a curl command that posts the event payload. For example, this command sends two
simulated events in a single call:
curl -X POST -H "Content-Type: application/json" -d "{ \"records\":
[
{
\"source\" : \"Simulated\",
\"node\" : \"nameofnode\" ,
\"type\" : \"High Virtual Memory\",
\"resource\" : \"C:\",
\"severity\" : \"5\",
\"description\" : \"Virtual memory usage exceeds 98%\",
\"ci_type\":\"cmdb_ci_app_server_tomcat\",
\"additional_info\":\"{"name":"My Airlines"}\"
},
{
\"source\" : \"Simulated\",
\"node\" : \"01.myairlines.com\" ,
\"type\" : \"High CPU Utilization\",
\"resource\" : \"D:\",
\"severity\" : \"5\",
\"description\" : \"CPU on 01.my.com at 60%\"
}
]
}" -u myUserID:myPassword "https://my-instance.service-now.com/
em_event.do?JSONv2&sysparm_action=insertMultiple"
2. Use the -H option. For the POST parameter, start the data block with an open bracket and delimit the
data with backslashes, as shown in the example above.
3. Use the -u option and the instance URL with login credentials. For example:
-u myUserID:myPassword "https://my-instance.service-now.com/em_event.do?
JSONv2&sysparm_action=insertMultiple"
You can use an event rule transform to parse the content depending on your business needs. The
ServiceNow base system can also perform test case analysis on the events from CloudWatch.
1. On the AWS console, select SNS and create a new SNS topic if one does not already exist.
2. Under the topic, create a new subscription.
a) Take the Topic ARN from the topic that you created.
The Amazon Resource Name (ARN) is necessary for binding an Event Management alert to a CI.
b) Set the Protocol to https.
c) Set the Endpoint to: https://<user>:<password>@<instance>/evt_mgmt_proc.do.
3. Wait until the subscription goes from Pending to Confirmed and the subscription ARN is populated.
4. Create alarms in CloudWatch to send to Event Management. Link them to the SNS topic that you just
created.
5. In your ServiceNow instance, navigate to Event Management > Rules > Event Rules.
6. Click the link for events or grouped events that are not mapped to rules.
7. Select a CloudWatch event that you want to use for creating the rule.
8. In the simple view, create a rule to manage the following incoming values from CloudWatch:
Data from the AWS event → field to use in the Event Field Rules section:
• node → node
• type → type
• description → description
• alarm body → additional_info
• AWS CloudWatch source → source
• Original source instance and SNS topic → event_class
Event Management can then create and update events, use rules to generate alerts, or change event
severity. The severity is updated after the impact calculation and stored in the em_impact_status table.
1. Navigate to System Properties > Email Properties.
2. In the Inbound Email Configuration section, in the Email receiving enabled option, select Yes to
enable the collection of events from email.
3. To configure the email information to pass to Event Management for event and alert processing,
navigate to System Policy > Email > Inbound Actions and in the list of Inbound Email Actions
records search for and open the default create event form.
a) In the Inbound Email Actions form, specify these values.
Active: Select the check box to enable the email notification.
Description: Type a description for this email message.
Target table: Select Event [em_event].
Action type: Select Record Action.
4. To customize the inbound parameters and map the text to event variables, add a Script.
For example:
current.source = 'email';
current.event_class = 'email';
current.description = email.body_text;
current.time_of_event = new GlideDateTime();
current.insert();
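A hedged variation of the same script that also derives the event severity from a keyword in the email
subject (the subject format here is an assumption for illustration):

current.source = 'email';
current.event_class = 'email';
current.description = email.body_text;
// Assume subjects such as "[CRITICAL] Disk full on host01".
if (email.subject.indexOf('CRITICAL') > -1)
    current.severity = '1'; // map the keyword to a critical severity
current.time_of_event = new GlideDateTime();
current.insert();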
5. From the instance, send an email that contains matching data, and then confirm that the event or alert
information appears in Event Management. For more information about sending email messages to
create events, see Inbound email actions.
Note: Ensure that the user who sends the email has the evt_mgmt_integration role.
6. Click Submit.
The inbound email is sent to the em_event table and regular Event Management processes continue, for
example, event rules and alert action rules run.
1. Navigate to System Properties > Email Properties.
2. In the Outbound Email Configuration section, in the Email sending enabled option, select Yes to
enable sending severity information by email.
3. Click Save.
4. Navigate to System Notification > Email > Notifications.
5. Click New.
a) In the Email Notification Actions form, specify these values.
Name: Type a descriptive name, for example, Business Service severity change.
Table: Select the required service type and its corresponding table: Alert Group, Discovered Service,
Technical Service, or Manual Service.
Active: Select the check box to enable email notification.
b) Click the Who will receive tab. In the Users area, search for and select the required user (the
user does not require System Administrator credentials). Repeat this step for as many users as
are required to receive this notification.
c) Click the When to send tab. In the choose field box, search for and select Severity.
d) Click the What will it contain tab. In the Subject field, type meaningful text that makes it clear to
the receiver of the email notification what the content of the email is.
e) In the Message HTML area, use the Select variables options to compose the content of the email
notification. For example, configure these variables to report a severity change:
severity changed
Severity: ${severity}
on business service
Name: ${name}
6. Click Submit.
Low → Warning
Medium → Major
High → Critical
Event Management provides default event field mappings for commonly used system monitoring tools. The
Event Mapping Pairs from event field mappings format the incoming event data for Event Management.
You can view the default event field mappings and mapping pairs by navigating to Event Management >
Event Field Mappings and double-clicking Name. Default event field mappings are available for the
following event sources:
• enterprises.20006.1.5
• enterprises.20006.1.7
• HPOMWIN
• Microsoft Operations Manager
• netappDataFabricManager
• oraEM4Traps
• SNMPv2 Generic Trap
• SolarWinds
• Trap From Enterprise 9
• vmwVC
• whatsup-whatsupState
netappDataFabricManager-dfmEventSeverity: Maps the source dfmEventSeverity field to the event
severity.
SNMP Traps: For SNMP traps, the following default event field mappings are active by default in the
SNMPv2 Generic Trap Source category.
Trap from Enterprise 9: For Trap from Enterprise 9, the following default event field mappings are active
by default in various Source categories.
4. Click Submit.
For example, see these values for a predefined rule that is applied to events in the Trap
From Enterprise 9 class. If the events contain the snmpTrapOID element with a
value of iso.org.dod.internet.private.enterprises.cisco.0.0, the mapping
rule changes the value to reload in alerts. If the events contain the snmpTrapOID
element with a value of iso.org.dod.internet.private.enterprises.cisco.0.1,
the mapping rule changes the value to tcpConnectionClose in alerts.
Name: cisco.snmpTrapOID
Source: Trap From Enterprise 9
Mapping type: Single field
From field: snmpTrapOID
To field: snmpTrapOID
Event Mapping Pairs:
• Pair 1 — Key: iso.org.dod.internet.private.enterprises.cisco.0.0, Value: reload
• Pair 2 — Key: iso.org.dod.internet.private.enterprises.cisco.0.1, Value: tcpConnectionClose
Test an event field mapping by sending an event that contains a field that is present in the event field
mapping.
Default event rules are available for various sources. You can customize these rules as necessary to
manage events and alert generation. To view the default rules, navigate to Event Management > Rules >
Event Rules.
Default event rules are available for the following sources:
• HPOMWIN
• oraEM4Traps
• SNMPv2 Generic Trap
• SolarWinds
• Trap From Enterprise 9
• vmwVC
Note: You can filter the Event Rule list to display the required subset of the information.
However, if you create a favorite link after filtering, when the link is clicked the Event Rule list
does not display the correct filter.
Find rules that apply to an event:
1. Navigate to Event Management > All Events, and then click an event Number.
2. Under Related Links, click Check process of event.
3. Review the information that appears above the Source field, and review all Processing Notes.
Find rules that apply to an alert:
1. Navigate to Event Management > All Alerts, and then click an alert Number.
2. In the processing notes, review information about the applied event rule.
Note: Do not change the view of the Event Rule list by selecting an option from the List controls
context menu in the form header.
Note: Only one rule is applied to each event, according to the filter applied to the events. Multiple
event rules do not apply to an event.
Create an event rule from an existing event or group of events:
1. Click the link for events or grouped events that are not mapped to rules.
3. Fill in or edit the fields, as appropriate. For example, use the Event Compose Fields section to map
event data to alert fields.
5. Click Submit.
If necessary, use either the simple or advanced view to create an event rule. For example, event rules are
useful for managing events that occur regularly.
Note: Only one rule is applied to each event, according to the filter applied to the events. Multiple
event rules do not apply to an event.
After you add advanced mapping information to the event rule, it may not be viewable from the simple
view. For example, the simple view cannot open an event rule that contains regex patterns, such as \b[A-
Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b. An error message reminds you that the rule cannot be viewed in
simple mode.
Options to create the rule are:
• Create an empty event rule and assign event fields for alert generation.
• Create a rule from an existing event or groups of events that do not have a rule, so the event fields are
copied to the Event Match Fields section of the rule.
Note: Event rules that are not configured to perform any action are skipped. Therefore, if the rule
is not configured as ignore, threshold, or binding, it is important to specify either the match or the
compose fields.
Create an event rule from an existing event:
1. Near the top of the form, click the link for events or grouped events that are not mapped to rules.
Container Level 1: Select the container type from the list. This field appears according to the selection
in the CI type field.
6. Click Submit.
The settings and values are validated. If the validation is successful, the Event Rule list displays.
Otherwise, an error message displays, indicating where to find information to complete the configuration.
For example, if PostgreSQL DB [cmdb_ci_endpoint_postgresql] is specified for the CI type and
insufficient attributes were specified, then a message, similar to the following, displays.
Missing information for the CI type: cmdb_ci_endpoint_postgresql.
For more details, use the CI rule: PostgreSQL DB
In this case, in the message, click the PostgreSQL DB link. The Identifier displays, showing that the
criterion attributes are port,ip_address,host,instance,host_name.
Ignore an event
You can configure an event rule to ignore extraneous events and prevent alert generation.
Role required: evt_mgmt_admin
Even if an event is ignored, it is still recorded in the Event [em_event] table.
1. Navigate to Event Management > Rules > Event Rules.
2. Create or open an event rule.
3. Select the Ignore event check box.
4. Click Submit or Update.
To populate custom alert fields with data contained in the Additional information field of the event, see
Custom alert fields on page 931. To populate custom alert fields, use the custom field name in the
Additional information field. You can also use Event Rules for this purpose. Values in the Additional
information field of an event that are not in JSON key/value format are normalized to JSON format when
the event is processed.
Depending on permissions, you may only be able to create fields with the user_ prefix. If so, use Event
Rules to create an Additional information field with the same name. To prevent certain fields from being
copied to the alert, use the evt_mgmt.alert_black_list_fields property and add the field names that must
be excluded (see the sketch after the following list). By default, the fields that are not copied are:
• message_key
• category
• additional_info
• sys_updated_on
• sys_updated_by
• sys_created_by
• sys_created_on
• sys_mod_count
• sys_id
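For example, a background-script sketch that appends a field to the exclusion list (user_internal_notes is
a made-up field name used only for illustration):

// Append a hypothetical field name to the alert copy exclusion property.
var current = gs.getProperty('evt_mgmt.alert_black_list_fields', '');
gs.setProperty('evt_mgmt.alert_black_list_fields',
    current + ',user_internal_notes');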
Note: Threshold metric can be the name of any numeric field in the Additional information field
of the event. Therefore, if cpu is an additional information field for a specific event, then cpu can be
used as a Threshold metric.
Assume you want to generate an alert when CPU utilization reaches or exceeds 80% and there is a
period of 20 seconds between events. Create an event rule with these settings (an explanation for each
value is given in parentheses):
• Threshold metric: cpu (events regarding high CPU usage)
• Create alert operator: => (operator to determine whether the Threshold metric reaches or exceeds
the specified value)
• *: 80 (percent)
• Occurs: 3 (three events occur where the cpu usage is equal to or above "=>" 80%)
• Over: 20 (no more than twenty seconds between each event)
To demonstrate how the above settings are evaluated, assume that the following events are received:
First scenario
Reported elapsed time and the cpu usage for each event:
• First event elapse time 20, cpu=85
• Second event elapse time 40, cpu=80
• Third event elapse time 60, cpu=70
In this scenario, no alert is generated since one event has a CPU utilization that is under 80%.
Second scenario
Reported elapsed time and the cpu usage for each event:
• First event elapse time 20, cpu=85
In this scenario, an alert is not generated since the elapsed time in one event is over the specified 20
seconds.
Third scenario
Reported elapsed time and the cpu usage for each event:
• First event: elapsed time 20, cpu=85
• Second event: elapsed time 40, cpu=95
• Third event: elapsed time 60, cpu=90
In this scenario, an alert is generated since in all events the elapsed time is within the specified time and
the cpu usage is over 80%.
Note: When configuring an event rule to create or close alerts according to a threshold, events
that arrive at the same second, as determined by the time_of_event field, are skipped as they are
considered to be duplicates.
To create an alert when a specific event occurs 5 times in 10 minutes, in the Threshold
tab, specify the following properties:
1. Select the Active check box.
2. In the Threshold metric field, specify the name of any additional information field that
exists in the event. The value of the field is irrelevant.
3. In the Create Alert Operator field, select Count.
4. In the Occurs field, specify 5.
5. In the Over field, specify 600 (10 * 60 seconds).
6. Click Submit or Update.
The event can contain the binding process flow in its Processing Notes field. For example, while binding
the alert to a process, the following steps occurred:
• The node, which is found by node name, resolved to the CI ID 775cbc6093120200e0e931f6357ffb17.
• There was no CI type on the event and the sa_process_name variable was set to /ora92/
pmon_U2P3_db.
• Event Management tried to match the CI ID to the process pattern from the em_binding_process_map
table.
• Event Management found the CI ID 6d6cbc6093120200e0e931f6357ffb78: pattern ${oracle_home}/
pmon_${instance} (from em_binding_process_map) and matched it to the /ora92/pmon_U2P3_db
pattern that was defined in the sa_process_name variable.
• The alert was bound to CI ID 6d6cbc6093120200e0e931f6357ffb78.
Note: In this section, the phrase "populate event field XXX with value YYY" means sending
the field value from the event source, or using event rules to populate the field (assuming this
information is contained somewhere in the event).
Binding to an application
If the event is specific to an application type, use the following steps to bind alerts to a specific application:
• Use the recommendations in Binding to a host computer, switch, or router CI on page 937.
• Create an event rule with a filter that captures events on the application type you want.
• In the event rule, select the Transform check box and populate the CI Type with the CI type you want
to bind.
• In the binding process, after the host is found, the algorithm matches all additional_info attributes that
have the same name as CI fields for that CI type. If the match is successful, the event is bound to the
CI.
• If more than one matching application is found on the host, the alert is bound to the host and not the
application.
If there is no CI type (for example, you want to bind alerts to a SQL Server application when the CPU on
sqlServer.exe is over 90%), use these steps instead:
• Use the recommendations in Binding to a host computer, switch, or router CI on page 937.
• Create an event rule with an event match field that maps the process name to the mapping variable
sa_process_name. In this case, do not use the CI type.
• Use the Process to CI Type Mapping module to add custom patterns.
Binding to a non-host CI
If the event is specific to a non-host CI, for example a business service, manual Service, or alert group, use
the following steps to bind alerts to a non-host CI:
• Leave the Node field empty.
• Populate the CI Type with the CI type you want to bind.
• Make sure that the additional_info field has enough information to uniquely identify the CI. The
algorithm matches all additional_info attributes that have the same name as CI fields for that event
type. If the match is successful, the event is bound to the CI.
Optional method:
• Leave the Node field empty.
• Populate the CI Identifier field as above with attributes that will uniquely identify the CI.
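As an illustration, a minimal sketch of creating a non-host-binding event as described above from a server-side script, using the Event [em_event] table mentioned earlier; the source name, the service name, and the assumption that the event's type field carries the CI type are all hypothetical:

var ev = new GlideRecord('em_event');
ev.initialize();
ev.setValue('source', 'MyMonitor');       // hypothetical monitoring source
ev.setValue('node', '');                  // Node is left empty for non-host binding
ev.setValue('type', 'cmdb_ci_service');   // CI type to bind (an assumption)
// additional_info must uniquely identify the CI; "name" is used here as an example attribute
ev.setValue('additional_info', JSON.stringify({'name': 'Online Banking'}));
ev.insert();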
Binding to a host computer, switch, or router CI
Use the following steps to bind alerts to a host, such as a computer, switch, or router CI:
• Populate the Node field in the event with the CI name, FQDN, IP address, or MAC address value. The
bind is successful even if the host has more than one IP address or MAC address.
• If you want to use a unique identifier that is not one of the four mentioned above, populate the event
rule CI Identifier field with one or more unique identifiers of the CI. This field must be in JSON format.
For example, if the host CI is a VMware VM and it has a field called MOID, specify:
{"moid":"<CI moid>"}
Binding to a process
You can define patterns of processes that match the sa_process_name variable. The
sa_process_name variable must be filled with the process name using an event rule. The Process to CI
Type Mapping table is used for process binding. The table includes default patterns. The pattern variables
are extracted from this table according to the host-related CI type. The table contains these columns for
process binding:
• The Process field contains the process pattern. Use any combination of free text, regex, and pattern
variables.
• The ci_type field contains the CI type in the CMDB.
When binding an alert to a process, Event Management first retrieves all the processes that are running
on the node (the alert resolving node). Then, Event Management matches every related process pattern
with the name that is defined in the sa_process_name variable. Regular expressions are supported.
For example, if the sa_process_name contains /ora92/pmon_U2P3_db for an Oracle instance:
• The sa_process_name variable must be filled by the event rule to store the process name.
• The Process to CI Type Mapping table has the following record:
Process: ${oracle_home}/pmon_${instance}
CI type: cmdb_ci_db_ora_instance
• When binding the process, Event Management finds that a process of type cmdb_ci_db_ora_instance
is running on the node.
• Event Management matches /ora92/pmon_U2P3_db from the sa_process_name variable with the
process pattern from the Process to CI Type Mapping table for type cmdb_ci_db_ora_instance.
• If the Oracle instance has ${oracle_home} = ora92 and ${instance} = U2P3_db extracted from the
database, then there is a match and this Oracle process is bound in the alert cmdb_ci:
/ora92/pmon_U2P3_db matches ${oracle_home}/pmon_${instance}.
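A minimal sketch of adding such a mapping record from a script, assuming the underlying column names process and ci_type correspond to the Process and ci_type fields described above:

var map = new GlideRecord('em_binding_process_map');
map.initialize();
map.setValue('process', '${oracle_home}/pmon_${instance}');  // process pattern with variables
map.setValue('ci_type', 'cmdb_ci_db_ora_instance');          // CI type to look for in the CMDB
map.insert();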
Binding to a device
The Process to CI Type Mapping table provides all of the necessary information for binding the alert to
device CIs. The table can be extended to support more device types in the future. The Process to CI Type
Mapping table contains these columns for device binding:
• The from_ci_type field matches with the Event Type.
• The to_ci_type field contains the CI Type to look for in the CMDB.
• The mapped_column field contains the name of the column which stores the relationship to the node.
To bind a device, use the Process to CI Type Mapping [em_binding_process_map] table. For example,
the table below shows a record in the Process to CI Type Mapping table. If the Event Type is
cmdb_ci_file_system, Event Management searches the File System [cmdb_ci_file_system] device table
for a record whose computer field contains the node sys_id. If there is a match, the alert binds with this CI.
Table field Value
from_ci_type cmdb_ci_file_system
to_ci_type cmdb_ci_network_adapter
mapped_column computer
In the binding process, after the host is found, the algorithm matches all additional_info attributes that
have the same name as CI fields for that event type. If the match is successful, the event is bound to the
CI. If more than one matching device is found on the host, the alert is bound to the host and not the
application.
Use the following steps to bind alerts to a specific device:
• Use the recommendations in Binding to a host computer, switch, or router CI on page 937.
• Create an event rule with a filter that captures events on the device type you want. In the event rule,
select the Transform check box and select the appropriate CI Type. For example, if you build a filter
condition for a port, select Port.
• To bind custom device types, use the Process to CI Type Mapping [em_binding_process_map] table.
5. In the Event Match Fields section, specify any Additional info mappings.
6. Click Submit.
7. To bind to custom device types, configure the alert binding to a process and add the custom device
type to the Process to CI Type Mapping [em_binding_process_map] table.
Optional method:
• Leave the Node field empty.
• Populate the CI Identifier field as above with attributes that will uniquely identify the CI.
Field Value
Field: node
Regular Expression: (.*)
Mapping: temp node
7. In the Event Compose Fields section, insert a new row with these parameter values:
Field Value
Field: node
Composition: (empty)
8. Click Submit.
9. Navigate to Event Management > Rules > Event Field Mapping.
10. Create the corresponding event field mapping with these parameter values:
Field Value
Source: Specify the event monitor software that generated the event.
Mapping type: Select Single field.
From field: Specify temp node.
To field: Specify name.
11. In the Event Mapping Pairs section, insert new rows for each event mapping pair.
a) Set the Key with the URL or an IP address with corresponding port value.
b) Set the Value with the business service, manual service, technical service, or alert group name.
For example, you can add an event mapping pair for each business service.
You can assign custom-developed business process applications to CI types. The following default
CI Types automatically bind to application process names during alert generation. The CI Types have
global domain availability. The Process to CI Type Mapping [em_binding_process_map] table stores the
mappings for the binding process.
Note: License usage is based on node and domain. For example, where alert Alert0021744
from node tech.mgmt, is bound to the WebServer CI1, and another alert, Alert0012979, from a
different domain, is bound to the WebServer CI2, then the license usage count is 2.
Note: If more than one alert rule can apply to an alert and the alert rules resolve the alert with the
same action, then only one alert rule is applied, according to order. However, if each of the rules
resolves the alert with a different action, then each of these rules applies. For example, if one alert
rule creates a KB article and another alert rule creates an incident, then both alert rules are applied.
Field Description
Remediation tab
Enable remediation: The check box that enables remediation with an Orchestration workflow.
Execution: Whether the workflow selected in the Orchestration workflow field is invoked automatically or
can be invoked manually by users.
Orchestration workflow: The remediation workflow that runs if the Enable remediation check box is
selected.
Field Description
Launcher tab
Enable: The check box that enables launching web-based applications from the Alert Console or
dashboard alert panel.
Display Name: A descriptive name for the window that appears when users launch the application.
URL: A dynamic URL that uses specified fields in the alert, including the Source and Additional
Information fields. For example, the values in these fields in the alert replace the parameters in the URL:
http://${source}.com/${my_application}. In an alert, the value in the Source field is used for the ${source}
section of the URL. The Additional Information field contains JSON code with the value of
${my_application}, such as {'my_application':'application_name'}.
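For instance, if the Source field of an alert is monitoring and its Additional Information contains {'my_application':'appdash'}, the URL above would resolve to http://monitoring.com/appdash (the source and application values in this illustration are hypothetical).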
Assign a task template to an alert rule to populate fields in the generated alert with predetermined
information. The information in the task template fields populates the fields of the resultant alert. This
information is fixed and repeated each time the template is used.
Configure a custom script to populate fields in the generated alert with dynamic information. The
resultant alert shows information that is current, and this information can change each time that an alert
is generated.
You can optionally both assign a task template to an alert rule and use a custom script that inserts
dynamic values into the alert fields. In this way, certain fixed information is displayed along with
information that changes according to the specified requirements. When both a task template and a script
are used, the task template runs first and then the script runs.
Assign a task template from an alert rule
When configuring or modifying an alert rule, you can assign a task template from the template library or
create a task template.
Role required: evt_mgmt_admin
Task templates that you create are added to the task template library. Within an alert rule, you can assign
an existing task template from the template library to the rule, or you can create a new task template. In the
latter case, relevant fields defined in the alert rule automatically populate the corresponding fields in the
task template with values.
1. Navigate to Event Management > Rules > Alert Rules.
2. Click the existing alert rule that you want to modify or click New.
3. Fill in the fields, as required. For more information, see Create or edit an alert rule on page 944.
4. In the Action tab:
a) In the Task template field, click the search icon.
b) In the Task Templates screen, either select an existing task template from the list or click New.
If you clicked New, the User and Table fields are automatically populated with values from the
existing alert rule. The Active option is also automatically selected.
c) In the Short description field, enter the description of the template.
d) In the Template field, specify the field and its corresponding value to add to the task. Repeat this,
as required. This content automatically populates records based on this template.
e) Click Submit. The new task template is automatically assigned as the value for the Task
template field of the alert rule.
5. Click Submit.
Assume that there is an alert rule named InfoTest. This example describes how to create a
task template from within the InfoTest alert rule.
1. Navigate to Event Management > Rules > Alert Rules.
2. Click the InfoTest alert rule.
3. In the Action tab:
• Select Auto open.
• In the Type field, select Incident.
• In the Task template field, click the search icon.
• In the Task Templates screen, click New. The User and Table fields are
automatically populated with values from the existing alert rule. The Active option
is also automatically selected.
• In the Short description field, enter the subject line that must appear in the email
notification.
• In the Template field, specify the required fields and corresponding values.
• Click Submit. The new task template is automatically selected for the Task
template field.
4. Click Submit.
Assume that the current resource type must appear in the Short description field of the
alert. Enter this script in the EvtMgmtCustomIncidentPopulator placeholder:
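A minimal sketch of such a script, assuming the alert's Additional information carries a resource_type attribute and that the populator receives the alert and incident records; both the entry-point signature and the attribute name here are assumptions, not the documented interface:

(function(alertGr, incidentGr) {
    var info = {};
    try {
        info = JSON.parse(alertGr.getValue('additional_info') || '{}');
    } catch (e) {
        // Leave info empty if additional_info is not valid JSON.
    }
    // Prefix the short description with the hypothetical resource_type attribute.
    if (info.resource_type)
        incidentGr.setValue('short_description',
            info.resource_type + ': ' + incidentGr.getValue('short_description'));
})(alert, incident);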
8. To specify the time interval and conditions for automatically opening an alert:
a) Click Schedule.
b) Fill in the fields as appropriate, and then click Submit.
The new template is populated in the Overwrite alert template field on the Alert rule form.
The new event occurs before the active interval elapses for the existing alert: The alert is updated with
the event information. The event is identified as a recurrence of an existing issue and its severity is
copied to the alert. If the alert was closed before the active interval elapses, the alert is reopened and
updated with the event information.
The new event occurs after the active interval elapses for the existing alert: A new alert is created
because the event is identified as a new issue. The event information is added to the new alert.
Change the active interval by editing the value of the Active interval (in seconds), within which a new
event reopens a closed alert property.
1. Navigate to Event Management > Settings > Properties.
2. Change the number of seconds for the Active interval (in seconds), within which a new event
reopens a closed alert property.
3. Click Save.
• After a background job, which runs every five minutes, verifies that no alert updates occurred and that
the flap_quiet_interval time has elapsed. If there are no updates, the alert shows the previous state
value that it had before the background job ran.
3. Click Save.
You can monitor the Event Management > All Alerts list for alerts that are in the flapping state. For
details, see View alerts in the flapping state on page 976.
Note: When running maintenance rules, the cmdb_ci status of matching CIs is not changed.
However, matching CIs are flagged in the em_impact_maint_ci table by these rules and this status
is considered for impact and alert calculations.
// These properties control the two maintenance criteria described in the note
// below: scheduled change requests and the In Maintenance CI status.
gs.setProperty('evt_mgmt.impact_maintenance.use_change_request', 'false');
gs.setProperty('evt_mgmt.impact_maintenance.use_in_maintenance', 'false');
4. Click Submit.
Note: Starting with the Helsinki release, CIs are considered to be in maintenance not only
when an active change request is scheduled, but also when the Status field of the CI is set to In
Maintenance. In both cases, the CI is excluded from impact calculation and alerts for the CI do
not appear in the Alert console.
Note: When a child CI is put in maintenance, it also puts the parent CI in maintenance.
If there is a connection between services, the impact of one service on the other is also calculated.
Figure 175: Impact calculations use information from various sources to set the alert severity
Impact calculation varies depending on the CI relationships for a business service or manual service.
Additional factors, such as change requests, network paths, storage paths, and related CIs, all affect
impact calculation.
You can learn about the impact tree from the following video tutorial.
Change requests and the In Maintenance status: If an active change request is scheduled for the CI, or if
the Status of the CI is In Maintenance, all alerts on the affected CI are excluded from impact calculation.
The Alerts tab and Alert console also temporarily hide all corresponding alerts. The impact tree shows the
CI in green with a note of (In Maintenance). The impact tree and the business service map temporarily
show CIs in green.
These properties are available in the Event Management > Settings > Properties module.
Property Description
Impact rules
Impact rules, which are used for impact calculation, estimate the magnitude or severity of an outage based
on affected CIs. The Impact Rule [em_impact_rule] table contains impact rules that show the applicable
CIs, business services, and settings for impact. The following default impact rules are available:
Review the changes on the impact tree. For example, if you changed a number or percentage influence
for child CIs, verify that the impact tree updates accordingly.
For example, based on the cmdb_ci_vm_zones Infrastructure relationship definition, Event Management
adds ZoneServer@mmp1 to the business service. The Containment rule manages impact severity on
alerts.
Field Description
Child Type: The table that contains data about the child entity.
Parent Type: The table that contains data about the parent entity.
Relation Type: The relationship between the child and parent entities.
4. Click Submit.
Alert management
As alerts generate, you can view more information about them, acknowledge them, and take action to
resolve them. You can also manually create alerts to track issues that did not generate an event or alert.
You can respond to an alert in the following ways:
• Manually remediate the alert.
• Acknowledge an alert that requires attention.
• Create an incident or security incident.
• Close the alert.
• Resolve any incident that is related to the alert.
• Reopen the alert.
Note: Business rules that are written for alert tables [em_alert] must be highly efficient or they may
result in performance degradation.
Option Description
Close the alert: Click Close. For more information, see Close an Event Management alert on page 983.
For example, you can respond to an alert by rebooting a problematic server. After no events are generated
for several minutes, it is assumed that the issue is fixed and the alert is closed. If the reboot did not actually
fix the issue, this server can generate more events later. Then additional alerts are generated for the same
issue.
1. Navigate to Event Management > All Alerts.
2. Click the number of an alert that is in the Flapping state.
3. On the alert, click the Flapping tab.
Field Description
Flapping tab
Flap count: The number of times the alert has flapped (fluctuated between a closed and a non-closed
state) within the flap interval since the start time in the Flap start window.
Flap start window: The initial start time for measuring the flapping occurrences.
Flap last update time: The last time flapping occurred. This time is the ServiceNow processing time, not
the source system time.
Flap last state: The state of the alert before it entered the flapping state.
Note: If a maintenance flag was set on an alert in Geneva, after upgrading to Helsinki it affects the
impact as a regular alert. However, it is seen in the alert console only when a new event arrives.
Note: If the Maintenance field is not visible on the Alert form, personalize the list to add the
field. For more information, see Personal lists.
To put multiple alerts into maintenance: Select the check boxes next to each alert, and then click the
Maintenance UI action at the top of the list.
To put one alert into maintenance: Open the alert, and either select the Maintenance check box and click
Update, or click the Maintenance UI action at the top of the list.
You can choose to work either in a compact view of Connect that overlays the standard user interface or
in a full-screen workspace. The conversations are not copied into the alert.
1. Navigate to Event Management > All Alerts.
2. Click the required alert.
3. In the alert screen, click Follow.
The label of the button changes to Following, showing the current connection to collaboration status.
4. To view the Connect sidebar in a full window, click the down arrow in the Follow button and select
Open Connect Full.
You can write text in the Worknote field. This text is not copied into the alert.
5. To collaborate with colleagues or with support, click the Add User icon.
6. To conclude the Connect collaboration session, click Following.
4. Click Submit.
Note: If Security Incident Response is activated, the base system includes an alert rule called
Create security incidents for critical alerts. This alert rule creates security incidents when critical
security events are reported.
4. Click Update.
The created incident appears in the Task field of the Alert form.
Note: If one or more of the alert rule parameters cannot be resolved, the related application
name appears in black. When the alert parameters are resolved, the application name becomes
a link.
3. Click the name of an application to open the URL in another tab or window in your browser.
Example
A common use case is launch in context to the source management system. Other examples include to
search in knowledge bases, not only within ServiceNow, but externally as well. Any URL-based action can
utilize the alert parameters and the URLs can refer to wikis, messaging services, REST APIs, and so on.
Reopen an alert
Additional events can cause reopening of alerts, or you can reopen an alert by changing its state. When an
alert reopens, any associated incidents can also be updated or reopened according to the incident state
and the evt_mgmt.alert_reopens_incident property.
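For example, a sketch of controlling this behavior from a background script, using the property named above (the value shown is an assumption about the desired behavior, not a documented default):

// Allow a reopened alert to also reopen its associated incident.
gs.setProperty('evt_mgmt.alert_reopens_incident', 'true');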
Alert remediation
Use alert rules to associate specific alerts with a remediation workflow. You can apply a workflow to
alerts manually. Alternatively, you can configure an alert rule that executes a workflow automatically for
alerts that match the rule criteria. The workflow can also be scripted to request user approval before
executing subsequent workflow tasks. The workflow is paused for user approval, even if it was
automatically triggered.
CI remediation
Remediation tasks
When a remediation workflow is executed, a remediation task is created to capture details such as the time
that the workflow started to run, the alert that triggered the remediation workflow (if relevant), and the CI
that was remediated. View these tasks to track remediation activities in the organization.
Apply alert remediation
If an alert meets the criteria of an alert rule that enables manual remediation, you can manually apply
remediation to resolve the alert.
Ensure that the alert matches an alert rule that enables manual remediation.
Role required: evt_mgmt_admin
When an alert is initially created, you can select which remediation to run from a list of all applicable
remediations.
1. Find the alert that you want to remediate.
Option Description
From the Event Management dashboard Navigate to Event Management > Dashboard.
You have these options for selecting an alert:
• Select an alert from the list at the bottom of
the dashboard.
• Click a business service tile and select an
alert from the list at the bottom of the service
map.
• Click a technical service tile and select an
alert from the technical service object list, or
select an alert from the Alerts tab.
After you finish configuring the workflow, make sure that you publish it. For more details, see Create a
workflow.
Role required: evt_mgmt_admin
A CI remediation rule associates a set of CIs that might experience problems with a remediation workflow.
The remediation workflow can either resolve the underlying problem or help troubleshoot the problem that
generated the alert. For example, you can proactively configure a workflow with the appropriate response
actions for predictable alerts. For more details, see the Event Management Remediation short video.
1. Navigate to Event Management > Rules > CI Remediations.
2. Click New, or select a CI remediation to edit.
3. Fill in the fields, and click Submit or Update.
In service maps that are opened from the Event Management dashboard, this remediation can be applied
to any CIs that match the filter conditions. For more information, see Apply CI remediation on page 986.
Apply CI remediation
You can apply remediation to a CI that matches the conditions in a CI remediation rule. The specified
remediation workflow is then executed to perform tasks that help resolve or troubleshoot a problem that the
CI is experiencing.
Ensure that the CI matches a CI remediation rule.
Role required: evt_mgmt_admin
1. Navigate to Event Management > Dashboard.
2. Double-click the tile of the service that includes the CI.
3. On the service map, right-click the CI and select Remediation options.
4. In the Remediation options dialog box, select the remediation that you want to apply and click Run.
The remediation list includes remediations from CI remediation rules whose filter conditions the CI
matches, and remediations configured in alert rules whose filter is matched by alerts that are associated
with the CI.
Alert monitoring
Alert generation and remediation can be monitored from the Event Management dashboards, alert
console, and alert timeline.
• The Alerts Console shows incoming alerts for discovered business services, manual services,
technical services, alert groups, and correlated alert groups.
• The Event Management Dashboard displays the health of services. It consolidates alert information
about discovered business services, manual services, technical services, and alert groups. Additional
information is available on the business service map, impact tree, and timeline. If Service Analytics is
activated, additional information about automated alert groups is also available.
• The Overview dashboard displays reports on active alerts. You can drill down to individual alerts from
these reports.
Each tile represents the highest severity of an alert for the service, alert group, or CI. For example, a tile
appears in red if an alert is critical. A green tile indicates either information alerts or no currently active
alerts. As the alert severities change, the dashboard updates accordingly. The following severities are
available:
• Red: Critical severity (highest severity). Immediate action is required. The resource is either not
functional or critical problems are imminent.
• Orange: Major severity. Major functionality is severely impaired or performance has degraded.
• Yellow: Minor severity. Partial, non-critical loss of functionality or performance degradation occurred.
• Blue: Warning severity (lowest severity). Attention is required, even though the resource is still
functional.
• Green: Informational. No severity. An alert is created. The resource is still functional.
Show or hide alerts by severity: In the dashboard header, click the icon of the lowest severity that you
want to view in the dashboard. You can also move the slider to the left of the icon. The dashboard
displays only the severities that are to the right of the slider. For example, move the slider to the left of
the red icon if you want to view only alerts with critical severity.
Task Action
View the properties of an alert: Click Alerts or Correlated Alerts, then double-click the required alert
number.
View alerts from another service or group: In the dashboard header, click the down arrow, and then
select the service or group name.
Show active alerts for a group: In the dashboard header, click Service. Then click the group name and
review the alerts listed for it under the Alerts tab.
Show alert CI relationships or bindings in a business service map: On the dashboard, double-click a tile
for the service. For more information on the business service map, see View alert impact on CIs in a
business service map on page 995.
Show an aggregation of correlated alerts for services and alert groups: Below the dashboard tiles, click
Correlated Alert.
Remediate an alert: Under the Alerts tab, right-click the alert and then select Run remediation.
4. To reduce the number of alerts that display in the alerts panel, enable Correlated Alerts.
The alerts panel displays alert groups and alerts that have not been grouped.
Note: The Correlated Alerts toggle also displays in the Alerts Console. When either the
Alerts Console or the dashboard is opened, the Correlated Alerts toggle opens in the setting
that was last selected.
You can filter business services and save the customized dashboard view as a favorite link.
1. Navigate to Event Management > Settings > Dashboard Views.
2. In the Dashboard Views screen, click New.
3. In the Name field, enter a descriptive name for the dashboard view.
4. Select Active.
5. Specify the filter conditions and click Submit.
6. Click the options menu in the title next to Dashboard Views.
7. In the context menu, click Create Favorite.
8. In the Create Favorite screen, the name that you specified for the view is displayed. You can
optionally edit the name. Select a symbol and its color to display to the left of the dashboard view
name.
9. Click Done. The dashboard view name and its symbol appear in the navigation tree.
10. In the navigation tree, click the required dashboard view name to display the customized dashboard.
If ITOM Metric Management is activated, you can right-click an alert and click View Metrics to open the
integrated Metrics Explorer and Dependency Views map for the CI that is associated with the alert.
Monitor alerts for a manual service
If you want to view information for only manual services, navigate to the Manual Services list. From this list,
you can open business service maps to view and manage alerts for the CIs in each service.
Role required: evt_mgmt_admin, evt_mgmt_operator, or evt_mgmt_user
1. Navigate to Event Management > Manual Services.
2. Click View Map.
3. Do one or more of these actions:
Option Action
Change the map display, map layout, or map indicators: In the business service map header:
1. Click the menu icon.
2. Configure the appropriate settings.
Under each month, an indicator displays the number and severity of alerts in a period.
3. Use the following keys to move forward and backward through the timeline:
A business service map shows alerts with impacted CIs and CI interdependencies. For example, changes
to a connection between a host and hypervisor appear on the business service map. As the business
service map definition or the statuses of alerts change, the business service map, alert, and impact
information updates accordingly. You can open a business service map from these places:
• From the Manual Services list, you can view business service maps for manual services.
• From the Event Management dashboard, you can view business service maps for discovered services
and manual services.
The following icons are used in business service maps. The icon shapes are slightly different for manual
services.
1. Open the business service map for the service from either the Event Management dashboard or the
Manual Services list.
Option Description
From the Manual Services list:
1. Navigate to Event Management > Manual Services.
2. Click View Map next to the service.
View alerts for a CI by type and severity: In the business service map:
1. Click a CI tile.
2. Below the map, click the Alerts tab and review the listed alerts.
Show alert details for networks or storage for a business service: In the business service map:
1. Right-click a path between CIs.
2. Select Show network path or Show storage path.
Display additional information for CIs: In the business service map header:
1. Click the menu icon.
2. Turn on the Map Indicators for the additional information that you want to view.
You can view the discovered services history by business service. The discovered services history is
not available for manual services. The discovered services history appears only when there are multiple
discovered services that occur over a period. A discovered service is a business service that is discovered
by Service Mapping. If no discovered services are generated for a particular business service, the
discovered services history is hidden. The color corresponds to the discovered service severity, and the
length of the bar in each color corresponds to how long the discovered service stayed at that severity.
1. Open the business service map for the service from either the Event Management dashboard or the
Manual Services list.
Option Description
From the Manual Services list:
1. Navigate to Event Management > Manual Services.
2. Click View Map next to the service.
Note: To customize how alerts work with CIs in maintenance, see Create maintenance rules on
page 952.
You can learn about how alerts work with CIs from the following video tutorial.
How CIs in Maintenance appear in Event Management    Description and the optimal time to resolve the alert
Note: The Event Management Overview dashboard does not display information on alerts for CIs
that are in maintenance. If you want to view the status of alerts for CIs in maintenance, see the
Event Management > All Alerts list.
By default, the Overview dashboard displays these reports for active alerts:
Report Description
Top Configuration Items with the most Alerts The CIs with the most alerts by severity.
Alert State The state of any open alerts.
Alert Incidents by Severity The number of incidents associated with any
open alerts. If there are no incidents associated
with alerts, this report is empty.
Services Affected by Alerts The number of alerts by business service.
Task Action
See the information represented by a chart segment: Point your cursor to the segment to see a tooltip
with the details.
See a list of alerts with the severity represented by a chart segment: Click the chart segment. A list of the
alerts opens. The color of the segment represents the alert severity:
• Critical (red): Immediate action is required. The resource is either not functional or critical problems
are imminent.
• Major (orange): Major functionality is severely impaired or performance has degraded.
• Minor (yellow): Partial, non-critical loss of functionality or performance degradation occurred.
• Warning (blue): Attention is required, even though the resource is still functional.
• Info (green): An alert is created. The resource is still functional.
Save the chart as an image file: If a menu icon appears when you point your cursor to a chart, you can
click the icon to export the chart to an image file.
Rule-based (R): Related alerts that have been grouped according to compliance with alert correlation
rules. Alert correlation rules are used to group alerts that are related. See Create an alert correlation rule
on page 1009.
Automated (A): Groups that are correlated automatically by Service Analytics. A virtual alert is added to
the group as the primary alert of the group. See Service Analytics automated alert groups on page 1019.
CMDB (C): Groups based on CI relationships in the CMDB, for CIs without historical data that could have
been used to group alerts. See CMDB alert groups on page 1021.
Double-click the Group column for an alert group to open its Grouped Alerts dialog box, where you can:
• Display all alerts in the group.
• Provide feedback about the usefulness of the group.
• Manually add or remove alerts from the group.
On the Alert form, verify that the newly added alerts appear in the Alerts tab.
For example, if a server in your organization goes offline, you do not need to see additional alerts that
show that the virtual machines or applications running on the server are also down. Instead, these
additional alerts, or secondary alerts, are grouped under the root alert called the primary alert. You can see
these grouped alerts in the alert console.
Note: Service Analytics provides alert groups for an automated correlation of events and alerts.
Alert correlation rules give you full manual control of alert correlation. For more information on the
Service Analytics feature, see Service Analytics automated alert groups on page 1019.
The purpose of specifying secondary alerts is to identify which alerts to suppress, so you can reduce the
amount of alert noise and focus on the primary alert. You are able to see both types of alerts, but the
secondary alerts are grouped under the primary alert in the alert console view.
Figure 181: Correlated alerts grouped under the primary alert (Alerts Console)
Primary and secondary alert rules are simply filters that specify which kinds of alerts are classified as
primary and which kinds are classified as secondary. The filter criteria runs against the Alert [em_alert]
table.
An alert can belong to more than one group, but only one level of secondary alerts is allowed under a
primary alert. For example, if secondary alert A is grouped under secondary alert B, both alerts become
secondary alerts under the same primary alert. See alert hierarchy for more information.
Alert relationships
Alert correlation rules enable you to specify what type of relationship the primary and secondary alerts
must have for the rule to match. These relationships depend on the relationships between the CIs in the
CMDB:
• None: Ignores the relationship between the primary and secondary alerts.
• Same CI: Both alerts need to be related with the same CI. If the CI field is blank, then the alerts need to
have the same Node value.
• Parent to Child: The relationship between the primary and secondary alert is a parent-child CI
relationship in the CMDB (in the CI Relationships table [cmdb_rel_ci]).
• Child to Parent: The relationship between the primary and secondary alert is a child-parent CI
relationship in the CMDB (in the CI Relationships table [cmdb_rel_ci]).
Alert correlation rules run when an alert is created or reopened. The alert is checked to determine
whether it matches a rule as a primary or a secondary alert.
When the primary alert severity is changed to Closed, the secondary alerts are also set to Closed unless
they are part of other groups where the primary alert is not yet set to Closed.
Alert hierarchy
Only one level of secondary alerts is permitted. In situations where a secondary alert has its own
secondary alert, the Event Management application flattens the hierarchy to preserve only two levels.
For example, assume alert A is the primary alert and alert B is the secondary alert. If alert C becomes a
secondary alert for alert B, the application flattens the hierarchy so that A remains the primary and B and C
become sibling secondary alerts, one level below A.
As another example, assume that there are three correlation rules that produce the following results:
• Rule 1 (with an Order value of 1): B becomes a primary alert for A.
• Rule 2 (with an Order value of 2): A becomes a primary alert for C and D.
• Rule 3 (with an Order value of 3): E becomes a primary alert for A.
When alerts B, C, D, and E are triggered, they all appear in the console separately because there are no
correlations between them.
When alert A is triggered:
1. Rule 1 makes A a secondary alert under alert B.
2. Rule 2 makes both C and D secondary alerts under alert A.
3. Rule 3 makes A a secondary alert under E, giving alert A two primary alerts above it in the hierarchy.
Therefore, if all alerts are triggered, only B and E appear in the alert console indicated as primary alerts.
Note: A secondary alert can be correlated with more than one primary alert, and vice versa.
3. Click the primary alert number to open the record for the primary alert.
The Correlated Alerts section shows a list of all the secondary alerts that are correlated with this
primary alert.
The following example correlation script returns, as primary alerts, all alerts whose message key differs
from that of the current alert:
(function findCorrelatedAlerts(currentAlert) {
    // Collect the sys_ids of all alerts with a different message key.
    var res = {
        'PRIMARY': []
    };
    var gr = new GlideRecord('em_alert');
    gr.addQuery('message_key', '!=', currentAlert.getValue('message_key'));
    gr.query();
    while (gr.next()) {
        res['PRIMARY'].push(gr.getUniqueValue());
    }
    // Return the result as a JSON string mapping the relationship type to alert sys_ids.
    return JSON.stringify(res);
})(currentAlert);
If you delete a correlation rule, the existing correlation groupings on the alert console are not removed.
Service Analytics
ServiceNow® Service Analytics enhances Event Management with alert data analysis and alert aggregation
for technical services, manual services, and alert groups. It also provides root cause analysis (RCA) for
business services discovered by Service Mapping and for manual services.
With Service Analytics, you can do the following:
• Aggregate alerts.
Correlate alerts according to timestamps and CI identification, creating automated alert groups. This
helps organize incoming real-time alerts and reduces alert noise. An operator can then easily identify
related issues across the data center, in the context of services.
• Apply RCA to identify the root cause CI and associated discovered business services or manual
services that the initial root alert was generated for.
Service Analytics identifies root cause CIs for discovered business services and for manual services,
and then aggregates similar alerts from the root cause CI and from its related CIs into an automated
alert group. An operator can drill down and determine which CIs and alerts are affecting discovered
business service and manual service health.
• Correlate alerts based on CI relationships in the CMDB, creating CMDB alert groups.
• Visualize the correlation between alerts, services, and root cause CIs for discovered business services
and for manual services in a service map.
Table Description
SA Analytics Alert Staging [sa_analytics_alert]: Staging table for alerts used for analytics.
SA RCA Service Configuration Item Association [sa_rca_svc_ci_assoc]: Associations between CIs and
services.
SA RCA SMC Rule Base [sa_rca_smc_rule_base]: Service Analytics (SA) Root Cause Analysis (RCA)
state model. Individual rules that are associated with an RCA configuration in the SA RCA SMC Config
Base [sa_rca_smc_config_base] table.
SA Alert Aggregation Learned Pattern Elements [sa_agg_pattern_element]: CI/Metric Name pairs
associated with learned patterns.
SA Alert Attribute Populator Status [sa_alert_attribute_populator_status]: State and statistics for the
attribute populator job.
Property Usage
Enable root cause analysis for business services (sa_analytics.rca_enabled): Enables RCA for alerts
associated with business services and manual services, to identify root cause CIs.
• Type: true | false
• Default value: false
• Location: Service Analytics > Properties
Time interval (in seconds) criteria for grouping alerts (sa_analytics.rca.learner_group_interval_secs):
Interval within which alerts must be created to be included in a group.
• Type: integer
• Default value: 60
• Range of possible values: 60-900
• Location: Service Analytics > Properties
Length of time period (in seconds) from which to include alerts for analysis
(sa_analytics.rca.learner_query_interval_secs): The interval that the learner uses to chunk the alert data
for processing.
• Type: integer
• Default value: 86400
• Range of possible values: 43200-86400
• Location: Service Analytics > Properties
Confidence score % threshold, above which correlated alert groups will be displayed in the Event
Management dashboard and Alerts Console (sa_analytics.rca.query_probability_threshold): The
confidence score that must be met by the identified root cause CI for the associated alerts to be
displayed.
• Type: integer
• Default value: 0
• Range of possible values: 0-100
• Location: Service Analytics > Properties
Purge staging tables (in days) (sa_analytics.rca.input_purge_days): Number of days that RCA input is
kept before it is purged.
• Type: integer
• Default value: 90
• Range of possible values: 30-180
• Location: Service Analytics > Properties
Generate event when MID Server file system usage space exceeds limit (0-100%)
(sa_analytics.rca.mid_max_allowed_space): When the file system of the MID Server exceeds the
percentage threshold, an event is generated.
• Type: integer
• Default value: 20
• Range of possible values: 0-100
• Location: Service Analytics > Properties
CMDB property to be used for grouping the alerts used in alert aggregation
(sa_analytics.agg.learner_group_by_property): A property from the cmdb_ci table that can be used to
group alerts by. The alert aggregation Learner then learns those alerts with respect to the grouping.
When left empty, the alerts are learned together without being grouped.
• Type: string
• Default value: none
• Other possible values:
• A column that is not a reference from the cmdb_ci table, such as po_number.
• A column that is a reference to another table, such as location.name (the name field in the Location
table, which is referenced from cmdb_ci).
Enable CMDB Correlation for Alert Aggregation (sa_analytics.agg.query_cmdb_correlation_enabled):
Enables alert correlation based on CMDB CIs and relationships.
• Type: true | false
• Default value: true
• Location: Service Analytics > Properties
Enable Mutual Information Graph Aggregator (sa_analytics.agg.mi_graph_enabled): Enables the Mutual
Information aggregator for the Alert Aggregation Learner.
• Type: true | false
• Default value: true
• Location: Service Analytics > Properties
Use customer feedback in forming new Alert Aggregation Groups
(sa_analytics.agg.query_customer_feedback_enabled): Incorporates customer feedback while forming
automated alert groups.
• Type: true | false
• Default value: false
• Location: Service Analytics > Properties
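For example, a sketch of setting the grouping property from a background script, using the location.name value suggested above as one of its possible values:

// Group alerts during learning by the name of the CI's location.
gs.setProperty('sa_analytics.agg.learner_group_by_property', 'location.name');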
Name Description
Service Analytics Purge Old Observation Data - Daily: Cleans the staging data.
Service Analytics Prepare RCA Learner Input Data - Daily: Prepares RCA input data. Stores and probes
the MID Server to learn statistical information about alerts.
Service Analytics group alerts using RCA/Alert Aggregation: Applies RCA and alert aggregation to open
alerts and prepares automated alert groups.
Service Analytics Alert Aggregation Learner - Daily: Learns information about existing alerts and groups
new open alerts.
Service Analytics RCA Configuration: Configures root cause analysis.
Service Analytics Check File System Space on Analytics MID - Daily: Checks disk usage on the
dedicated MID Server, and generates an event if it exceeds the threshold set in the
sa_analytics.rca.mid_max_allowed_space property.
Service Analytics Gather Value Report Data - Daily: Gathers data for the Value Report.
Service Analytics - Update virtual alerts for aggregation groups: Updates the virtual alerts that were
created to represent alert aggregation groups with any changes to alerts belonging to that group. Runs
every minute.
Service Analytics Attribute Populator for Historical Alerts: Populates attributes used in the feature
identifier for historical alert data, using event rules. Runs on demand.
The default attribute used for forming patterns is metric_name. Navigate to Service Analytics > Manage
Pattern Identifier to view which pattern identifier attributes are currently in effect, to choose a different set
of attributes to deploy, or to define a new set of pattern identifier attributes.
To ensure that the specified pattern identifier attributes used for forming patterns are effective, a sufficient
number of alerts must have the respective attributes populated. Therefore, if you specify a new set of
pattern identifier attributes, do the following to ensure meaningful analysis:
• Create an event rule that populates the respective attributes.
• If a large number of existing alerts do not have values for the new set of pattern identifier attributes,
run the Service Analytics Attribute Populator for Historical Alerts job, which uses the appropriate event
rule to populate attributes in historical alerts. Properties originating from the CMDB CI using
dot-walking are not populated.
• Choose effective identifiers:
• The set of pattern identifier attributes should not be too unique (for example, the date field is unique
for every alert), because it will be impossible to identify any pattern.
• The set of pattern identifier attributes should not be too common, because it will not be possible to
create distinct groups.
Only one set of pattern identifier attributes can be active at a time. A new set of pattern identifier
attributes is not automatically implemented until you deploy it. When you deploy a new set of attributes,
the current set of attributes that is in effect becomes inactive. Subsequent queries use the active pattern
identifier attributes to perform alert aggregation.
1. Navigate to Event Management and click Manage Pattern Identifier.
2. On the SA Alert Aggregation Pattern Attributes page click New.
3. Click the Feature Identifier Attributes icon and in the slush bucket, move attributes from the
Available list to the Selected list. You can select the Configuration Item class, and click the '+' sign to
display and select its attributes.
4. Click Submit.
5. Optionally, activate the newly specified pattern identifier attributes by selecting the pattern identifier
attributes that you want to activate and clicking Deploy.
After specifying a new set of pattern identifier attributes, ensure that a matching event rule exists that
populates the respective alert attributes. Also, run the Service Analytics Attribute Populator for Historical
Alerts job to populate the respective attributes of historical alerts, if values are missing in existing alerts.
Alert aggregation
Alerts are grouped based on the CI that is associated with the alerts. Service Analytics groups alerts that
are very similar, but not necessarily identical, and also based on how close in time the alerts were created.
Alerts for technical services, manual services, and alert groups are not associated with a service model
and do not undergo RCA. Other than being correlated by time and CI, these alerts are not necessarily
related by the same underlying problem.
Alert Aggregation Learner: An offline job that runs once a day to process past alerts. The Alert
Aggregation Learner identifies patterns of related alerts using a combination of pattern-based and
probabilistic techniques. If the sa_analytics.agg.learner_group_by_property property is set, then before
processing starts, the Alert Aggregation Learner groups alerts by the specified CMDB property.
Real Time Query: A scheduled job that runs every minute and updates alert aggregation groups. It tries
to match real-time alerts with alert patterns stored in the alert knowledge base.
Service Analytics applies root cause analysis (RCA) algorithms if one of the CIs in an automated alert
group belongs to a discovered business service or to a manual service, in order to identify root cause CIs.
For a discovered business service and for a manual service, an automated alert group contains alerts that
were generated by the root cause CIs and by related CIs.
In the list view, automated alert groups are noted by an icon in the Group column, and the
value is Automated.
b) Click Automated to drill down to an automated alert group.
c) Click the link in the Number column in the list view, to display the individual alerts within the
group. For details about the alert form, see View alert information on page 971.
2. To display automated alert groups that are associated with the services displayed on the Event
Management dashboard:
a) Navigate to Event Management > Dashboard.
b) Enable Correlated Alerts.
c) In the list view, automated alert groups are noted by an icon in the Group column.
• In the alerts console, after you open the Automated Grouped Alerts dialog box, you can provide
feedback about the accuracy and helpfulness of an automated alert group, and add or remove alerts
from the group.
• In the Event Management dashboard, view impacted services (if there are any) and root cause CI if
applicable, by double-clicking a service on the map, and then clicking Root Cause CI.
The ongoing operations of an application can generate a large number of events and alerts which
can become overwhelming when problems arise. If the system is experiencing a problem, the manual
process of assessing the impact of the alerts and identifying the underlying cause might require extensive
resources and might be lengthy.
To identify underlying problems, RCA algorithms prioritize alerts, group them in the context of impacted
services, and identify root cause CIs. The root cause CI is a CI from which the root alert for an incident
originated and which subsequently triggered other alerts to be generated.
RCA has these components:
RCA Learner: An offline job that runs once a day to process past alerts. It collects information about
frequent alert patterns and stores it in the alert knowledge base.
For discovered business services and for manual services, the Learner collects and analyzes data from
past alerts. For new alerts, the Learner applies existing knowledge from similar past alerts, and continues
to capture and analyze data from new alerts. As more alerts are encountered and resolved, the alert
knowledge base grows and the precision of diagnosing the root cause CI improves.
Business services are discovered by Service Mapping and represented internally in the system by a
service model. The service model of the business service is used for identifying CIs related to the root
cause CI.
When the root cause CI is known, operators can create a single incident ticket and engage only the
needed IT operator to expedite remediation. The IT operator can direct troubleshooting efforts to remove
the root cause problem, and stop the recurrence of undesirable events.
By default, RCA is not applied. You can enable RCA and modify other RCA-related behavior by changing
the settings of the sa_analytics.aggregation.include_service and sa_analytics.rca_enabled properties for
Service Analytics.
RCA configurations
RCA uses an RCA configuration that filters and scopes the alerts to be analyzed. The base system
includes pre-defined RCA configurations, but those might not be optimal in every environment. See RCA
configurations on page 1023 for more information.
Confidence score
To help you decide how to invest troubleshooting efforts, RCA algorithms calculate a confidence score
for the identified root cause CI. The confidence score is based on the Learner data and expresses the
confidence in the identification. For example, a confidence score of 75% means that there is a certainty of
75% in the identification of the root cause CI. If more than one cause is possible, you can investigate the
most likely root cause before investigating less likely root causes.
By default, RCA groups with any confidence score are displayed. To limit which groups are displayed,
change the sa_analytics.rca.query_probability_threshold property to a percentage that the RCA group
confidence score must meet to be displayed. If a root cause CI has a confidence score that is lower than
the specified percentage, Service Analytics does not treat that CI as the root cause.
You have these options for viewing the root cause CI, if applicable.
UI access Description

Event Management dashboard - root cause CI: Displays root cause CIs highlighted in a business service
map, and the relationships between the root cause CI, alerts, and related CIs in business services.
From a correlated alert group: Displays all automated alert groups and lets you drill down to view details
about the alerts in the group, and the root cause CI if it exists. Double-click a group, and then click the
Impacted Services tab to display the services and root cause CIs, if applicable.
RCA configurations
Service Analytics uses an RCA configuration to determine which alerts to include in its root cause analysis.
The RCA configuration is used in learning the conditional probability of how a particular state of CI impacts
other CIs. Multiple RCA configurations can be defined, but only a single configuration is in effect at any
point of time for a given domain.
There are two types of RCA configurations, and the base system includes a pre-defined RCA configuration
for each type:
• Rule-based model: Considers the severity of the alert, and maps any severity level (Critical, Major,
Minor, Warning) into a state. For example, a binary-state configuration maps into two possible states:
• 0 – No alert for the CI
• 1 – Alerts exist for the CI
Each configuration consists of one or more rules that define the set of alerts to be included in the RCA,
such as all alerts with the severity of Critical. You can define custom RCA configurations using the rule-
based model, adding rules that map alerts to various states according to CI and alert attributes.
The base system includes the predefined RCA configuration Default Binary Model Config, which is the
default RCA configuration.
• Multi-state model: Uses the combination of the Resource and the Metric Name alert columns to learn
the model, and is not associated with any rules. The multi-state model combines the Metric Name and
Resource alert columns into a string, and then aggregates such strings from multiple alerts into a single
state string. The number of resulting states is determined by the number of unique state strings for a
particular CI (see the sketch after this list).
The base system includes the predefined RCA configuration Default Multi State Config, which you can
use in a comparison test of RCA configurations. There are no variations for this default configuration,
and therefore you cannot create a custom RCA configuration that is based on the multi-state model.
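As an illustration of how multi-state states might be derived, the following minimal Java sketch (an
assumption based on the description above, not the product's actual implementation) combines the Metric
Name and Resource columns of each alert into a string, aggregates the sorted strings per CI into a single
state string, and counts the unique state strings:

import java.util.*;
import java.util.stream.Collectors;

public class MultiStateSketch {
    // A hypothetical, simplified alert: CI, metric name, and resource.
    record Alert(String ci, String metricName, String resource) {}

    public static void main(String[] args) {
        List<Alert> alerts = List.of(
            new Alert("db01", "CPU Utilization", "cpu0"),
            new Alert("db01", "Disk C: % Free Space", "C:\\"),
            new Alert("web01", "CPU Utilization", "cpu0"));

        // Combine Metric Name and Resource into one string per alert, then
        // aggregate the sorted strings for each CI into a single state string.
        Map<String, String> stateByCi = alerts.stream()
            .collect(Collectors.groupingBy(Alert::ci,
                Collectors.mapping(a -> a.metricName() + "|" + a.resource(),
                    Collectors.collectingAndThen(Collectors.toCollection(TreeSet::new),
                        set -> String.join(",", set)))));

        // The number of states is the number of unique state strings.
        long states = stateByCi.values().stream().distinct().count();
        System.out.println(stateByCi + " -> " + states + " unique state strings");
    }
}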
There is no single RCA configuration that is optimal in every environment. Therefore, in addition to the
default configurations, you can create custom rule-based RCA configurations. A custom RCA configuration
can use additional CI and alert attributes, such as a location attribute to scope the analysis to a specific
data center. You can add rules that define how to map alerts for different CIs to various states.
You can compare the RCA results of any two RCA configurations on actual business data, allowing you
to select the optimal configuration for your environment. A comparison simulates analysis on historical
data, and the RCA Configs Comparison report displays the results of the comparison.
Create an RCA configuration
Service Analytics uses an RCA configuration to determine which set of alerts from the sa_analytics_alert
table to use for RCA. You can create a custom rule-based RCA configuration, compare it with another RCA
configuration and decide how efficient and helpful it is. You can then choose the RCA configuration that is
most suitable in your environment, and configure RCA to use that configuration in its analysis.
5. Select the check box for the new rule and click Alert Coverage to calculate the rule's % Alert
Coverage.
• Compare the new RCA configuration with another configuration, and decide which configuration is most
helpful and should be deployed in the next RCA Learner cycle.
• After modifying a configuration, or when there is a significant number of new alerts, select any rules and
click Alert Coverage. This recalculates the % Alert Coverage for the selected rules.
6. Navigate to Service Analytics > Reports > RCA Configs Comparison, and select a comparison to
review. Ensure that the status of the comparison is Completed.
7. Decide which configuration you want to deploy, and click the configuration link in the comparison that
you are reviewing.
8. On the configuration form, click Deploy.
4. Double-click a root cause CI to display the alerts within this alert group.
5. Select any alert to highlight the CIs that it is associated with, and view additional details about the
alert.
Reports
Service Analytics generates several reports, to help you assess the efficiency and depth of analysis.
You can access Service Analytics reports by navigating to Service Analytics > Reports.
The Comparison Report displays the RCA results that each configuration would have produced during the
past 7 days. Once you start a comparison, wait for the report to show that the status is Completed.
To display the report, navigate to Service Analytics > Reports > RCA Configs Comparison, which
requires the evt_mgmt_admin role. The report has three sections; each section provides further details for
the item selected in the previous section.
This section lists summaries of recent comparisons, with details for each comparison.
Actions:
• Select a comparison for which to display further details about RCA results for Config (A) alongside
Config (B).
• Deploy an RCA configuration:
1. Click the link for the RCA configuration that you want to deploy in the Config (A) or Config (B)
column.
2. On the configuration form, click Deploy.
RCA results for Config (A) and Config (B) per comparison
This section displays a comparison of root cause analysis for Config (A) and Config (B), for the
comparison selected in the previous section.
Actions:
• Select an automated alert group to display further details about all the alerts within that group.
• Score results:
1. Select the group and the configuration that you want to score, in either Score (A) or Score (B)
column.
2. Double-click the score value, enter a numeric score, and then click the green check-mark. The
new group score value is aggregated into the summary score for Score (A) or Score (B) in the
comparison summary row for this comparison.
• Toggle between different display options for the automated alert groups:
• Same: Displays only the automated alert groups for which RCA results for Config (A) and Config (B)
are identical.
• Different: Displays only the automated alert groups for which RCA results for Config (A) and Config
(B) are different.
• All: Displays all automated alert groups for which there are RCA results for either Config (A) or
Config (B).
This section displays details of all related alerts within the automated alert group selected in the previous
section, for both Config (A) and Config (B).
Value Report
The Service Analytics Value Report consists of three sections, displaying metrics about alert coverage
in data analysis and its efficiency in terms of alert group correlation. You can use that data to determine
which incidents need to be created.
Actions:
• Select a preset time period for the report.
• Drag the time slider to select the period of time for the report. Or, use the right or left handle of the time
slider to extend or shorten the time period for the report.
• Hover over a report to display more data about a specific point of time in the report.
Alert Coverage
Displays the total number and percentage of alerts that are grouped into automated alert groups. Based
on that, you can evaluate the number and percentage of alerts that were left out and not included in
any automated alert group. Higher percentage rates indicate a more efficient analysis, in which a higher
percentage of alerts are found to correlate with each other, likely because they were generated by the
same problem.
The formula for this report is: Grouped Alerts / Total Number of Alerts
Compression
Displays the effectiveness of grouping alerts into automated alert groups, calculating the number of groups
formed in relation to the number of alerts that were not grouped. Higher compression rates indicate
an efficient correlation of alerts, and a more accurate mapping of automated alert groups to incidents.
A higher compression rate indicates a smaller total number of automated alert groups, and fewer incidents
to be created.
The formula for the report is: 1 – ((No. of Groups + No. of Ungrouped Alerts) / Total No. of Alerts)
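As a worked example of the two formulas above, the following sketch computes both metrics for a
hypothetical data set of 100 alerts, 70 of which were grouped into 10 automated alert groups:

public class ValueReportFormulas {
    public static void main(String[] args) {
        // Hypothetical figures: 100 alerts total, 70 grouped into 10 groups.
        double totalAlerts = 100;
        double groupedAlerts = 70;
        double groups = 10;
        double ungroupedAlerts = totalAlerts - groupedAlerts; // 30

        // Alert Coverage = Grouped Alerts / Total Number of Alerts
        double coverage = groupedAlerts / totalAlerts; // 0.70 -> 70%

        // Compression = 1 - ((No. of Groups + No. of Ungrouped Alerts) / Total No. of Alerts)
        double compression = 1 - ((groups + ungroupedAlerts) / totalAlerts); // 1 - 40/100 = 0.60

        System.out.printf("Coverage: %.0f%%, Compression: %.0f%%%n",
            coverage * 100, compression * 100);
    }
}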
Feedback
Displays a summary of feedback that was provided for automated alert groups. The report displays the
total number of groups for which positive and negative feedback was provided, and the number of groups
for which no feedback was provided.
Learned Patterns
The Learned Patterns report displays alert patterns that were learned based on the configured pattern
identifier attributes (MetricName by default). You can sort the learned patterns by pattern score, pattern
frequency, and pattern size. This report helps you identify alert patterns that occur with higher frequency.
For each alert, alert aggregation combines the associated pattern identifier attributes that represent the
alert. It then groups all the pattern identifier attributes for alerts that occur together, to create a learned
pattern. The same combination of pattern identifier attributes can exist in multiple learned patterns.
Typically, a learned pattern indicates a single problem. The report displays metrics in a tile format, broken
down by learned patterns.
To display the report, navigate to Service Analytics > Reports > Learned Patterns.
You can use this report to:
• Allocate resources to the alert patterns that occur with higher frequency, to reduce the time for closing
a large number of alerts.
• Allocate resources to larger learned patterns of lower frequency.
Actions:
• Select a tile to display further details about all the pattern identifier attributes that are associated with
the selected tile.
• Hover over the pattern number to see the pattern size, pattern frequency, and pattern score details for
the pattern.
• Sort the report by:
• Pattern Size: The number of combined pattern identifier attributes occurrences in a learned pattern.
• Pattern Frequency: The number of times that a learned pattern occurs in the data set.
• Pattern Score: Combined calculation of frequency and size, using the formula:
Frequency * (Size + 1)
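For instance, under this formula a pattern that occurs 5 times (frequency) and combines 3 pattern
identifier attribute occurrences (size) scores 5 * (3 + 1) = 20. A minimal sketch of the calculation:

public class PatternScore {
    // Pattern Score = Frequency * (Size + 1), per the formula above.
    static int patternScore(int frequency, int size) {
        return frequency * (size + 1);
    }

    public static void main(String[] args) {
        // A pattern seen 5 times that combines 3 attribute occurrences scores 20.
        System.out.println(patternScore(5, 3)); // 20
    }
}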
Operational Metrics
Operational Metrics learns from historical metric data, and builds standard statistical models to project
expected metric values along with upper and lower control bounds. Operational Metrics then uses these
projections to detect statistical outliers and to calculate anomaly scores. Operational Metrics is available
when you activate the ITOM Metric Management (com.snc.sa.metric) plugin.
Metric data is collected by various data sources. For example, SCOM is a monitoring system that collects
metric data from the source environment regularly. Operational Metrics captures the raw data from these
monitoring systems, and uses event rules and the CMDB identification engine to map this data to existing
CIs. Operational Metrics then analyzes the data to detect anomalies and to provide other statistical scores.
By default, Operational Metrics is already partially configured for using the SCOM data source.
After processing, metrics statistics and charts are displayed in the Metric Explorer, and the Anomaly Map
displays correlated scores for CIs with the highest anomaly scores, across a timeline.
Operational Metrics uses a dedicated MID Server to process the raw data, to build a statistical model,
and to detect anomalies. The MID Server is configured with extensions that enable it to independently
transmit metric data to the instance. The MID Server transmits the statistical model and anomalies above a
specified score (4 by default) to the instance where it is accessible to users.
Metrics to CI mapping
The data that is collected on the MID Server is raw and does not relate to any specific CI in the CMDB. To
be useful, the data goes through a normalization process that uses CMDB identification rules and event
rules to uniquely identify CIs, and to map them to the raw data.
Records for mapping of raw data to CIs are automatically generated and remain in effect for a specified
length of time determined by the properties:
• sa.metric.map.with.ci.expiration.sec: If the mapping to the CI was found. Set by default to be valid for
five days.
• sa.metric.map.without.ci.expiration.sec: If mapping to the CI was not found. Set by default to be valid for
24 hours.
When similar metric data arrives within that time period, the existing mapping is used to match the data to
CIs. At the end of the time period, metric-to-CI records expire. Also, a change in the event rules triggers an
immediate expiration of the respective metric-to-CI records. Next time that raw metric data arrives, it will be
normalized again. When Discovery adds or removes CIs, mappings are adjusted to reflect these changes
at the next cycle.
2. Set up a MID Server for Operational Metrics. If the default SCOM connector is used, then the same
MID Server that is used to collect events should be used for Operational Metrics.
3. Configure the Operational Metric extension on page 1036.
4. To configure for pulling:
a) Create a connector definition on page 867 (for a SCOM connector, this is already defined for
pulling events from SCOM).
b) Configure a connector instance on page 863 (for a SCOM connector, see Configure the
SCOM connector instance on page 884).
Table 442: Hardware requirements (scaled for 2,500 CIs, with 20 metrics per CI)

Memory:
• Minimum: 4 GB
• Recommended: 8 GB

Processor:
• Recommended: Quad-core

Windows (32-bit and 64-bit versions; requires the 32-bit version of the MID Server):
• Windows 2008 R2
• Windows Server 2012 R2

Linux (supported only for pushing; 32-bit or 64-bit version of the MID Server):
• Red Hat Enterprise Linux 6.6 or later
• CentOS Linux 6.6 or later

PowerShell (required only with the SCOM connector): Version 3.0 or later. The PowerShell execution
policy must be Unrestricted or RemoteSigned.
• Use Get-Host to check the PowerShell version.
• Use Get-ExecutionPolicy to see the security level.
• Use Set-ExecutionPolicy Unrestricted, or Set-ExecutionPolicy RemoteSigned, to set the policy.

Configuration Requirement

MID Server service logon user (required only for pulling): Must be set to a user with read access to the
SCOM database (OperationsManagerDW).
1. On the MID Server, open the MID Server Properties dialog box.
2. Click the Log On tab.
3. Select This account, and enter credentials for a user with the required access.
4. Click Apply.
If the MID Server needs to support ServiceAnalytics as well as one or more other applications, you
can modify the definition of the ALL option to include these applications, and then select ALL in the
slushbucket. ALL is the only option to which it is valid to add the ServiceAnalytics option. Performance
might be compromised with these settings.
5. Add the ITOA Metrics capability:
a) At the center of the MID Server form, click Capabilities.
b) In the Capabilities section click Edit.
c) In the slushbucket select ITOA Metrics and click the '>' add button.
d) Click Save.
6. Click Update.
Create a failover cluster for Operational Metrics. In this cluster, add the MID Servers that were configured
for Operational Metrics. For more information, see MID Server cluster configuration.
The MID Server Operational Metric extension does not provide any API calls. However, when the Enable
REST Listener option is selected, the extension adds a handler for the supported REST APIs.
Note:
• After the initial configuration, the first metric is not included in the metrics data.
• There is a delay of one minute in receiving metric information from the synchronization of the
instance with the MID Server.
4. When using the Push method for collecting Operational Metrics, configure the MID Server Operational
Metric extension with the Enable REST Listener option enabled. This option enables a listener
so that a REST endpoint can receive raw metric data. The raw metric data is then placed in the
regular data flow, where the data is sent to the instance and the anomaly detector looks for anomalies.
When this option is selected, a handler is added to the web server to listen for any metrics that are
pushed to the MID Server, and the Web Server extension, which starts a web server on the MID
Server, must also be configured. For more information, see Configure the MID Web Server
extension on page 1039.
5. Right-click the form heading and select Save.
6. Under Related Links click Start to save the Operational Metrics data in this extension and start the
extension.
Table 445: Commands available in the MID Server Operational Metric extension
Basic authentication:
• The user must provide a username and password. The same username and password must be
provided for every request.
• On the instance, the password is stored encrypted, and it is also sent to the MID Server encrypted.
• In the MID Server, the password is saved in memory.
• When a request is received, the password is decrypted and matched with the password provided in
the request.
4. Select Secure Connection to provide extra protection, if required. When selected, enter the values for
these fields:
a) In the Keystore Certificate Alias field, enter the name of the keystore certificate.
b) In the Keystore Password field, enter the keystore password.
Date: For example, 2016-06-08T20:54:58.917Z
Content-Type: application/json
Use the following request elements to generate the required string: HTTP-Verb, Content-
Type, Date, and request path. Specify these elements and place them in the following
order:
• HTTP-Verb + "\n" +
• Content-Type + "\n" +
• Date + "\n" +
• Request-Path
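For example, assuming a POST of JSON content to the metrics endpoint at the date shown above
(illustrative values, not taken from a real request), the resulting string to sign is the four elements joined
by newline characters:

POST
application/json
2016-06-08T20:54:58.917Z
/api/mid/sa/metrics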
package sample;

import java.security.SignatureException;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSigner {

    private static final String HMAC_SHA1_ALGORITHM = "HmacSHA1";

    /***
     * Generates the base64-encoded HMAC (Hash Message Authentication Code)
     * of the input data.
     *
     * @param data the string to sign
     * @param key the shared secret key
     * @return the base64-encoded HMAC-SHA1 signature
     * @throws java.security.SignatureException
     */
    public static String signData(String data, String key) throws SignatureException {
        String result;
        try {
            // Get an HMAC-SHA1 key from the raw key bytes
            SecretKeySpec signingKey = new SecretKeySpec(key.getBytes(), HMAC_SHA1_ALGORITHM);

            // Get a Mac instance for the algorithm and initialize it with the key
            Mac mac = Mac.getInstance(HMAC_SHA1_ALGORITHM);
            mac.init(signingKey);

            // Compute the raw HMAC of the data and base64-encode it
            byte[] rawHmac = mac.doFinal(data.getBytes());
            result = Base64.getEncoder().encodeToString(rawHmac);
        } catch (Exception e) {
            throw new SignatureException("Failed to generate HMAC : " + e.getMessage());
        }
        return result;
    }
}
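For illustration, a usage sketch (HmacSigner is the class name used in the reconstruction above; the
shared key value is a placeholder, and the exact signing input depends on your MID Web Server
configuration):

String stringToSign = "POST" + "\n" + "application/json" + "\n"
        + "2016-06-08T20:54:58.917Z" + "\n" + "/api/mid/sa/metrics";
String signature = HmacSigner.signData(stringToSign, "my-shared-secret"); // hypothetical shared key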
If you have the RSA private key in the Java Keystore and generated the certificate from that key:
keytool -import -alias keyname -file server.cert -storetype JCEKS -keystore webserver_keystore.jceks -storepass pwd

If you have a PKCS12 file that contains the RSA key and the certificate:
keytool -importkeystore -destkeystore webserver_keystore.jceks -deststoretype jceks -srckeystore <PKCS12 filename> -srcstoretype pkcs12

Ensure that the private key password is the same as the Java Keystore password.
To change the password, use this command:
For testing, you can use this command to generate a self-signed certificate.
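Typical keytool commands for these two tasks are similar to the following sketches (the alias, keystore
name, and password values are assumptions; adjust them to your environment):

To change the private key password so that it matches the keystore password:
keytool -keypasswd -alias keyname -keystore webserver_keystore.jceks -storetype jceks

To generate a self-signed certificate for testing:
keytool -genkeypair -alias keyname -keyalg RSA -keystore webserver_keystore.jceks -storetype jceks -storepass pwd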
The user must provide the certificate alias and password when configuring the MID Web Server extension.
For more information, see Configure the MID Web Server extension on page 1039.
Table Description

Metric Time Series Model [sa_time_series]: Statistical models built for metric data.
Metric Type Registration Deleted SysIds [sa_metric_registration_deleted]: SysIds of records that were
deleted from the SA Metric Type Registration [sa_metric_registration] table.
Metric Schema Definition [sa_metric_schema_definition]: Map of metrics being received, based on CI
class. It is used to optimize the metric data payload being sent from the MID Server to the instance.
Metric Type [sa_metric_type]: Metric type source.
Monitoring System Metric Type Deleted SysIds [sa_source_metric_type_deleted]: SysIds of deleted
monitoring system metrics.
Monitoring System Metric Type [sa_source_metric_type]: Metric types per CI class, active/inactive status,
and metric source.
Name Description

Operational Metrics - Metric Learner job: Runs daily and builds the statistical models used for anomaly
detection.
Operational Metrics - Purge old metric schemas and anomaly scores: Cleans up the
sa_metric_anomaly_score and the sa_metric_schema_definition tables.
Operational Metrics - Purge RRD Definitions for deleted CIs: Cleans up metric data to reflect CIs that
were deleted.
Operational Metrics - Sync metric schema to mid: Synchronizes the metric schema table
sa_metric_schema_definition.
Operational Metrics - Sync tables with mid: Synchronizes tables.
Operational Metrics - Table cleanup: Cleans up tables related to various deletion operations.
Adjusting mappings to reflect the addition or removal of CIs by Discovery takes effect only at the
beginning of the next cycle.
Operational Metrics supports up to 200,000 active metrics.
Note: The Active setting (true or false) for a source metric in the Monitoring System Metric Types
[sa_source_metric_type] table takes precedence over the setting for the corresponding metric in
the Metric To CI Mappings [sa_metric_map] table. If a source metric type in the Monitoring System
Metric Types [sa_source_metric_type] table is disabled, all records related to the corresponding
metrics are removed from the Metric To CI Mappings [sa_metric_map] table.
through 9 are severity 2 (Major), and are displayed in dark orange in the Metric Explorer and in the
Anomaly Map.
1. Navigate to Event Management > Settings > Anomaly Score to Severity Mapping.
2. On the Metric Anomaly Score to Event Severity Maps page, double-click the event severity that you
want to modify.
3. Modify the range of anomaly scores that map to the respective event severity.
4. Click Update.
4. Click the Chart Settings icon to toggle the display of statistics and aggregations on the chart.
Enabling or disabling an item to add or remove metrics from a chart also updates the chart's legend
to reflect the change.
5. Point to a legend item and click its 'X' icon to remove the corresponding metric from the chart, or
disable the corresponding series from the Chart Settings.
• Click Add Configuration Item to add to the Metrics Explorer any CI that is not displayed by default.
• Click the Remove icon on a CI to remove it from the Metrics Explorer.
• Zoom in or out by changing the Time Range selection. Select one of the preset time periods to display
anomaly scores for the last 6 Hours or the last 1 Day, for example, or specify a custom time period
going back up to 90 days.
Highlight a section of a chart by pointing to the upper left corner of the section and dragging the pointer
to the lower right corner of the section, to zoom in. This operation changes the time range
of the chart, and if there are other charts on the canvas, they are all automatically synchronized to
display data for the same time range. On the Time Range bar, the custom dates are also automatically
updated with the new time range.
• Add a filter to display only a specific set of CIs out of the CIs that are displayed.
• Click the Export icon to download a chart as a .png or .svg image, or as a .pdf document.
• Click Add Configuration Item to add to the map a CI that has anomalies and that isn't included in the
top 10 anomalies.
• Click the Remove icon on a CI to remove it from the map.
• Zoom in or out by changing the Time Range. Select one of the preset time periods, for example, to
display data for the last 6 Hours or the last 3 Days, or specify a custom time period up to 90 days back.
Or, click the timestamp at a column heading to drill into the time period between the current column and
the column to its right.
• Add a filter to display only a specific set of CIs out of the CIs that are displayed.
The statistical model is used to calculate standard deviations, upper and lower bounds, and statistical
outliers, which are then used to detect anomalies. An anomaly occurs when metric values fall outside the
values projected by the statistical model. The system monitors the frequency and persistence of
statistical outliers across time to compute a score between 0 and 10 that indicates how abnormal a deviation is.
Operational Metrics constantly generates anomaly alerts whenever the anomaly score is above zero. If
a score is above 4 and has changed from the previous score, it is sent to the instance so that the entire
sequence of deviations over time can be displayed in the Metric Explorer.
1. Navigate to Event Management > Operational Metrics > Anomaly Alerts.
2. In the Alerts Anomalies list view, double-click on an alert that you want to view.
• You can create an event rule that examines the anomaly alerts, and generates a regular system alert
that is based on anomaly alerts. Create an event rule, and specify:
• Filter: Add filter conditions such as [CI identifier] [is] [CI SysId], or [Metric Name] [is] [metric]. To
select anomaly alerts, add the filter condition [Classification] [is] [Anomaly].
• On the Transform tab, add an entry to Event Compose Fields where Field is classification
and Composition is 0.
• Right-click an alert, and then click View Metrics to open the integrated Metrics Explorer and
Dependency Views map for the CI associated with the alert.
URL format
This request goes to the Operational Metrics MID Server, not the instance. The MID Server must have
the Operational Metric extension set up with Enable REST endpoint set to true. See Get started with
Operational Metrics for more information.
The URL of the request is of the form http[s]://mid1.servicenow.com/api/mid/sa/metrics,
where mid1.servicenow.com is the FQDN or IP address of the MID Server.
Versioned URL: api/mid/sa/metrics
Default URL: api/mid/sa/metrics
Request parameters
None.

Request headers
None.

Response headers
None.
Status codes
The following status codes apply to this HTTP action. For a list of possible status codes used in the REST
API, see REST response codes.
Request body
The API accepts these JSON or XML elements in the request body. For example:
[{
"metric_type": "Disk C: % Free Space",
"resource": "C:\\",
"node": "lnux100",
"value": 50,
"timestamp": 1473183012000,
"ci_identifier": {
"node": "lnux100"
},
"source": "Splunk"
}]
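As an illustration, a minimal Java sketch of pushing this payload to the MID Server REST listener,
assuming basic authentication and the example host mid1.servicenow.com (the username, password,
and URL are placeholders, not actual values):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PushMetric {
    public static void main(String[] args) throws Exception {
        // The sample payload from the documentation, as a JSON string.
        String json = "[{\"metric_type\":\"Disk C: % Free Space\",\"resource\":\"C:\\\\\","
                + "\"node\":\"lnux100\",\"value\":50,\"timestamp\":1473183012000,"
                + "\"ci_identifier\":{\"node\":\"lnux100\"},\"source\":\"Splunk\"}]";

        // Basic authentication header (username/password are placeholders).
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://mid1.servicenow.com/api/mid/sa/metrics"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}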
Cloud Management
With the ServiceNow® Cloud Management application, you can use a single ServiceNow interface to
define, administer, and measure workflows for provisioning cloud resources. You can also manage
the life cycles of those resources. Cloud Management is integrated with both private and public cloud
management providers, including Amazon Web Services, Microsoft Azure, and VMware offerings.
Cloud Management is available as a separate subscription from the rest of the ServiceNow platform. You
must also request activation from ServiceNow personnel.
It also requires Orchestration, which is available as a separate subscription from the rest of the platform.
VM users request and manage their cloud assets using the familiar ServiceNow service catalog interface.
Cloud Management provides process and service automation with orchestration, approvals, and service
catalog capabilities. The system can package and deliver infrastructure elements, such as servers,
networks, and storage to end users through the service catalog. The system auto-provisions the cloud
resources that a user requests and manages using a self-service portal.
The Cloud Management application offers the following capabilities:
• Abstraction of virtualization systems: Because the requesters of virtual resources are not required to
know the details of the specific virtualization system, the system uses a single interface to manage
virtual resources in public and private clouds.
• Reuse of virtual machine configurations: The system uses the templates provided by the third-party
vendors (for example, Azure templates, VMware templates, and Amazon images) to create reusable
catalog items, typically with a wide range of capability sets. Using the service catalog, the user selects
from the resulting resource templates.
• Service catalog interface: Requesting the right virtual resource for the job from the catalog is quick and
easy for the user.
• Role-based access: Role-based security ensures that users have the proper privileges for viewing,
creating, and managing virtual resources.
• Dedicated service portals: ServiceNow users view their virtual resources and request changes on a
dedicated portal. Administrative and operational users manage virtual machines, provisioning tasks,
and SLAs from portals that grant role-based access to virtual resources.
• Controlled lease duration: The system applies default end dates to all virtual machine leases. Lease
duration controls prevent unused virtual machines from persisting past their intended use date.
• Fully integrated with the ServiceNow platform: Approvals, notifications, security, asset management,
and compliance capabilities are all integrated into virtual resource management processes.
Note: The Cloud Management application was called Cloud Provisioning in earlier versions.
• Approvals: You can make any request subject to approvals, allowing you to develop complex and
business-critical processes.
• Discovery of VMs: Use the Discovery application or the standalone capability to discover virtual
resources and their relationships in their environments (for example, relationships between VMware
components in the vCenter instance).
• Capacity: Because the Discovery process returns basic capacity information for VMs, a cloud operator
can determine the best virtualization server for a VM.
• Schedule Labs: Schedule a lab framework to manage multiple groups of virtual resources for a common
purpose, such as training. Schedule lab termination date and time to shut down virtual resources as
soon as they complete their function.
• Modify VMs: Request modifications to existing VM images, for example increased memory. Workflows
can require approvals for each modification or can auto-create a change request.
• Notifications: The system delivers notifications during the virtual resource life cycle to provide
information and set expectations for system users.
• Prices: Price calculations and an integration of managed virtual machines with asset management
provide a cost-based component to Cloud Management.
• Information about requests: The system collects notes and guest customization information and
attaches the data to the provisioning request.
• Automated provisioning: A cloud administrator can configure the system to apply automatic or manual
provisioning to virtual resource requests:
• Fully automatic: Rules make all configuration decisions and processing goes directly from the
request to provisioning.
• Manual: Rules make all configuration decisions but a cloud operator can modify and approve the
request before continuing.
• Schedules: The system specifies a lease duration when creating a virtual resource. The schedule
includes start and end times, a grace period, and automatic stop/terminate actions. The system notifies
the user of lease status.
• SLAs: The system tracks SLAs and OLAs for Cloud Management requests.
• Workflows: Each action employs customizable workflows that allow business processes to include
Cloud Management as a step.
• Zero-click provisioning: Fully automated provisioning enables IT departments to respond quickly to
customer requests.
• Security: All workflows, business rules, and scripts in the base system are locked. The system restricts
cloud management actions to users with particular roles.
VMware: Configure VMware credentials on page 1100

Discovery
Discover existing resources.
• AWS: Discover and view AWS resources
• Azure: Configure and run Discovery for your Azure subscription
• VMware: Discover vCenter resources

Catalog creation
Use approved images to create catalog items and customize any variables on these items.
• Azure: Approve images to use as the basis for Azure catalog items
• AWS: Set up an AWS catalog offering on page 1113
• Azure: Set up an Azure catalog offering on page 1133
• VMware: Create a catalog item for VMware on page 1151

Provisioning
Configure provisioning rules: Define provisioning rules on page 1182

Templates
Configure any templates for stacks or resource groups.
• AWS stacks: Create a CloudFormation template on page 1123

Billing
Set up billing download schedules: Define the schedule for downloading billing data

Reporting
View virtual assets and billing reports: Viewing and managing your virtual assets; Cloud Management reports
Cloud users can request virtual resources from the service catalog and can perform the following actions
only on VMs assigned to them.
• Modify VM (Azure and VMware only)
• Update Lease End
• Pause VM
• Stop VM
• Start VM
• Cancel
• Terminate
• Take Snapshot
• Restore from Snapshot
• Delete Snapshot
Cloud Approvers can approve or reject requests for virtual resources. Approvers have no technical
responsibilities.
Cloud operators use the Cloud Operations portal to perform the day-to-day work of cloud provisioning
and management. Cloud operators are typically assigned to particular virtualization providers and must
be technically adept with the products they support. Users with the cloud_operator role can perform the
following actions on any virtual machine:
• Modify VM (Azure and VMware only)
• Update Lease End
• Pause VM
• Stop VM
• Start VM
• Cancel
• Terminate
• Take Snapshot
• Restore from Snapshot
• Delete Snapshot
Cloud Administrators own the Cloud Management environment and are responsible for configuring the
virtualization products that are supported by the Cloud Management application on your instance. Admins
receive both the cloud_admin and the cloud_user roles. Admins can perform the following actions only on
VMs assigned to them:
• Modify VM (Azure and VMware only)
• Update Lease End
• Pause VM
• Stop VM
• Start VM
• Cancel
• Terminate
• Take Snapshot
• Restore from Snapshot
• Delete Snapshot
In addition, cloud administrators can:
• Define vCenters
• Define catalog offerings
• Set pricing for the offerings
• Define provisioning rules
• Define change control parameters for a virtual machine
• Approve change requests associated with virtual machine modifications
• Set properties applicable to cloud management
• Set up networking information for VMware guest customization
• Monitor requests and key metrics related to requests surrounding virtual machines
MID Servers
The discovery of Amazon Web Services cloud is based on account information rather than IP addresses.
MID Servers are not used in this type of discovery. To perform host-based discovery of the virtual hosts
contained within an AWS Virtual Private Cloud (VPC), you need a MID Server.
To set up and configure a MID Server, see:
• MID Server installation
• MID Server configuration
Tables
Table Description
ec2_account: Contains account details for the Amazon EC2 account being used for provisioning or
terminating instances.
ec2_approved_images: Contains images approved by a cloud_administrator for provisioning user
requests.
ec2_image: Contains a list of all EC2 images from all regions in which region settings are defined.
ec2_keypairs: Contains the keypairs for all the regions for the account.
ec2_region_settings: Contains data associated with an account. This table contains the region and the
keypair to use when provisioning instances in a region.
ec2_region: Contains the current SOAP endpoints to Amazon's five datacenters (regions).
sc_ec2_os_selection: Contains the OS types that appear in the service catalog, as defined by the
cloud_administrator.
sc_ec2_type_selection: Contains the types that appear in the service catalog. The base system includes
Small and Large, which correspond to Amazon's m1.small and m1.large classifications, respectively.
ec2_shared_image_account: Contains the names and numbers of the accounts that provide shared EC2
images.
User roles
Modules in the Amazon AWS Cloud application are accessible to users as described in User roles installed
with Amazon AWS Cloud on page 1065.
User groups
Group Description
Virtual Provisioning Cloud Administrators Members of this group own the Cloud
Management environment and are responsible
for configuring the Amazon EC2 environment.
Virtual Provisioning Cloud Operators Members of this group fulfill provisioning
requests from users. This group also contains
the EC2 Operators group.
EC2 Operators Members of this group are responsible for the
technical operation of the Amazon EC2 virtual
provisioning environment.
EC2 Approvers Members of this group approve requests for
Amazon VMs.
Virtual Provisioning Cloud Users Members of this group can request virtual
machines from the service catalog and request
changes to virtual machines assigned to them.
Tables
Name Description

AWS Tags [aws_tag]: Defines the tags used for tagging Amazon resources.
AWS Tag Rules [aws_tag_rule]: Defines the rules for automatically attaching values to tags.
AWS CI Tag Value [aws_ci_tag_value]: Maps a CMDB CI to a tag name and tag value.
AWS Resource Tag History Mapping [aws_rsrc_tag_history_mapping]: Maps an aws_rsrc_tag_history
reference to a tag name and tag value.
AWS Billing Report Type [aws_billing_report_type]: Defines how to parse a downloaded Amazon
billing .csv file.
AWS Report Download Log [aws_report_download_log]: Represents a log item created during an AWS
billing download.
AWS Billing Rollup Aggregate [aws_billing_rollup_aggregate]: Defines which field to sum on the hourly
and daily billing tables.
AWS Billing Rollup Pivot Point [aws_billing_rollup_pivot]: Defines which field to roll up on the hourly and
daily billing tables.
AWS Billing Report Filter [aws_billing_report_filter]: Inserts all unique resource IDs that are used in
Amazon cost reports.
AWS Billing Hourly with Tags [aws_billing_hourly_with_tags]: Contains records corresponding to an
Amazon line item with hourly data and tag mapping relationship. Keeps data less than 30 days old.
AWS Billing Chart Definition [aws_billing_chart_definition]: Contains cost and usage report selections.
AWS Billing Rollup Daily [aws_billing_rollup_daily]: Created from the aws_billing_hourly_with_tags table,
for a 1-day duration. Keeps only data less than 180 days old.
AWS Billing Rollup Monthly [aws_billing_rollup_monthly]: Created from the aws_billing_hourly_with_tags
table, for a 1-month duration. Keeps only data less than one year old.
AWS Optimization Rule EBS [aws_opt_rule_ebs]: Contains rules for Amazon Elastic Block Store
resources. This table is a child of the opt_rule_attached_storage table.
AWS Optimization Rule EC2 [aws_opt_rule_ec2]: Contains rules for Amazon EC2 resources. This table is
a child of the opt_rule_attached_storage table.
AWS Optimization Rule ELB [aws_opt_rule_elb]: Contains rules for Amazon Elastic Load Balancing. This
table is a child of the opt_rule_attached_storage table.
CloudFormation Provisioning Rules [aws_cldfrm_provisioning_rules]: Contains provisioning rules for
CloudFormation stacks.
Name Description

AWS CloudFormation Stack Change Definition [aws_cldfrm_stack_change_definition]: Contains an
editable draft of stack changes. The records are created when modifying a stack.
AWS CloudFormation Stack Change Parameters [aws_cldfrm_stack_change_params]: Contains an
editable draft of stack parameters. The records are created when modifying a stack.
AWS CloudFormation Script Mapping [aws_cloudformation_script_mapping]: Contains a script mapping
table used to resolve scripts dynamically.
AWS Console Access Request [aws_console_access_request]: Contains either a URL or link to generate
tokens for Amazon console access.
CloudFormation Parameters Reference Table Exclusions List [cldfrm_param_ref_table_excl]: Contains a
list of tables that are not applicable for CloudFormation Parameters Reference Table lookup.
CloudFormation Variable Set for Template Parameters [cloudformation_template_to_variableset]:
Contains mapping for CloudFormation templates and variable sets.
AWS CloudFormation Stack [cmdb_ci_cloudformation_stack]: Contains a list of CloudFormation stacks
from all Amazon regions.
Name Description

AWS Security Token Catalog Item [sc_aws_security_token_cat_item]: Contains the catalog item for AWS
security tokens.
AWS Auto Scaling Group Launch Config [aws_asgrp_launch_cfg]: Contains a list of launch configurations
from all regions.
AWS Policy [aws_policy]: Contains a list of policies that can be applied to security tokens for federated
access.
AWS Security Token for User [aws_user_token]: Contains the mapping for a user and a security token.
AWS Auto Scaling Group [cmdb_ci_aws_asgrp]: Contains a list of autoscaling groups from all regions.
Amazon Web Services [amazon_web_service]: Contains algorithms, API version, and other metadata
needed to make API calls for different Amazon Web Services.
AWS Datasource Type Mapping [aws_datasource_type_mapping]: Contains mappings between
discovered attributes and a variable in a normalized object created on the fly.
AWS Region Endpoint [aws_region_endpoint]: Contains a list of endpoints for all AWS regions.
Name Description

AWS Account [aws_account_admin]: Contains account details for Amazon accounts, such as username
and ID.
AWS Availability Zone [aws_availability_zone]: Contains a list of availability zones from all regions.
AWS Credentials [aws_credentials]: Contains access credentials for AWS accounts, such as security
keys and access keys.
AWS Elastic Block Store Snapshot [aws_ebs_snapshot]: Contains a list of snapshots from all regions.
AWS Elastic Load Balancer Policy [aws_elb_policy]: Contains a list of load balancer policies from all
regions.
AWS Noun [aws_noun]: Contains nouns used to create probe actions, such as image and instance.
AWS Verb [aws_verb]: Contains verbs used to create probe actions, such as get and describe.
AWS Elastic Load Balancer Listener [cmdb_ci_aws_elb_listener]: Contains a list of ELB listeners from all
regions.
Name Description

EC2 Virtual Machine Instance [cmdb_ci_ec2_instance]: Contains a list of virtual machine instances from
all regions.
AWS VPC Security Group Rule Traffic Type [aws_vpc_sg_rule_traffic_type]: Contains a list of security
group rule traffic types (name, protocol number, from_port, and to_port).
EC2 Image Attribute [ec2_image_attribute]: Contains a list of attributes specifying properties of images
for spinning up VMs.
EC2 Image Criteria [ec2_image_critera]: Contains a list of filter names used to filter VM types and sizes.
EC2 Criteria Attribute [ec2_criteria_attribute_m2m]: Contains a list of mappings between criteria and
image attributes used for filtering VM types and sizes.
EC2 Image [ec2_image]: Contains a list of all EC2 images from all regions.
EC2 Keypairs [ec2_keypairs]: Contains the keypairs for all the regions for the account.
EC2 Provision Rules [ec2_provision_rules]: Contains rules to auto-select provisioning variables such as
account, VPC, and subnet.
EC2 Region Size Exclude [ec2_region_size_exclude_m2m]: Contains a list of mappings between regions
and the sizes that are not supported in that given region.
EC2 Shared Image Account [ec2_shared_image_account]: Contains the names and numbers of the
accounts that provide shared EC2 images.
EC2 Size Criteria [ec2_size_criteria_m2m]: Contains a list of mappings between image criteria and EC2
sizes used for filtering VM types and sizes.
EC2 Size [ec2_size]: Contains a list of supported EC2 sizes from all regions.
Catalog EC2 Element Price [sc_ec2_element_price]: Contains mappings between an EC2 element and
the catalog price.
EC2 Key Pair Catalog Item [sc_ec2_key_pair_cat_item]: Contains the catalog item for AWS keypairs.
EC2 Catalog Offering [sc_ec2_os_selection]: Contains the OS types that appear in the service catalog,
as defined by the cloud administrator.
Properties

Note: To prevent conflict with production data, load demo data only on a development or test
instance.
Three user groups and four users are created as demo data.
• The Infrastructure and Service groups are children of the Cloud Operation group.
• Users Adam Alba and Bob Brown are in the Infrastructure group.
• User Cindy Cyrus is in the Service group.
• User Dan Daniels is in the Cloud Operation group.
• All of the users have the cloud_admin role.
Note: To prevent conflict with production data, load demo data only on a development or test
instance.
VPC-PubPrivSubnet
VPC-PubSubnet
Note: To prevent conflict with production data, load demo data only on a development or test
instance.
The demo service catalog item names follow the pattern of "CloudFormation Stack – " plus the name of
the CloudFormation template. For example, one item name is CloudFormation Stack – AdwithPrivateSubnet.
All of the demo catalog items use a demo account named AWS Account and U.S. West (Oregon) Region.
In each catalog item, you can specify input parameters to provision stacks. For example, you can specify
which public and private subnets to use for the CloudFormation Stack – LAMPstack-VPC-PubPrivSubnet-
RDS catalog item.
To help end users validate input parameters before actual provisioning, the order form uses dynamic
input validation client scripts.
AWS demo data: CloudFormation stacks and tags
To illustrate the use of Amazon Web Services, the system provides demo data and samples of the data
types.
Note: To prevent conflict with production data, load demo data only on a development or test
instance.
Each demo stack has been assigned five standard tag values, as listed in the table.
Each stack comes from a completed-stage catalog item request, which also appears as demo data in
the Amazon Assets portal page. To support some of the tag values, the following extra demo records have
been created in the corresponding tag tables:
• For the Application tag: Active Directory, Community, Retail Portal, Shopping Cart and PeopleSoft
• For the Cost Center tag: Retail
• For the Project tag: Athens and Berlin
Count | Stack Name | Service | Application | Cost Center | Project | User Name
1 | ADwithPrivateSubnet | IT Services | Active Directory | IT | Athens | Adam Alba
2 | LAMPstack-VPC-PubSubnet | IT Services | Community | IT | Berlin | Adam Alba
3 | LAMPstack-VPC-PubPrivSubnet-RDS | Retail | SAP WEB01 | Retail | Athens | Adam Alba
4 | VPC-PubPrivSubnet | Retail | SAP WEB01 | Retail | Athens | Adam Alba
Note: To prevent conflict with production data, load demo data only on a development or test
instance.
Each demo stack contains multiple CI resources. In the My Amazon Assets portal page, you can see all
available CI resources, including VPC, subnet, VM, ELB, ELB listener, and Auto Scaling Group, as well as
CloudFormation stacks that belong to the login user. The table shows the relationship between stacks and
the different types of CI resources in the demo data.
ADwithPrivateSubnet — VPC: vpc-3e47875b; Subnet: subnet-248f5041; VM: ip-172-31-46-96;
Volume: vol-a55a24a4
LAMPstack-VPC-PubSubnet — VPC: vpc-eb46868e; Subnet: subnet-729e6305; VM: ip-10-0-0-7;
Volume: vol-88057b8d
LAMPstack-VPC-PubPrivSubnet-RDS — VPC: vpc-3e47875b; Subnets: subnet-248f5041,
subnet-565c4110, subnet-9d9e63eas; VMs: ip-10-0-2-200, ip-10-0-1-14; ELB:
LAMPstack-VPC-PubPrivSubnet-RDS-ElasticL1VPC10XDXFCD2; ELB listener:
LAMPstack-VPC-PubPrivSubnet-RDS-ElasticL1VPC10XDXFCD2:HTTP:80; Volumes: vol-ff007efa,
vol-0f552b0e; Auto Scaling Group: LAMPstack-VPC-PubPrivSubnet-RDS-WebServerGroup-INVN3ZCQ2TUW2
LAMPstack-VPC-PubPrivSubnet-RDS — VPC: vpc-3e47875b; Subnets: subnet-248f5041,
subnet-565c4110, subnet-9d9e63eas; VM: ip-172-31-46-96; Volume: vol-a55a24a4
VPC-PubPrivSubnet — VPC: vpc-3e47875b; Subnets: subnet-248f5041, subnet-565c4110,
subnet-9d9e63eas
VPC-PubSubnet — VPC: vpc-eb46868e; Subnet: subnet-729e6305
LAMPstack-VPC-PubPrivSubnet-RDS-RetailPortal — VPC: vpc-3e47875b; Subnets: subnet-248f5041,
subnet-565c4110, subnet-9d9e63eas; VMs: ip-10-0-2-21, ip-10-0-1-248; ELB:
LAMPstack-Elastic-LK5MUGT7Z7XK2; ELB listener: LAMPstack-Elastic-LK5MUGT7Z7XK2:HTTP:80;
Volumes: vol-c2502ec3, vol-507c0255; Auto Scaling Group:
LAMPstack-VPC-PubPrivSubnet-RDS-RetailPortal-WebServerGroup-AYPR44QQUQ9B
LAMPstack-VPC-PubPrivSubnet-RetailPortal — VPC: vpc-eb46868e; Subnet: subnet-729e6305;
VM: ip-10-0-0-254; Volume: vol-28017f2d
POS-requiresVPC — VPC: vpc-9e4a8afb; Subnets: subnet-3c59447a, subnet-189a676f,
subnet-928c53f7; VMs: ip-10-0-2-56, ip-10-0-1-104; ELB: POS-requi-ElasticL-1UYP73LCE5UQ6;
ELB listener: POS-requi-ElasticL-1UYP73LCE5UQ6:HTTP:80; Volumes: vol-9c453b9d, vol-ba750bbf;
Auto Scaling Group: POS-requiresVPC-WebServerGroup-ZHW2UHSZ0OS
LAMPstack-VPC-PubPrivSubnet-RDS-PeopleSoft — VPC: vpc-3e47875b; Subnets: subnet-248f5041,
subnet-565c4110, subnet-9d9e63eas; VMs: ip-10-01-1-139, ip-10-0-2-188; ELB:
LAMPstack-ElasticL-19C6S0C8YIBNG; ELB listener: LAMPstack-ElasticL-19C6S0C8YIBNG:HTTP:80;
Volumes: vol-3fd333e, vol-647b0562; Auto Scaling Group:
LAMPstack-VPC-PubPrivSubnet-RDS-PeopleSoft-WebServerGroup-1KN737LA7PO1T
Username Group
• AWS Resources
3. Click Submit.
Note: When AWS credentials are configured on an external credential storage site, AWS
billing download is not supported. To download billing data, credentials must be configured on
an instance. AWS discovery, AWS EC2 life cycle operations, CloudFormation, and resource
optimization download will work with external credential storage.
Regions
AWS regions are the geographic locations of AWS datacenters, accessed through SOAP endpoints.
After you configure an AWS account in ServiceNow, you can create EC2 instances in any of these regions.
The AWS regions are:
• Asia Pacific (Singapore) Region
Note: You must have an AWS GovCloud account to access the AWS GovCloud (US) region.
You can update the list with any new regions that Amazon might add, or restore the list, as needed.
You do not need to update the list if the region in which you want to create VMs already appears in
the AWS Regions list.
To update the AWS Regions list, run Discovery for the AWS account.
Field Description
Token link close time: Date and time when the link to use the security token expires.
Token link open time: Date and time when the link to generate the security token opens.
Policy: AWS policy associated with the token.
Purpose: Description of the security token purpose.
Request item: Indicates whether the token is explicitly requested by a user or is generated by other
processes.
State: State of the user token link. It can be scheduled, active, or closed.
Token: AWS security token that is tied to the record.
User: User to assign the security token to.
5. Click Update.
2. Enter a unique and descriptive name for the new shared account and provide Amazon EC2
account number of the shared account.
3. Click Submit.
4. Repeat this procedure for additional shared accounts.
2. Click the Update Images related link to populate the EC2 Images related list with images provided by
Amazon.
AWS permissions
While configuring ServiceNow to connect to Amazon, you can supply credentials for a user. The user
permissions in AWS determine which AWS tasks the user can perform in the ServiceNow instance.
The Administrator role provides all privileges available in AWS. This includes access to every operation
that ServiceNow supports plus all of the features that ServiceNow does not use. Using the Administrator
role is a simple way to grant a ServiceNow instance full power.
It is possible to define permissions for a user that provide the ServiceNow instance enough access to
perform Discovery or Cloud Management operations without granting full Administrator privileges.
Discovery operations
AutoScaling DescribeAutoScalingGroups
DescribeLaunchConfigurations
CloudFormation DescribeStacks
GetTemplate
ListStackResources
ListStacks
EC2 DescribeAccountAttributes
DescribeAvailabilityZones
DescribeImages
DescribeInstanceStatus
DescribeInstances
DescribeKeyPairs
DescribeRegions
DescribeSecurityGroups
DescribeSnapshots
DescribeSubnets
DescribeVolumes
DescribeVpcs
Elastic Load Balancing DescribeLoadBalancers
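As an illustration, the Discovery permissions listed above could be granted with a standard IAM policy
similar to this sketch (the use of a single statement and the broad Resource value are assumptions; scope
the policy to your environment as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "cloudformation:DescribeStacks",
        "cloudformation:GetTemplate",
        "cloudformation:ListStackResources",
        "cloudformation:ListStacks",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeImages",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstances",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeRegions",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSnapshots",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:DescribeLoadBalancers"
      ],
      "Resource": "*"
    }
  ]
}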
Cloud Management operations
CloudFormation (when using templates, include permissions for the operations required within the
templates for the specific resources and services contained):
CreateStack
DeleteStack
DescribeStacks
GetTemplate
ListStackResources
ListStacks
UpdateStack
ValidateTemplate
CloudWatch GetMetricStatistics
EC2 AttachVolume
CreateKeyPair
CreateSnapshot
CreateTags
DeleteSnapshot
DeleteTags
DescribeImages
DescribeInstanceStatus
DescribeInstances
DescribeKeyPairs
DescribeSecurityGroups
DescribeSnapshots
DescribeSubnets
DescribeTags
DescribeVolumeStatus
DescribeVolumes
DescribeVpcs
DetachVolume
RebootInstances
RunInstances
StartInstances
StopInstances
TerminateInstances
S3 DeleteObject
GetObject
ListBucket
PutObject
GetBucketLocation
Elastic Load Balancing DescribeLoadBalancers
SNS ConfirmSubscription
CreateTopic
Subscribe
STS GetFederationToken
Note: Running multiple scheduled jobs at one time can cause significant performance degradation.
When possible, schedule cloud discovery, billing data download, and resource optimization jobs
to run a few hours apart instead of overlapping. To view scheduled jobs, navigate to System
Definition > Scheduled Jobs.
1. Navigate to the account to discover, using either Amazon AWS Cloud > Configuration > Accounts or
AWS Discovery > Accounts.
2. Select the account to discover and click the Create Discovery Schedule related link.
3. On the Discovery Schedule page, click the Discover now related link.
The system performs the Discovery process and lists the results in the Discovery Status list.
Discovered resources are listed on the Account page, grouped by type on separate tabs.
The current state of the resource, if applicable, is updated accordingly (for example, EC2 virtual
machine instances, AWS VPCs, AWS subnets, etc.).
Role required:
• Users with the aws_admin role can configure, execute and view billing reports.
• Users with the aws_user role can only view reports.
AWS Billing offers a set of reports based on your hourly AWS usage. You can view charts of the data
on the Cost reports page. The report details list the tags associated with the resources. You can
categorize VMs by tagging the instances.
Note: To receive the reports, you must have an Amazon S3 bucket available in your AWS
account.
The generated billing file must be in the root directory of the bucket that is defined on the report
schedule, in this format:
<your-acct-num>-aws-billing-detailed-line-items-with-resources-and-tagsXXX
If this file is not available when the report is run, a NoSuchKey error may occur.
Note: If a billing report job is deleted, all associated scheduled jobs are deleted.
Field Description
Job Name Name of the scheduled job that will run the report.
AWS Account Name of the AWS account to run the billing report on.
Credential AWS account credentials.
Bucket The S3 bucket of the AWS account that contains the data.
Scheduler [read-only] Name of the script that generates the report.
Last Execution [read-only] Completion timestamp of the most recent run.
3. Click Submit.
Note: Ensure that the basic configuration tasks are complete before attempting to add a price
structure to your virtual resources.
Pricing can be used to integrate Cloud Management with asset management model categories. The
catalog price is calculated by the system as multiples of a unit price. Change the price per unit for EC2
images in the price factor record. This price is used to calculate the price for each EC2 image provisioned
from a specific instance.
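As a worked example with illustrative numbers only: if the price per unit in the price factor record is $0.10 per hour and an EC2 image is defined as 8 units, the calculated catalog price for a VM provisioned from that image is 8 x $0.10 = $0.80 per hour.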
Note: AWS Config is region-specific, therefore you must create one SNS topic per region of
interest.
SNS notifications for all supported AWS resource types are processed. For supported AWS resource
types, go to AWS Documentation and navigate to Management Tools > AWS Config > Developer
Guide > What Is AWS Config > Supported Resources, Configuration Items, and Relationships.
The ServiceNow instance parses the SNS notification and updates the CMDB.
1. Once an SNS Topic is created on the AWS console, create a new subscription for it.
a) Select the Topic ARN from the topic that you created.
The Amazon Resource Name (ARN) is necessary for binding an AWS Config SNS to the CI.
b) Set the Protocol to https.
c) Set the Endpoint to: https://<user_id>:<password>@<instance.domain>/aws_evt_mgmt_proc.do,
where user_id and password are the user ID and password of the user with the
aws_integration role, and instance.domain is the domain of the ServiceNow instance.
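For example, with purely hypothetical values (an integration user named aws.int on an instance at acme.service-now.com), the endpoint would look like:
https://aws.int:MyPassw0rd@acme.service-now.com/aws_evt_mgmt_proc.do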
2. Wait until the subscription goes from Pending to Confirmed and the subscription ARN is populated.
3. A user with the admin role can add the applicable property for the desired feature to the
sys_properties table of the ServiceNow instance.
The AWS Config processor processes three types of messages:
• ConfigurationItemChangeNotification
• ConfigurationHistoryDeliveryCompleted: You can enable this property by setting
itom.aws.processConfigHistory to true.
• ConfigurationSnapshotDeliveryCompleted: You can enable this property by setting
itom.aws.processConfigSnapshot to true.
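As a sketch, enabling history processing means adding a record to the sys_properties table with values like the following; the property name comes from the list above, and the true|false type is the standard choice for boolean properties:
Name: itom.aws.processConfigHistory
Type: true | false
Value: true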
Property Description
itom.aws.logEvent When this property is enabled, SNS responses sent by AWS can be viewed in the aws_sns_event table. Default is off.
For instructions on how to create your Active Directory application, go to the Microsoft Azure website and
search for the document Create Active Directory application and service principal using portal. Make
a note of the Azure Tenant ID, Client ID, and Key for use later in the configuration.
Note: You must grant the service principal the Contributor role for each subscription that you
want to manage.
Roles
Cloud administrators are members of the Virtual Provisioning Cloud Administrators group. The Virtual
Provisioning Cloud Administrators group has or inherits these roles:
• itil
• cloud_admin
• cloud_operator
• cloud_user
Field Value
Name Enter the name of the service principal to register with the ServiceNow instance.
Tenant ID and Client ID Copy/paste the values that you obtained from the Azure portal.
Authentication method Select Client secret.
Secret key Paste the secret key that was generated while creating the Azure Service Principal. This field appears when Authentication method is Client secret.
Note: This discovery process discovers only which subscriptions the Service Principal has
access to. The process does not discover other resources. You schedule resource Discovery
in a later step.
Note: Running multiple scheduled jobs at one time can cause significant performance degradation.
When possible, schedule cloud discovery, billing data download, and resource optimization jobs to
run a few hours apart instead of overlapping.
Note: Do not modify the values in the remainder of the fields. These values are populated
automatically during discovery.
Field Value
Name Name of the Azure subscription that uses the Discovery schedule that is defined on this page.
Discover Type of resources to discover.
MID server MID server that will manage the Discovery process.
Active Check box to activate this schedule.
Max run time Overall time period over which the schedule applies. For example, enter 365 days if you want this schedule to be used for one year. Specify a period less than a day using hh:mm:ss format.
Run The options are described in the following step.
5. Specify the Run option.
6. Click Submit.
Discovery runs and populates your CMDB (configuration management database) with resources and
with images that you can later approve to offer to cloud users in the catalog. The current state of each
resource, if applicable, is updated accordingly (for example, Azure virtual machine instances, Azure
virtual networks, Azure subnets, etc.).
Azure subscriptions
Cloud admins use the Accounts module to specify which subscriptions to manage, and to configure the
Discovery schedule for subscriptions.
Cloud admins use the Accounts (Subscriptions) module to:
• View the subscriptions that the service principal can access.
• Specify which subscriptions you, the cloud_admin, want to administer.
• Configure the ServiceNow Discovery schedule that maintains the database of Azure resources in the
subscription.
3. Click Submit.
Every resource state and configuration change to the Azure subscription is now captured and processed in
ServiceNow®. Azure sends an alert to the ServiceNow instance where you have configured Azure alert.
If a new virtual machine is created in the Azure subscription directly on the Azure portal, Azure sends
an alert about the change, and Cloud Management processes the event and creates a new entry for the
virtual machine in the ServiceNow database. The benefit of this is that you do not have to wait for
Discovery to run before the new instance entry is created in the ServiceNow database.
If you want the events to be logged in the Azure alerts events table, then navigate to Microsoft Azure
Cloud > Properties and select the Yes checkbox under Log events from Azure alert. By default, it is set
to No.
Note:
Azure Alert API is a preview feature and contains the following limitations:
• There is a fifteen-minute time delay in event generation.
• Basic authentication is not supported. You need to disable authentication to be able to use the
API.
3. Click Submit.
Tables
Workflow activities
The following workflow activities are installed with the Orchestration - VMware Support plugin. For
additional details about these activities, see Orchestration VMware Activities.
Change State (global) Sends commands to vCenter to control the state of a given VMware virtual machine (powers the VM on or off).
Check VM Alive (global) Uses the VMware API to determine if a newly configured VM is alive. The virtual machine is alive if its state is powered on and if it uses the same IP address with which it was configured.
Clone (global) Sends commands to vCenter to clone a given VMware virtual machine.
Configure Linux (global) Sends commands to vCenter to set the identity and network information on a given VMware virtual Linux machine.
Configure Windows (global) Sends commands to vCenter to set the identity and network information on a given VMware virtual Windows machine.
Workflows
Name Description
VMware - Provision Main workflow that clones, configures, and then powers on the virtual machine. This workflow uses the VMware - Wait for VM to start subflow, and closes the task as complete or incomplete. Requesters receive an email if the provisioning was successful, and the Virtual Provisioning Cloud Operators group receives an email if the provisioning fails.
VMware - Wait for VM to start Subflow that queries the virtual machine using VMware Tools for its state and IP address to determine if it has powered up after its configuration changes.
3. Click Submit.
For instructions on activating the Orchestration - VMware Support plugin, which requires a separate
subscription, see Configuring VMware Cloud on page 1098. Activating this plugin also activates the
Orchestration and Web Service Consumer plugins if they are not already active.
Install a ServiceNow MID Server on a suitable machine in your network and configure it to communicate
with the instance. Ensure that the MID Server version is compatible with the ServiceNow instance version.
• MID Server system requirements
• MID Server installation
• Configure MID Server credentials from the credentials table
Install vCenter
Install the vCenter management application from VMware. Create the Windows and Linux templates on
your ESX Server that the system can use to create virtual machines from service catalog requests. Refer
to VMware product documentation for vCenter and the ESX Server for details about these procedures.
Configure the MID Server to access your ESX Server
Install a MID Server on the same network as the ESX Server.
Role required: admin
1. Navigate to Orchestration > MID Server Properties.
2. Set the Default MID Server to use for Orchestration Activities to the new MID Server.
3. Navigate to MID Server > IP Ranges.
4. Click New.
5. Enter a Name for the IP range. Enter the IP Range that contains your ESX Server.
6. Click Submit.
7. Return to the list of MID Servers.
8. Select the MID Server you just installed.
vCenter is the VMware management console that manages the activities of ESX Servers. ESX Servers
contain the virtual server templates and hosts running virtual machines. Refer to VMware product
documentation for instructions about installing and configuring vCenter and ESX Servers. Observe the
following requirements when setting up the system to interact with vCenter and the ESX Server:
• Ensure that all VMware products that are advertised in the service catalog have corresponding
templates on the ESX Server. The names of the templates on the ESX Server should be descriptive
enough to simplify selection during the manual phase of provisioning.
• The MID Server probe's user must log in to vCenter with the proper VMware role to execute the probe’s
action.
• The ServiceNow instance supports Cloud Provisioning on vCenter versions 4.1 through 6.0. Using
Cloud Management with other versions of vCenter may cause unexpected results.
Full access
It is possible to define a role that provides the ServiceNow instance enough access to perform all supported
operations without granting full Administrator privileges. This role should include the following permissions:
vCenter Permissions
With this role, ServiceNow users can run Discovery, view all resources, perform all operations (Start, Stop,
Pause, Snapshot, Terminate, VM Modifications), and provision new VMs (including guest customization).
Read-only user
The "Read-only" role allows a user limited read access to the system without any other privileges. The role
allows ServiceNow users to run Discovery and view resources.
The role does not have permission to provision new VMs or to run any VM operations.
Several templates are available by default with Cloud Management. Discovering vCenter makes the
templates available for configuration as catalog items. To create additional virtual machine templates,
contact your vCenter administrator.
1. Navigate to VMware Cloud > vCenter and click New.
2. Enter the IP address of your vCenter instance in the URL field. The URL is filled in after discovery.
Note: The Create Discovery Schedule link is only visible when a valid IP address is entered.
Note: Running multiple scheduled jobs at one time can cause significant performance
degradation. When possible, schedule cloud discovery, billing data download, and resource
optimization jobs to run a few hours apart instead of overlapping. To view scheduled jobs,
navigate to System Definition > Scheduled Jobs.
All required tasks within Cloud Management are performed by members of these groups:
• Virtual Provisioning Cloud Administrator: Cloud administrators own the cloud provisioning environment
and are responsible for configuring the virtualization providers that are used by Cloud Management.
Cloud administrators can create service catalog items, approve requests for virtual machines, and
monitor the Cloud Management environment using the Cloud Admin Portal.
• Virtual Provisioning Cloud Approver: Cloud Approvers can approve or reject requests for virtual
resources. Approvers have no technical responsibilities.
• Virtual Provisioning Cloud Operator: Cloud operators fulfill provisioning requests from users. Cloud
operators perform the day-to-day work of Cloud Management by completing tasks that appear in the
Cloud Operations Portal. Cloud operators are assigned to specific Cloud Management Providers and
must be technically adept with the providers they support.
• Virtual Provisioning Cloud User: Cloud users can request virtual machines from the service and use the
My Virtual Assets portal to manage any virtual machines that are assigned to them.
VMware Cloud
The VMware Cloud application integrates with VMware vCenter to expose virtual machine and
infrastructure management to ServiceNow users. The application enables users to request VMware virtual
resources through the service catalog and provides flexible reporting for VMware admins.
Orchestration integrates with the vCenter API and adds VMware workflow activities to the existing
Workflow application. The activities enable Orchestration to clone new VMs from templates, configure
VMs, and power VMs on and off.
VMware modules appear in the ServiceNow navigator under VMware Cloud and Administration >
VMware Cloud. When a user requests a virtual server, Orchestration executes preconfigured approval and
provisioning tasks. If the request is approved, Orchestration automatically creates a virtual server from a
stored template, configures the virtual machine, and then starts the server.
VMware Cloud is a feature of Orchestration and is available as a separate subscription from the rest of
the ServiceNow platform. For information on purchasing a subscription, see Configuring VMware Cloud on
page 1098.
The Virtual Provisioning Cloud Administrators group has or inherits these roles:
• itil
• cloud_admin
• cloud_operator
• cloud_user
Note:
If you are upgrading from Helsinki:
If you are activating Cloud plugins for the first time after upgrading from Helsinki, then you may
see a completely blank page on both the Cloud Operations portal as well as the Cloud Resources
portal. To resolve this, please contact ServiceNow Technical Support and request that the
glide.cms.enable.responsive_grid_layout system property be enabled. This step is not
necessary if you already had Cloud plugins activated before moving to the Istanbul release.
To customize the automated naming scheme used for VMs created from this image, see Create a custom
VM naming script on page 1255.
Important: Ensure that users are assigned to the following groups before offering Amazon VMs in
the service catalog.
• EC2 Approvers: Approve or reject all instances requested through the service catalog for provisioning.
• EC2 Operators: Select the Amazon EC2 account, region, and image to fulfill approved requests from
the service catalog.
Table Description
EC2 Approved Image [ec2_approved_images] Contains only the EC2 images approved for provisioning through the service catalog. This table extends the Virtual Machine Template [cmdb_ci_vm_template] table.
EC2 Image [ec2_image] Contains all the EC2 images in the system, including approved images. This table extends the Virtual Machine Template [cmdb_ci_vm_template] table.
EC2 Size [ec2_size] Contains all the EC2 size (type) definitions. This table extends the VM Size Definition [vm_size] table.
6. To override the price calculation manually, clear the Calculate check box and enter a new price for
this item.
7. Clear the Skip approval check box to require an approval for this item from a user in the EC2
Approvers group.
By default the system skips the approval process. Skipping this approval does not affect other
approvals that may apply to requests through the service catalog.
8. Select the appropriate Task automation mode:
9. Click Update.
Basic service catalog offerings Use ServiceNow presets to test cloud provisioning in your environment and to determine how you want to customize service catalog offerings.
Custom service catalog offerings Build on the basic configuration by adding features to the provisioning workflow. Give users more choices and apply prices to your catalog items.
Advanced service catalog offerings Customize your catalog offerings, allowing users to request special configurations or allowing provisioners to skip approvals and automate provisioning tasks.
Configure Amazon basic service catalog offerings
Link offerings associated with Amazon EC2 images to the default Amazon EC2 Instance item in the service
catalog.
Users requesting virtual machines from this source can specify the offering, the number of virtual machines
to create, and the size (provided by Amazon). Perform the following steps in the order listed.
1. Link the approved EC2 images to the service catalog offerings (operating systems).
2. Open the service catalog and click the Amazon VM link under Virtual Resources to request a virtual
machine.
3. Specify your parameters.
• Offering: The operating system you want.
• Number of instances: The number of instances to provision.
• Size: The predefined EC2 size.
When you order the virtual machine from the basic configuration, the system creates manual approvals
and provisioning tasks.
Configure Amazon advanced service catalog offerings
The cloud administrator builds on the basic catalog offerings configuration and the custom catalog offerings
configuration to provide advanced customization to the service catalog, and to the offerings.
Role required: cloud_admin, aws_admin
The cloud administrator creates the catalog items and configures them for approval and task automation.
Ensure that the basic and custom catalog offerings configurations are complete before attempting to add
customizations.
1. Create catalog items from approved EC2 images.
These are the images that are available to users in the service catalog.
2. Edit the lease duration and grace period.
The default length of a lease in the base system is 60 days. The maximum lease duration permitted is
90 days.
3. Edit the checkout redirect property.
This property redirects users either to the Order Status form or the My Virtual Assets portal after they
check out.
4. Configure the system to create change requests for specific modifications requested for EC2 virtual
machines.
Examples of modifications that generate change requests are lease extensions, requests for additional
memory, and requests to terminate a virtual machine.
Categories
See Service Catalog Categories for instructions on creating categories and sub-categories for catalog
items. The category hierarchy determines the category path for locating EC2 virtual machines in the
service catalog.
The One-step checkout redirect property (glide.vm.checkout_redirect) controls the view presented
to virtual machine requesters in the service catalog. By default, this property is set to false, which redirects
the view to the Order Status form when the requester clicks Order Now. When this property is set to
true, the system redirects the requester to the My Virtual Assets portal. This property is located in Cloud
Management > Administration > Properties.
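For example, to send requesters to the My Virtual Assets portal immediately after checkout, set glide.vm.checkout_redirect to true; leaving it at the default false keeps the Order Status view.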
Size definitions
The system includes a full range of current Amazon EC2 sizes. The prices shown for these sizes are
arbitrary and do not reflect any realistic price structure. These prices are calculated from the price per unit
defined in the EC2 Element Price record.
Some examples of the EC2 sizes are:
• M3 Double Extra Large
• M2 High-Memory Extra Large
• M2 High-Memory Double Extra Large
• M2 High-Memory Quadruple Extra Large
• C1 High-CPU Medium
• C1 High-CPU Extra Large
4. Click Submit.
Caution: Do not delete the Price factor record. This action sets all EC2 image prices to zero.
Example
The default name of a service catalog item is EC2 Instance - T1 Micro - ami-vpc-nat-1.0.0-beta.i386-ebs.
You change the size to M1 Medium without modifying the Name field. The system changes the name to
EC2 Instance - M1 Medium - ami-vpc-nat-1.0.0-beta.i386-ebs. If you edit the Name field before changing
the size, the item name is not changed.
Each of the four pre-populated fields behaves the same and is independent of the others. For example, if
you change the name of the item but not the short description before selecting a new size, the name value
is unaffected, but the short description adjusts to match the requirements of the new size.
When you launch a VM in a VPC, up to five security groups can be assigned to the instance. Security
groups act at the instance level, not the subnet level. Each instance in a subnet in your VPC, therefore,
could be assigned to a different set of security groups. When provisioned, an instance is automatically
assigned to the default security group for the VPC.
Each security group has a set of rules that control the inbound traffic to instances, and a separate set of
rules that control the outbound traffic.
1. Navigate to Amazon AWS Cloud > Managed Resources > EBS Volumes.
2. Click a volume.
CloudFormation
Amazon Web Services (AWS) CloudFormation enables developers and systems administrators to create
and manage a collection of related AWS resources, and provision and update them in an orderly and
predictable fashion.
AWS CloudFormation enables you to use a template file to create and delete a collection of resources
together as a single unit called a stack. With CloudFormation, you no longer need to figure out the order for
provisioning AWS services or the details of making those dependencies work. After the AWS resources are
deployed, you can modify and update them in a controlled and predictable way. This allows you to apply
version control to your AWS infrastructure the same way you do with your software.
To provision an AWS CloudFormation stack, admins perform the following tasks:
• Configure an AWS Account
• Define a CloudFormation template
• Create a Catalog Item for the template
AWS CloudFormation requires you to activate the Amazon Web Services plugin.
Create a CloudFormation template
You can define a template that describes the AWS resources and any associated dependencies or
runtime parameters required to run your application.
Role required: cloud_admin or admin
1. Navigate to Amazon AWS Cloud > CloudFormation > Templates.
2. Click New.
3. Enter a Name and Description for the template.
4. CloudFormation template parameters enable you to pass values into the template at stack-creation
time. Parameters let you create templates that can be customized for each stack deployment. To use
a JSON template that contains parameters to include in the template, click Use JSON and then enter
the template code in the text box.
5. Optional. Enter the URL of an existing template in the Amazon S3 Template URL text box. The URL
should point to an S3 bucket containing a CloudFormation template in JSON format.
6. Click Submit.
CloudFormation templates
You can create CloudFormation templates or use an existing Amazon template to describe the AWS
resources and any associated dependencies or runtime parameters required to run your application.
You can add a template that you define as a catalog item in the Service Catalog. The system can then use
the template to create and launch stacks.
In addition, you can configure CloudFormation template parameters that enable you to pass values into the
template at stack-creation time.
Configure parameters for a CloudFormation template
Parameters let you create templates that can be customized for each stack deployment. When creating a
stack from a template that contains parameters, you can specify values for the parameters.
Role required: cloud_admin or admin
If you specified a JSON template that contains parameters, the template parameters appear on the
CloudFormation Template Parameters tab and the variable set appears in the CloudFormation
Variable Set for Template Parameter tab.
The following example shows the KeyPairName and InstanceType parameters:
"Parameters" : {
"KeyPairName": {
"Description" : "Name of an existing Amazon EC2 key pair for RDP
access",
"Type": "String",
},
"InstanceType": {
"Description" : "Amazon EC2 instance type",
"Type": "String",
}
}
Each parameter name must be alphanumeric and unique among the parameters in the template. You
can declare a parameter as any of the following data types:
Note: An error message appears if you attempt to submit a template with an invalid parameter
type.
Note: While a CloudFormation stack is performing an update request, the NoEcho field for a
parameter can only be modified through the template definition.
Define the NoEcho parameter by setting NoEcho to true in a stack template parameter definition.
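For example, a password-style parameter might be declared as follows; the parameter name DBPassword is illustrative:
"DBPassword": {
  "Description": "Database administrator password",
  "Type": "String",
  "NoEcho": true
}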
Table 483: Tables used to manage catalog items from CloudFormation templates
Table Description
Virtual Machine Template [cmdb_ci_vm_template] Contains virtual machine templates from Amazon EC2 and VMware.
EC2 Approved Image [ec2_approved_images] Contains only the EC2 images approved for provisioning through the service catalog. This table extends the Virtual Machine Template [cmdb_ci_vm_template] table.
EC2 Image [ec2_image] Contains all the EC2 images in the system, including approved images. This table extends the Virtual Machine Template [cmdb_ci_vm_template] table.
EC2 Size [ec2_size] Contains all the EC2 size (type) definitions. This table extends the VM Size Definition [vm_size] table.
Perform the following operations to prepare to create a catalog item from a CloudFormation template:
• Configure the categories and sub-categories for catalog items. The category hierarchy determines the
category path for locating CloudFormation templates in the service catalog.
• Determine the appropriate setting for the One-step checkout redirect
(glide.vm.checkout_redirect) property.
• Configure the CloudFormation templates for catalog items.
Note: Values in the pre-populated fields are based on the CloudFormation template that you
select. If you change the template, but no other values, the pre-populated data updates to
the parameters on the new template. However, if you change a value in a pre-populated field
before changing the template, the field value you changed is not updated when the template
changes. This behavior allows you to change the price, name, or description of stacks.
Note: To view current catalog items that were created based on the current template, click the
View Catalog Items related link.
The template contributes the Template Parameters variable set, which is unique for each template and
contains the variables that were generated from the template parameters.
1. Navigate to Amazon AWS Cloud > Maintain Catalog Items > CloudFormation Items and select the
template.
2. Click the Refresh Variables from Parameters related link.
If the variables do not exist, the system generates them. If the variables already exist, the system refreshes
them with the updated attribute settings.
Verify the path to a CloudFormation catalog item
After you create a catalog item and category path, verify the configuration in the Service Catalog.
3. Click Update.
Navigate to Amazon AWS Cloud > Maintain Catalog Items > CloudFormation Items.
Note: These are the instructions for configuring the default settings for requested items. For
instructions on configuring lease start and end times for individual virtual machines, see Configure
the lease duration for a CloudFormation catalog item on page 1130.
Lease duration This property (glide.vm.lease_duration) controls the length of the lease period that is configured for all Amazon resource requests. The default duration is 60 days from the lease start time, which begins on the current date and time of the request. The actual time of the lease is calculated from the time the resource is provisioned, after any approvals.
Max lease duration This property (glide.vm.max_lease_duration) controls the maximum length of the lease period permitted for a resource. The default maximum duration is 90 days from the lease start time. This property prevents resources that have been ignored from running indefinitely.
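As an illustrative example: with the default 60-day lease duration, a resource whose lease starts on June 1 at 09:00 expires on July 31 at 09:00 unless the lease is extended, up to the 90-day maximum.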
1. Navigate to Amazon AWS Cloud > Administration > Account and select an account.
2. Click the Create Discovery Schedule related link to open the Discovery Schedule page.
3. Click the Discover Now related link to start the Discovery.
When Discovery finishes, the Account page lists all discovered Amazon resources. You can also click
Amazon AWS Cloud > CloudFormation > Stacks to display the resources.
"Parameters": {
"KeyName": {
"Description": "Describe key name",
"Type": "AWS::EC2::KeyPair::KeyName",
"ConstraintDescription": "must be the name of an existing EC2
KeyPair.",
"MinLength": 8,
"MaxLength": 35,
"AllowedPattern": "[0-9a-zA-Z]+"}
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"3088fe53-c967-4b4b-8d0d-fcdfb5ea7a23": {
"size": {
"width": 60,
"height": 60 },
"SNC::Parameter::Metadata": {
"KeyName":{
"ReferenceType":"AWS::EC2::KeyPair::KeyName"
}
}
}
}
To define a password parameter, use the NoEcho parameter. In this example, if the user sets the NoEcho
parameter to true, the ReferenceType is ignored and the corresponding catalog variable has the
Reenter text box on the catalog order page.
When the admin creates ReferenceType metadata, the following table is created to associate a
reference for the given type.
After approval, you can either create a catalog item from this image, or you can add this image to any
catalog offerings and then select it from the Azure VM Instance -- Advanced catalog item.
To customize the automated naming scheme used for VMs created from this image, see Create a custom
VM naming script on page 1255.
Important: Ensure that users are assigned to the following groups before offering Azure VMs in
the service catalog.
• Azure Approvers: Approve or reject all instances requested through the service catalog for provisioning.
• Azure Operators: Select the Azure account, region, and image to fulfill approved requests from the
service catalog.
4. When you have configured all settings and the catalog item is ready for consumption, click Publish.
5. You can verify that the item behaves as you expect through the ordering and provisioning processes.
Click Try It to go through the process of configuring, ordering, and then validating the VM.
Cloud users can request the item using the Virtual Assets portal.
Duplicating the Azure VM Instance - Advanced catalog item can eliminate many of these steps, since the
offerings are already pre-configured with certain key provisioning information, such as subscription and region.
1. Navigate to Maintain Catalog Items > ARM Template Items.
2. Click on the Azure VM Instance - Advanced item.
3. From the "Related Links" section, select Duplicate Catalog Item.
4. On the new form, modify the Name and specify the following settings as desired:
5. In addition to these entries, you can opt to pre-configure additional variables that aid in the
provisioning of requested virtual machines. These variables are available from the Variable Sets tab,
located below the Related Links section. There are seven Variable Sets available for configuring:
Note: Changes made by the cloud admin to the last two Variable Sets (Azure Single VM
Variables, Single Advanced Template Parameter) apply to ONLY the particular catalog item
from which the change was made. Changes made to all other Variable Sets apply to all catalog
items made from the same VM template.
6. When you have configured all settings and the catalog item is ready for consumption, click Publish.
7. You can verify that the item behaves as you expect through the ordering and provisioning processes.
Click Try It to go through the process of configuring, ordering, and then validating the VM.
Cloud users can order the item from their Cloud Resources page.
Note: For information on authoring an ARM template, see the Azure documentation at
https://azure.microsoft.com and search for Azure Resource Manager Templates.
Once the ARM template is created, the system converts the parameter settings to template parameters, as
listed in the ARM Template Parameters list.
• You can further configure each template parameter with additional attributes. For example, you can
specify the type for the parameter (string, Boolean, array, table reference, and so on).
• After all parameters are fully defined, you can apply the parameter settings to generate the variables
that will appear to requesters in catalog items.
Field Value
Name Name of the parameter. For existing parameters, the name is already specified. For new parameters, the name must be alphanumeric and unique among the parameters in the template.
Template Source template for the parameter that defines this variable.
Description Description of the variable that gets generated from this parameter.
4. Specify the Type of the variable that the system is to generate from this parameter.
If the variable offers the values in an existing table as choices, select a Type of String and then:
1. Specify a System type of Reference.
2. Select the Table and the Column that holds the allowed values.
5. Specify the attributes of the variable that the system is to generate from the parameter.
Now that the parameter is fully defined, you can use the parameter settings to generate the variables
that appear in catalog items for requesters of virtual resources.
6. Repeat the process for all parameters in the template.
The ServiceNow instance adds the following variable sets to each catalog item:
• Terms and conditions
• VM Provisioning variables (includes Assignment group, Business purpose, Lease start/end, CMDB
relationship tags, Application, and Business service)
• Cloud auto-tagging variables (includes Cost Center and Project by default)
1. Navigate to Microsoft Azure Cloud > ARM Templates and select the template.
2. Click the Refresh Variables from Parameters related link.
If the variables do not exist, the system generates them. If the variables already exist, the system refreshes
them with the updated attribute settings.
Create an Azure catalog item based on an ARM template
After you are satisfied with the settings and the generated variable set for an ARM template, you can use
the template to generate a resource group catalog item.
Role required: cloud_admin
1. Navigate to Microsoft Azure Cloud > Maintain Catalog Items > ARM Template Items.
2. In the Azure ARM Templates list, click the template. On the form, verify the Name and content for the
item and then click the Create Catalog Item related link. If needed, specify or update the following
settings:
Allow deployment to existing group Allows the user to choose to deploy into an existing Resource Group. If left unchecked, then a new Resource Group is required.
3. When you have configured all settings and the catalog item is ready for use by Cloud users, click
Publish.
The system adds the ARM template item to the catalog.
"parameters": {
"vnetName": {
"type": "string",
"defaultValue": "testdummyvnet",
"minLength": 9,
"maxLength": 39,
"metadata": {
"description":"...",
"SNC::Parameter::Metadata": {
"constraintDescription":"Please provide a valid vnetName",
"referenceType":"Microsoft.Network/virtualNetworks",
"allowedPattern":"[0-9a-zA-Z]+"}
}
},
"password": {
"type": "securestring",
"defaultValue": "testfordummy",
"minLength": 8,
"maxLength": 25,
"metadata": {
"description":"...",
"SNC::Parameter::Metadata": {
"constraintDescription":"Please provide a valid password",
"allowedPattern":"[0-9a-zA-Z]+"}
}
},
"storagedisksize": {
"type": "int",
"defaultValue": "testfordisksize",
"minValue": 3,
"maxValue": 9,
"metadata": {
"description":"...",
"SNC::Parameter::Metadata": {
"constraintDescription":"Please provide a valid disk size",
"referenceType":"storage:disksize",
"allowedPattern":"[0-9]+"}
}
},
"virtualnetworkname": {
"type": "array",
"defaultValue": "testfornetworkname",
"minLength": 1,
"maxLength": 10,
"metadata": {
"description":"...",
"SNC::Parameter::Metadata": {
"referenceType":"Microsoft.Network/virtualNetworks",
"allowedPattern":"[0-9a-zA-Z]+"}
}
}
}
To define a password parameter, use the secureString parameter. In this example, if the admin sets the
type as secureString, then referenceType is ignored and the corresponding catalog variable has
the Reenter text box on the catalog order page.
When the admin creates referenceType metadata, the following table is created to associate a
reference for the given type.
Name Description
To reference an Azure image using referenceType metadata, the publisher, offer, and SKU for that
image must always be specified. If the version is not also specified, then the last known version is used.
Cloud admins can employ these templates to create an image variable containing these parameters. This
variable automatically populates the provided information for a requested VM, thus simplifying the ordering
process for a cloud user.
The following example defines image referenceType metadata in an ARM template and creates
a group variable called image1.
"imageOffer": {
"type": "string",
"metadata": {
"description": "image offer",
"SNC::Parameter::Metadata": {
"groupKey":"image1",
"referenceType" :"Microsoft.Compute/Images::Offer" }
}
},
"imagePublisher": {
"type": "string",
"metadata": {
"description": "image publisher",
"SNC::Parameter::Metadata": {
"groupKey":"image1",
"referenceType" :"Microsoft.Compute/Images::Publisher" }
}
},
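Since the publisher, offer, and SKU must all be specified for an image reference, a third parameter for the SKU would presumably follow the same pattern. This is a sketch only; the exact ::Sku token is an assumption extrapolated from the Offer and Publisher examples above:
"imageSku": {
  "type": "string",
  "metadata": {
    "description": "image SKU",
    "SNC::Parameter::Metadata": {
      "groupKey": "image1",
      "referenceType": "Microsoft.Compute/Images::Sku"
    }
  }
}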
Metadata for the image resource type must contain both referenceType and groupKey to use an image
reference. Parameters with a groupKey are hidden on the catalog order page. If the parameter metadata
contains only referenceType, then a group variable is not created and the variable shows on the catalog
order page. Removing all parameters with the groupKey metadata removes the group variable.
If you encounter a template validation error, then make sure that:
• The groupKey value differs from the name of its parameter. These two values must not be identical.
• The prefix used in referenceType (the part before the ::) is the same for all parameters that have
the same groupKey value.
Note: If your organization does not use ServiceNow Discovery, and you will run the Discover
vCenter Data utility to collect network data from vCenter, you must create the IP Pools for your
system before running the utility.
When you provision a new virtual machine (VM) through vCenter, select a VMware network (Select
Datacenter, Network, and Folder Orchestration activities). Orchestration assigns an available IP
address from the VMware network's IP pool to the new virtual machine. If the VMware network contains
multiple IP pools, Orchestration selects an IP address from the pool with the most available addresses.
1. Navigate to Configuration > VMware Cloud > Networks.
2. Select a network from the list.
3. In the IP Pools related list, click Edit to add an existing IP pool to the network.
5. Click Save.
The new IP pool appears in the related list in the VMware vCenter Network form.
Configure an IP pool
An administrator must ensure that there are IP pools associated with all active VMware networks, and that
the IP pools contain enough IP addresses to meet the demand for new VMs.
Role required: cloud_admin, vmware_operator, cloud_operator, admin
IP pools are collections of IP addresses that can be or have been assigned to newly-provisioned virtual
machines (VMs). Each IP pool can be associated with one or more VMware networks. When the Select IP
Address activity runs, it identifies the IP pools associated with the VMware network selected for the virtual
machine (generally by the Select Datacenter, Network, and Folder activity), chooses the one with the
most available IP addresses, and allocates an IP address to the virtual machine from that pool.
Associate VMware networks with an IP pool by editing the VMware Network related list on the IP Pool
form. When you provision a new virtual machine through vCenter, select a VMware network. Orchestration
assigns an available IP address from the VMware network's IP pool to the new virtual machine. If the
VMware network contains multiple IP pools, Orchestration selects an IP address from the pool with the
most available addresses.
Note: vCenter contains VMware networks that Discovery (including DiscoverNow) adds to the
CMDB as VMware CIs.
2. Open the VMware catalog item to customize and then select the Apply guest customization check
box.
3. For Customization Specification, select the specification that you created in the earlier steps.
Note: Alternatively, you can use a provisioning rule to select the customization specification.
4. Define the workflow logic that selects the IP address (the IP selection workflow).
The workflow that you create is called as a subflow from the VMware VM Provision workflow and
can take any inputs that can be passed from VMware VM Provision. For example, the subflow can
take the sys_id of the VMware network that was selected by provisioning rules. The subflow can also
take requester answers from request time, because VMware VM Provision takes as input the sys_id
of the request item record (sc_req_item), so you can look up the actual record and access its variables
object. The subflow must return a JSON string that represents an object with the following fields (a
sample return value follows this list):
• status: Success or failure.
• ip: String version of the IP address to assign to the VM.
• netmask: The netmask for the network that the IP is on.
• dns: The IP address of the DNS server.
• dns2: Optional: The IP address of a secondary DNS server.
• gateway: The gateway for the network that the IP is on.
• error_message: Optional: To add a failure message, add a field called error_message with a string value that describes the failure.
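For instance, a successful return value might look like the following JSON string; all addresses are illustrative:
{
  "status": "success",
  "ip": "10.1.2.30",
  "netmask": "255.255.255.0",
  "dns": "10.1.2.2",
  "dns2": "10.1.2.3",
  "gateway": "10.1.2.1"
}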
6. Update an extension point with information on releasing the IP address when the VM is terminated.
In vm_extension_point, there is a key VMWPostProvision with a description of Set up ip. Update
the script to: current.ip_address = workflow.scratchpad.provisionResult.ip; The
base system script attaches the provisioned VM record to the record in the base system IP Pool table.
Because you are replacing the selection logic and not using the base system IP Pool table, you must
remove this logic, but the IP address must still be set on the VM instance (so the system knows
which IP address to release when the VM is terminated).
7. Define the logic that releases the IP address (the IP address release workflow).
The workflow that you create is called as a subflow from the VMware Termination workflow and can
take any inputs that can be passed from VMware Termination. The return should be a JSON string
with a key of status and a value of success or failure. For example, {"status":"success"} or
{"status":"failure"}.
8. Cause the VMware Termination workflow to call the IP release workflow: Modify the extension point in
vm_extension_point with key VMWareReleaseIPWFSelection to point at the IP address release logic
subflow that you created.
Other approvals that apply to requests through the service catalog are still required.
8. Select the appropriate Task automation mode:
9. Select Guest customization if any operating system-specific customizations will be applied to the
newly provisioned virtual machine. This option is only allowed for Linux and Windows virtual resources
and not for custom templates with other offerings.
10. Select a Customization specification that contains all the settings to be applied to the newly
provisioned virtual resource. This option is only available if Guest customization is selected.
If you change the image size, but no other values, the pre-populated data adjusts according to the
parameters of the new size. However, if you change a value in a pre-populated field before selecting
the new size, the field value you changed is not updated when the size changes. This behavior allows
you to change the price, name, or description of specific sized images.
Example:
The default name of a service catalog item is VMware Instance - Small - Red Hat 6 Server. You
change the size to Medium without modifying the Name field. The system changes the name to
VMware Instance - Medium - Red Hat 6 Server. If you change the Name field to Dev Red Hat 6 Server
- Small before changing the size, the name of the item is not changed.
Each of the four pre-populated fields behaves the same and is independent of the others. For
example, if you change the name of the item but not the short description before selecting a new size,
the name value is unaffected, but the short description adjusts to match the requirements of the new
size.
To customize the automated naming scheme used for VMs created from this image, see Create a custom
VM naming script on page 1255.
Types of catalog offerings for VMware
You can use several base configurations to create cloud resource offerings.
• Basic service catalog offerings: Use the base system presets to test cloud provisioning in your
environment and to determine how you want to customize service catalog offerings. This is the easiest
and quickest procedure for configuring Cloud Management.
• Custom service catalog offerings: Build on the basic configuration by adding features to the
provisioning workflow. Give users more choices and apply prices to your catalog items.
• Advanced service catalog offerings: Customize your catalog offerings, allowing users to request
special configurations, or to skip approvals during provisioning and automate provisioning tasks.
These catalog items minimize requester involvement by simplifying the process of selecting the appropriate
virtual server. An administrator can create virtual machines with different specifications that are built from
the same template and presented to users in the service catalog by size.
The following tables are used for this feature.
Table Description
Virtual Machine Template [cmdb_ci_vm_template] Contains virtual machine templates from Amazon EC2 and VMware.
VM Catalog Item [sc_vm_cat_item] Contains virtual machine catalog items created from VMware templates. This table extends the Catalog Item [sc_cat_item] table.
VMware Size Definition [vmware_size] Contains size definitions (specifications) for the VMware offerings. This is the parent table to the Catalog VM Class Selection [sc_vm_class_selection] table.
Field Description
Data disk not desired Select to opt out of an additional data disk.
Description Description for the size.
4. Click Submit.
You can customize service catalog hardware selections so that users can request the virtual server data
disk size.
Role required: cloud_admin, vmware_operator
The service catalog shows the data disk size option only when the user declines the choice of a predefined
virtual server class.
1. Navigate to VMware Cloud > Data Disk Size Selections and click New.
2. In the Data disk size label field, enter the name to be displayed in the service catalog.
This field typically includes a number and size abbreviation, such as 20GB.
3. In the Value field, enter the number of megabytes of disk space this option represents.
For example, a 20 GB data disk has a value of 20,480 (20 x 1,024 MB). The system calculates the
cost for the selected disk size based on the configured price per unit.
4. Click Submit.
Note: Complete the basic configuration before adding the customizations discussed here.
Note: Prices for these specifications are calculated automatically from amounts configured
in the Prices module.
2. Define prices for each hardware element: CPU, memory, and data disk size.
The catalog price is calculated by the system as multiples of a unit price. All prices on the
VMware Size Definition [vmware_size] table and the hardware selection tables for CPU,
memory, or data disk size are calculated from prices defined in the Catalog VM Element Price
[sc_vm_element_price] table. Pricing can be used to integrate Cloud Management with asset
management model categories. For instructions on setting prices for hardware elements, see Pricing.
3. Assign collections of IP addresses or IP pools to one or more VMware networks.
When a virtual machine is provisioned from that network, the workflow identifies the IP pool associated
with the network and selects an IP address from the pool for the newly-provisioned machine.
4. Create custom specifications for the catalog item, based on the different versions of Windows and
Linux operating systems that your organization offers.
This value appears in the Operating System choice list for the catalog item. The user's selection tells
the provisioning task which configuration information to use.
Note: Ensure that the basic and custom configurations are complete before attempting to add the
customizations provided by these procedures.
2. [Optional] Reconfigure the defaults for the duration of a virtual server lease in the appropriate
properties.
The default length of a lease in the base system is 60 days. The maximum lease duration permitted is
90 days.
3. Configure the provisioning rules.
Provisioning rules enable you to select which vCenter resources (datacenter, network, and folder) are
used to provision virtual machines for a specific category, such as Dev, QA, or Prod. Use provisioning
rules to organize multiple vCenters around the types of virtual machines they support.
4. Create catalog items for virtual servers from VMware templates and present the items to users in the
service catalog.
These catalog items minimize a requester's involvement by simplifying the process of selecting
the appropriate virtual server. A cloud administrator can create virtual machines with different
specifications that are built from the same template. Some of the options you can configure in the
VMware catalog item record are:
• Skip approval: Allow requests for this item to bypass an approval from the VMware Approvers
group.
• Task automation: Select the automation level for this item: Fully automatic, Semi automatic, or
Manual.
• Guest customization: Select the settings to be applied to the newly provisioned virtual resource
when guest customization is enabled.
5. [Optional] Configure the system to create change requests when users request modifications to their
VMware machines from the My Virtual Assets portal.
Examples of modifications that generate changes are lease extensions, requests for additional
memory, and requests to terminate a virtual machine.
4. Click Update.
The system recalculates the price for all items in the service catalog that use this Element type.
Guest customization
Guest customizations are operating system-specific customizations for virtual machines.
Configure a customization specification for VMware Linux VMs
Customization specifications for Linux VMs. Specify configuration settings that the system applies after a
user submits a request.
Role required: cloud_admin, vmware_operator
1. Navigate to VMware Cloud > Customization Specifications > Linux and click New.
2. Enter a unique and descriptive Name that includes the operating system.
The Name appears in the Operating System choice list for the catalog request item. The selection
tells the provisioning task which configuration information to use.
3. Enter the DNS name in the Domain field.
4. Choose whether to use IP Pool or DHCP in the Choose networking through field.
5. Click Submit.
By allocating reserved space when a VM is requested and accounting for recently provisioned space, the
user can be assured that sufficient space will still be available for provisioning when the request is fulfilled.
To accomplish this, space on the datastore is processed as follows:
• When a VM is requested, the amount of reserved space is incremented to account for the requested
disk size (template size plus additional disk size). If the reserve space request fails because of
insufficient space after discovery, a task to fix the issue is created for the cloud operator. If the VM
request is canceled, the reserved space is decremented to remove requested disk space.
• For automated provisioning, the datastore with the least disk space, but sufficient for the VM request, is
automatically selected.
• When the VM is provisioned, the reserved space is decremented to remove provisioned disk size, and
recently provisioned space is updated to provisioned disk size.
• Whenever a datastore is rediscovered and updated accordingly, the amount of recently provisioned
space is reset to 0. When a VM that was provisioned from a ServiceNow instance is terminated, the
provisioned space is released and the free space is incremented. If the workflow is canceled before the
VM is provisioned, the reserved space is updated to remove not provisioned disk size.
• When a VM is modified, the requested amount of space is reserved, and the reserved space field is
updated. When the Modify VM workflow finishes, the reserved space is updated to decrease the disk
size added and the recently provisioned space is increased. When the VM is terminated (Terminate
VM), the recently provisioned space in all affected datastores is released.
When a catalog task is created and a cloud operator chooses a datastore, only those datastores with
enough space to continue are shown in the choice list.
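As a worked example with illustrative numbers: if a VM request uses a 40 GB template plus a 20 GB additional data disk, reserved space on the selected datastore is incremented by 60 GB at request time. When the VM is provisioned, reserved space is decremented by 60 GB and recently provisioned space is incremented by 60 GB, until the next rediscovery resets recently provisioned space to 0.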
Set the minimum free disk space for VMware datastores
You can set the minimum free disk space on datastores to ensure that the vCenter functions correctly.
Role required: cloud_admin
Navigate to Cloud Management > Administration > Properties.
Property Description
Minimum free disk space on each VMware datastore (MB) [glide.vmware.provisioning.datastore.reserved_space] The amount of free disk space required on a datastore. The default value is 1024 MB.
Roles Required
Members of these groups can configure change control for virtual machine modifications:
• Virtual Provisioning Cloud Administrators
• Virtual Provisioning Cloud Operators
Tables
Change Condition [vm_instance_change_condition] Records that define the conditions for creating a change request.
VM Instance Action [vm_instance_action] Records that define the type of change (action) that requires an approval, such as additional CPUs or an increase in the data disk size.
Change conditions
The following change conditions in the base system require change approvals for actions performed on
production environments:
5. Click Update.
2. Click OK.
The system creates the change record with data from the original request and displays the record.
If the change request specifies an item that is a service catalog offering, such as a lease extension,
then a message appears at the top of the form containing a link to the request number. The requested
modification is noted in the Description field. The provisioning workflow begins but waits for change
request approval.
• If the requester clicks Proceed with Change, the provisioning workflow completes and makes the
requested modification.
• If the requester clicks Cancel Change, the workflow exits without making any modifications to the
virtual machine.
After either selection, the system returns the requester to the source portal from which the request was
made. The asset view shows the state of the virtual machine.
Required Roles
• System Administrator (admin role): Users with this role are permitted to add, edit, or delete SLAs and
OLAs.
• Virtual Provisioning Cloud Administrators group (cloud_admin role): Users in this group can view lists of
breached SLAs and access requested items in a breached state.
• Virtual Provisioning Cloud Operators group (cloud_operators role): Users in this group can view lists of
breached OLAs in the Cloud Operations Portal and access provisioning tasks in a breached state.
When a task is assigned to one of the operator groups, the system creates an OLA. If an error occurs
during provisioning, the system assigns a task to one of the operator groups and triggers an OLA.
Members of the Virtual Provisioning Cloud Operators group can view a list of breached OLAs in the Cloud
Operations Portal.
To change the default duration of an OLA:
1. Navigate to Service Level Management > SLA > SLA Definitions.
2. Open the OLA record appropriate for your virtualization product.
3. Change the value in the Duration field or select a preconfigured Duration type, and then click Update.
Figure 199: VM SLA Record
The system generates a unique Tag Set ID number and applies the ID to a resource when the resource
is provisioned. Along with the other tags for the resource, the system pushes the ID to the cloud provider
as the value for the Tag Set tag. Tag key/value mappings are stored in the instance as references as soon
as the CI is created (pre-provision). When pushed to the provider, they will use the display names of the
referenced values.
Tagging allows you to track resources in several dimensions:
• Ownership
• Cost management
• Business service/application relationships
• Audit and compliance
• Account/subscription proliferation monitoring
CloudFormation administrators can categorize and assign metadata to cloud resources by tagging them.
Caution: Do not select a cloud resource in a list and select Tag in the related list. When you
do so, the system generates a ServiceNow system tag instead of the desired cloud metadata
tag on the resource.
Field Value
Name Unique name for the tag.
Category • Standard: Provided with the base system and
determined during provisioning. By default,
includes Application, Cost Center, Project,
Service, User Name
• Other: Tags that you specify.
3. After you specify the Value type, you specify the value for the tag.
• This page is common to all types and allows a user to set values for any of the defined cloud
management tags in the “Standard” category.
4. Select the tags and enter the values. Submitting the page starts a job that will set the specified tag
values in ServiceNow for all resources selected.
5. For on-provider tagging, an entry must be added to the task_action_workflow table mapping the
"tag_resources" operation to the workflow for the relevant CI table. This is only supported for one type:
EC2 instances on AWS.
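A minimal sketch of creating that mapping from a background script follows. The column names (action, table, workflow) are assumptions; confirm them against the task_action_workflow dictionary on your instance before running it.

    // Hypothetical sketch: map the tag_resources action to a workflow for EC2 instances.
    var workflowSysId = '<sys_id of the tagging workflow>'; // placeholder
    var mapping = new GlideRecord('task_action_workflow');
    mapping.initialize();
    mapping.setValue('action', 'tag_resources');       // column name is an assumption
    mapping.setValue('table', 'cmdb_ci_ec2_instance'); // the one supported type
    mapping.setValue('workflow', workflowSysId);
    mapping.insert();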
Note: When you delete a job definition, all associated scheduled jobs are deleted.
1. For your provider (Amazon AWS Cloud or Microsoft Azure Cloud), navigate to Administration >
Report Schedules.
2. Select an existing job or click New and then specify the following values:
Field Value
Job name Name of the scheduled job that will download the
billing data to the ServiceNow database.
EA credential Select the appropriate Enterprise Agreement
credential.
3. Click Submit.
The job definition is complete. Jobs run on the specified schedule. You can update the scheduler
that controls how often the download repeats.
Note: By default, the job downloads billing and cost data for the current month from the provider.
To download data from another specified time period, modify the job scheduler before running the
job.
1. For your provider (Amazon AWS Cloud or Microsoft Azure Cloud), navigate to Administration >
Report Schedules.
2. Select a scheduled job.
3. Click Run now to download the most recent billing and cost data from the provider.
2. Configure the Lease is expiring notification time. Specify the number of days in the Time prior to
lease end to notify requester property (glide.vm.lease_end_notification).
The ServiceNow instance sends the following system notification to the user on the specified number
of days before lease end:
A configurable grace period enables an administrator to delay the termination of a virtual machine
when the lease end date expires. When the lease ends, the virtual machine is powered off, but is
available for use until the end of the grace period.
To change the default grace period of 7 days, navigate to Cloud Management > Administration >
Properties and edit the value in the Grace period after lease end until VM termination property
(glide.vm.grace_period).
When the lease ends, the platform runs the Amazon EC2 End of Lease workflow, which powers off the
virtual machine and notifies the requester that the lease has expired. The Amazon EC2 End of Lease
workflow evaluates the glide.vm.grace_period property to determine when the Terminate Amazon
EC2 Instance workflow should run. The requester is notified when the virtual machine is terminated (or
when termination has failed).
4. To configure a different workflow to run when a lease is terminated, in the application navigation filter,
enter task_action_workflow.list.
1. Select end_of_lease in the Action field.
2. Select the appropriate table in the Table field.
• For an AWS VM instance: [cmdb_ci_ec2_instance] table.
• For an Azure VM instance: [cmdb_ci_azure_instance] table.
Requesters of virtual resources can configure lease start and end times for individual virtual machines.
When the lease ends, the platform runs the VMware End of Lease workflow, which notifies the requester
that the lease has expired, and then powers off the virtual machine. The VMware End of Lease workflow
evaluates the glide.vm.grace_period property to determine when the VMware Termination workflow
should run. The requester is notified when the virtual machine is terminated, or if termination failed.
To configure a different workflow to run when a lease is terminated:
1. In the application navigation filter, enter task_action_workflow.list.
2. Select the end_of_lease action for the VMware Virtual Machine Instance
[cmdb_ci_vmware_instance] table.
1. Navigate to Cloud Management > Administration > VM Snapshot Eviction Policy and then click
New.
2. Configure the settings as shown in the table and then click Submit.
Field Value
Applies to items of type The type of ordered catalog items that the
rule applies to. For example, VMware VMs (in
the sc_vmware_cat_item table) or Amazon
EC2 instances (in the sc_ec2_cat_item table).
A rule can include only those assignments that
match the Applies to items of type setting in the
rule.
Field Value
Order Assignments with lower values are performed
before assignments with higher values.
This is a standard ServiceNow Order field.
An assignment runs in the order that appears
in the related list in the rule. You can modify the
settings from the list view.
1. While configuring an assignment, specify a Value type of Set by Filter and then configure the following
settings:
Field Value
Provisioning rule variable Specifies an array of sys_ids (for records
in the specified Table) that is defined in a
scripted provisioning rule assignment to apply
the condition against. The condition should
be designed to select one item from the list.
Otherwise the first rule variable that is returned
from the query is picked.
If Provisioning rule variable is blank, then the
filter runs against all records in the Table that you
specify.
See the Script assignment type.
In the following example, the networkIds array is populated by an assignment that will run before
this assignment (the Order is lower for the assignment that will run earlier). networkIds is an array of
sys_ids to records in the cmdb_ci_vcenter_network table. The Network variable is being assigned to
the first record in networkIds that has a tag with value of non-PII. non-PII is a standard ServiceNow tag
that you can apply to any record in the system.
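The script itself is not reproduced in this extract. The following is a plausible reconstruction under stated assumptions: networkIds is the array populated by the earlier assignment, and tags are resolved through the label_entry table (the tag-lookup details are an assumption; verify the tag tables on your instance).

    // Reconstruction sketch: pick the first network in networkIds that carries
    // the non-PII tag; otherwise fall back to the first record, as described above.
    function firstNetworkWithTag(networkIds, tagName) {
        for (var i = 0; i < networkIds.length; i++) {
            var entry = new GlideRecord('label_entry');
            entry.addQuery('table', 'cmdb_ci_vcenter_network');
            entry.addQuery('table_key', networkIds[i]);
            entry.addQuery('label.name', tagName); // dot-walk to the tag name
            entry.query();
            if (entry.next()) {
                return networkIds[i];
            }
        }
        return networkIds.length ? networkIds[0] : null;
    }
    var network = firstNetworkWithTag(networkIds, 'non-PII');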
1. Navigate to your provider; Amazon AWS Cloud or Micorosoft Azure Cloud or VMware Cloud and
click Administration > Provisioning Rules.
2. Click New, specify a unique Name and Description for the rule, and then configure the following
settings:
Field Value
Applies to items of type The type of resource that the rule applies to, for
example, Azure VMs, Amazon EC2 instances,
VMware VMs, and so on. (The types are held in
the sc_<provider>_cat_item tables.)
Applies only to item Only consider this rule if the specified catalog
item is ordered. The list is filtered based on the
Applies to items of type selection.
If this field is left blank, then the rule applies to
any catalog item where the type matches the
Applies to items of type value.
Field Value
Order The order in which rules are evaluated. By ordering
rules, you can ensure that provisioning rules with
more specific conditions are evaluated before
(low values) provisioning rules with more general
catch-all conditions (high values).
Figure 202: Example of filtering on variables that are part of the request
Note: For automated provisioning, create a rule to configure a value for each required variable
in the requested resource. When a required value is not set, a cloud operator must set the
missing value.
The Provisioning Rule Assignments related list now displays the assignments in the order of the
assignment Order settings.
Note: To update the Order setting for any assignment, click the value in the list.
Weight
You may want more than one datacenter, network, or folder to be used for virtual machines on a particular
vCenter instance and category. For example, you have a vCenter containing two datacenters, and you'd
like to provision 75% of the virtual machines to one datacenter, and 25% to the other.
You can do this by creating two provisioning rules for the particular vCenter and category with the same
order value for each. Multiple rules with the same vCenter, Category, and Order values trigger this
special behavior. Give each provisioning rule a Weight value proportional to the percentage of the time
you want the rule to be used. For this example, you might choose Weight values of 300 and 100. Any
other numbers in the same proportion would also work, like 3 and 1. To check the percentage for any given
rule, calculate the Weight value of that rule divided by the sum of the Weight values for all the rules with
the same vCenter, category, and order value. In the example, the calculations would be 300 / (300 + 100)
= 75%, and 100 / (300 + 100) = 25%, which meets the goal.
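The platform's selection logic is not shown in this document, but the proportions above correspond to standard weighted-random selection, sketched here for illustration:

    // Weighted-random selection among rules that share vCenter, Category, and Order.
    // The chance a rule is picked is its weight divided by the sum of all weights.
    function pickRuleByWeight(rules) {
        var total = 0;
        for (var i = 0; i < rules.length; i++) {
            total += rules[i].weight;
        }
        var roll = Math.random() * total;
        for (var j = 0; j < rules.length; j++) {
            roll -= rules[j].weight;
            if (roll < 0) {
                return rules[j];
            }
        }
        return rules[rules.length - 1]; // guard against floating-point edge cases
    }
    // Example from the text: weights 300 and 100 give a 75% / 25% split.
    var chosen = pickRuleByWeight([{ name: 'DC-A', weight: 300 }, { name: 'DC-B', weight: 100 }]);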
Creating a Provisioning Rule
1. Navigate to VMware Cloud > Rules > Provisioning Rules.
2. Click New and then create a unique Name for this rule.
3. Select a Category.
If you do not select a category, the rule applies to any category.
4. Select the vCenter to use for this category.
The Datacenter field appears.
5. Select a datacenter from the list of datacenters available for this vCenter.
The Network and Folder fields appear.
6. Select an appropriate network and a folder.
7. Enter an Order value to establish the order in which this rule is evaluated.
8. If you create multiple rules with the same Category, vCenter, and Order, select an appropriate Weight
to determine which virtual machines, by proportion, are provisioned with this rule.
9. Click Submit.
Caution: Termination deletes the resource and all associated data. You cannot restart a
terminated resource. Be sure to capture all needed data before terminating.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
Important: AWS: The system does not delete volumes for terminated resources. Use the
AWS console to delete volumes.
Any script or logic can be invoked. You can also specify custom workflows to run before or
after the termination.
Add all workflows to the workflow.scratchpad.before_subflows or
workflow.scratchpad.after_subflows variables in the appropriate extension
points. A return value of success indicates that the extension point completed successfully.
The termination workflows run the custom deprovision workflows one at a time. If any of
them has errors, a task is created for the operator to resolve manually: the operator can
proceed with termination if Before Deprovision fails, or exit the termination workflow
if After Deprovision fails.
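For illustration, an extension point script might populate the scratchpad like this. The workflow names are hypothetical, and the exact way the return value is reported may differ on your instance:

    // Hedged sketch of a Before/After Deprovision extension point.
    workflow.scratchpad.before_subflows = ['Custom - Archive VM Data'];    // hypothetical workflow name
    workflow.scratchpad.after_subflows = ['Custom - Release DNS Records']; // hypothetical workflow name
    answer = 'success'; // per the note above, success indicates the extension point completed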
To delete these stale resources, you can configure properties to enable automatic pruning (permanent
deletion) of stale records after a set number of days.
1. Navigate to Cloud Management > Administration > Properties.
2. Change the following properties:
Enable automatic pruning of cloud records: Activates the scheduled job that triggers cleaning of records. This property is disabled by default.
Number days for pruning cloud table records (Days): Sets the number of days after which a record is deleted. Default value is 396.
3. Click Save.
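Conceptually, the pruning job deletes cloud records whose last update is older than the configured number of days, along these lines. The property name and table used here are placeholders, not the actual names used by the scheduled job:

    // Illustrative pruning sketch; 'glide.vm.prune.days' and the table are placeholders.
    var days = parseInt(gs.getProperty('glide.vm.prune.days', '396'), 10);
    var vm = new GlideRecord('cmdb_ci_vmware_instance');
    vm.addQuery('sys_updated_on', '<', gs.daysAgo(days));
    vm.query();
    while (vm.next()) {
        vm.deleteRecord(); // permanent deletion: there is no undo
    }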
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
The VM's relationships to other CIs appear in the Related Items section. The view includes Business
Services and Applications referenced by tags on the VM or its parent resource group.
3. Click the BSM icon.
The BSM map appears in a separate window.
Note:
If you are upgrading from Helsinki:
If you are activating Cloud plugins for the first time after upgrading from Helsinki, then you may
see a completely blank page on both the Cloud Operations portal as well as the Cloud Resources
portal. To resolve this, please contact ServiceNow Technical Support and request that the
glide.cms.enable.responsive_grid_layout system property be enabled. This step is not
necessary if you already had Cloud plugins activated before moving to the Istanbul release.
VM provisioning overview
The process of provisioning a virtual machine involves a combination of automated and manual tasks.
1. Cloud operators fulfill provisioning requests from users.
Provisioning procedures and responsibilities are described in these pages:
• Amazon EC2 Approvals and Provisioning for Cloud Provisioning
• Approving and Provisioning VMware Requests for Cloud Provisioning
2. Fulfill user requests for modifications to existing virtual machines, such as increased memory or data
disk size, and for lease extensions.
Tip: Cloud operators also can change the state of a virtual machine to start, stop, or pause;
terminate virtual machines; and take and restore to snapshots.
3. Resolve errors that occur during provisioning or when modifying an existing virtual machine.
4. [Optional] Configure the system to create change requests for specific modifications to VMware virtual
machines requested from the My Virtual Assets portal. Examples of modifications that generate
change requests are lease extensions, requests for additional memory, and requests to terminate a
virtual machine.
• CloudFormation Administrator: Members of this group own the Cloud Management environment and
are responsible for configuring the AWS resources used by cloud provisioning. Administrators can
create service catalog items from Amazon EC2 images and CloudFormation templates, approve
requests for CloudFormation stacks, and monitor the Cloud Management environment using the Cloud
Admin Portal.
• CloudFormation Operator: Members of this group fulfill provisioning requests from users. Operators
perform the day-to-day work of Cloud Management by completing tasks that appear in the Cloud
Operations Portal. Operators are assigned to specific virtualization providers and must be technically
adept with the products they support.
• CloudFormation Users: Members of this group can request CloudFormation stacks from the service
catalog and use the My Amazon Assets portal to manage any stacks that are assigned to them.
If the user request is approved, members of the provisioning group create the instance. When the virtual
machine is provisioned, the system creates a new asset in Asset Management. The new asset appears in
the My Assets portal of the requester.
1. Approve the available images
After you establish and configure the Amazon EC2 account, you retrieve the list of available images
and approve them for the service catalog offering. In this task, the administrator decides which images
are available for selection in the specific regions defined by Amazon. Only approved images are
available to fill requests from the service catalog.
2. Set up the Service Catalog Offering
The administrator must link the approved images to the operating systems offered to users in the
service catalog.
3. Approve and provision the request
4. Configure the lease period and duration
Note: If the approver rejects the request, the process is finished and no instance is provisioned.
The system notifies the user that the request was rejected.
Approved requests appear in the Service Desk > My Groups Work queue for the members of the EC2
Operators group.
1. Open the task and select the Amazon Web Services account to use to provision the requested
instance.
2. In Region settings, select a region for the instance (an Amazon EC2 datacenter). Available regions
are those selected during the Amazon EC2 configuration.
The choice list of available images is filtered to show images in the region selected for the account
with the requested OS and size.
3. Enter a user-friendly name for the instance in the Instance name field.
The system uses the name to identify the instance in the My Virtual Assets portal and in the CMDB.
Amazon uses the name as the Name Tag in the EC2 instance list. If you request more than one
instance, the system adds a unique number to the name for each instance. For example, three
requested instances with the name TestLab become TestLab1, TestLab2, and TestLab3 (see the
sketch after this procedure). If the Instance name field is blank, the instance is identified by a
machine-generated string created by
Amazon.
4. Select an Image to provision and click Close Task to launch the provisioning workflow that creates
the EC2 instance.
When the workflow finishes provisioning the instance, the requester receives an email containing the
instance ID, IP address, and public DNS for the instances created. If provisioning fails, the workflow
notifies the provisioning group by email.
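The instance-naming behavior described in step 3 amounts to appending a sequence number when more than one instance is requested; a minimal sketch:

    // Sketch of the naming rule: a unique number is appended per instance
    // only when the quantity is greater than one.
    function instanceNames(baseName, quantity) {
        var names = [];
        for (var i = 1; i <= quantity; i++) {
            names.push(quantity > 1 ? baseName + i : baseName);
        }
        return names;
    }
    // instanceNames('TestLab', 3) returns ['TestLab1', 'TestLab2', 'TestLab3']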
Workflow
In the base system, the approval and provisioning process for each virtual server request is controlled by
a workflow (Workflow > Workflow Editor) called Virtual Server. This workflow performs the following
tasks:
• Creates approval tasks for the approval group.
• Collects the provisioning information when the request is approved. (If the request is rejected, the
workflow ends.) The workflow determines if the request is for a Windows or Linux virtual machine.
• Creates a catalog task to select the proper virtual server template and supply it with the necessary
requirements, including the appropriate ESX resource pool.
• Sets the variables and provisions the virtual server. Using the selected template, Orchestration
clones the virtual server and attaches the IP address to the new virtual machine if guest
customization is configured.
• Powers up the virtual server.
• Notifies the requester that the virtual machine has been created successfully.
After any member of the approval group approves the request, the workflow creates the provisioning
task. The approval request is updated in the queues of other approval group members.
4. In the Description column, click the down-arrow for the VM. The feature summary appears.
Role required: cloud_operator, vmware_operator, itil role (in the VMware approvers group) or cloud_admin
role (in the Virtual Provisioning Cloud Administrators group)
1. Navigate to Service Desk > My Groups Work.
2. Open the provisioning task and then fill in the fields as described in the table. Some information is
pre-populated from the catalog request.
Options Description
Datastore Select the datastore on which to provision the virtual server and
any data disks. To put the virtual server on the datastore where the
template is located, select None. The available datastores are for the
selected host.
Network Select the network that the virtual server will use. Available networks
are those from the datacenter in which the selected destination folder
resides.
Guest customizations If guest customizations are configured for this virtual server, select the
appropriate value.
• VM host name: [Required] Name of the server hosting the virtual
machine being provisioned. Check the Notes in the request form
for the name designated by the requester.
• Windows/Linux configuration: [Required] Specifics for
Windows or Linux virtual machines configured in VMware Cloud
Management Instance > Customization Specifications.
Configuration information includes DNS for Linux and the product
key and domain credentials for Windows.
• Network configuration: [Required] Select the network and
the network configuration for the virtual server. The IP address
allocated to the virtual server is selected from a list of available
addresses configured in the network record.
Provisioning rules enable an administrator to select which vCenter resources (datacenter, network, and
folder) are used to provision virtual machines for a specific virtual machine category (such as Dev, QA, or
Prod) or for any category if the Category field in the rule is left empty.
2. Complete all mandatory fields as appropriate for the virtual machine. For more information about this
form, see Provisioning Virtual Servers.
3. Select No from the Guest customization choice list.
4. Click Close Task.
Column Description
Recently provisioned value Value that was actually provisioned. This value is cleared after Discovery runs.
Reserved value Value that is being requested.
2. Configure the values for Overallocation percentage for VMware cluster CPUs and Overallocation
percentage for VMware cluster memory. The default values are 100%.
Note: Some changes to specifications and some state changes might require a change request.
Users with the cloud_user and cloud_admin roles can perform these actions only on virtual machines
assigned to them.
• Request new virtual machines from the service catalog for the virtualization products that your instance
supports (for example, VMware or Azure).
• Modify VM (Azure and VMware only): Request modifications to their virtual machines, such as
increased data disk size and memory or a state change such as start or stop.
• Update the end of the lease
• Pause VM
• Stop / Start VM
• Cancel VM
• Terminate VM
• Take a snapshot and schedule snapshots
• Restore from snapshot
• Delete a snapshot
For all actions that are subject to change control, if change control is enabled, the action is added to
the change request page. After the change request is approved, the user must return to the virtual asset
page and click the Proceed with Change link under Related Lists.
Field Description
Resource Group
Use existing group Deploy new VM into an existing Resource
Group. If left unchecked, then a new
Resource Group is created.
Subscription Azure subscription for the Resource Group.
Group Name Name of the Resource Group.
Region Azure region where the Resource Group
resides.
Lease
Start Date and time that this VM is to be
provisioned.
End Date and time that this VM is to be retired
(this lease can be extended later).
Virtual Resource
Business purpose Detailed description of the business
purpose of this VM, to facilitate any
necessary approvals and/or administrative
decisions.
Used for Usage category, to ensure that the VM is
provisioned properly for your needs.
CMDB Attributes
Field Description
VM Parameters
VM Name Name of the virtual machine.
Field Description
• For Linux
• The value must be from 1 to 64
characters long.
• The user names administrator, admin, admin1, admin2, 1, and a are considered invalid.
• The special characters \, /, " ", [ ], :, |, +, =, ;, ,, ?, *, and @ are not allowed in the user name.
• For Linux
• The value must be from 6 to 72
characters long.
• Must have at least 3 of the following:
one lower case character, one upper
case character, one number, and one
special character.
Field Description
Network
Virtual Network Resource Group Azure Resource Group where this virtual
network is located.
Virtual Network Name Unique name for this virtual network.
Subnet Name Unique name for this subnet.
Options
Need Public IP Select if the VM requires a publicly
accessible IP address.
Use Existing Network Interface Card If not selected, a new NIC is created using
the information entered.
Network Interface Card Visible only when Use Existing Network
Interface Card is selected. List of available
NICs. If Need Public IP is also selected,
then only NICs that have public IP
addresses are listed.
Public IP Address Type Visible only when Need Public IP is
selected and a new NIC is being created.
Private IP Address Type Visible only when Use Existing Network
Interface Card is selected.
Field Description
Lease
Start The date and time you would like this VM to
be provisioned.
End The date and time this VM may be retired
(your lease may be extended later).
Virtual Resource
Business purpose A detailed description of the business
purpose of this virtual machine to
facilitate any necessary approvals and/or
administrative decisions.
Used for Select an appropriate use category to
ensure your VM is provisioned properly for
your needs.
CMDB Attributes
Field Description
Note:
If you are seeing a blank page:
You may see a completely blank page on the Cloud Resources portal as a result of a
recent upgrade and subsequent activation of the Cloud management feature. If this
has happened to you, please contact your System Administrator and request that the
glide.cms.enable.responsive_grid_layout system property be enabled for the system. Note that
enabling this property may affect dashboards in other products as well.
Note: The topics in this section describe VMs, which are based on AWS, Azure, or VMware
images. AWS stacks and Azure Resource groups, in contrast, are collections of related cloud
resources.
Request a VM
You can request a VM from the Cloud Resource Catalog. The cloud admin has approved all of the
options in the catalog; a cloud operator approves your request.
Role required: cloud_user, cloud_operator, or cloud_admin
Requested VMs are subject to normal approvals and some special provisioning tasks. Users can make a
service catalog request to terminate a VM. Any VM created from a ServiceNow instance can be destroyed
from that instance.
1. Open the list of VM catalog items: Navigate to Self-Service > Cloud Resource Catalog and
then select your provider; AWS Virtual Machines or Azure Virtual Machines or VMware Virtual
Machines.
2. In the list, click the name of the desired VM template.
Specify settings for the VM.
Note: The number and kinds of settings depend on the template. Here are some common
variables:
Field Description
[Tag settings] Your cloud admin may have configured the system to request
values for tags that are applied to the resource. For example,
when you specify a value of Marketing for the Cost Center
tag, then the system can correctly charge the resource to the
Marketing department. Here are some typical tag names:
• Application
• Cost Center
• Project
• Service
• User Name
Field Description
Lease Start and End Select the start and end times for this virtual machine lease. The
lease start time is set automatically for the current date and time.
In the base system, the lease end time is set to 60 days after the
start time, and the maximum lease duration is limited to 90 days.
The system does not allow requesters to set a lease end time
beyond the configured limit. The lease duration is calculated from
the time the virtual machine is actually provisioned, which occurs
after the request is approved. If you request a virtual machine for
the current date (now), and there is a delay in approval, the end
date is reset according to the configured lease duration time.
Virtual Resource
Size Select the predefined size for this offering that has the desired
features (memory, storage, CPU speed).
Business purpose Enter a brief description of how this virtual server will be used.
Used for Select the purpose of this virtual machine. The categories
in this choice list are found in the Business Service
[cmdb_ci_service] table and also are used to define the
categories in the provisioning rules. The system does not
automatically end the lease for virtual machines marked as
Production. Instead, the system renews the lease on Production
virtual machines automatically for the default lease duration
and sends a notification to the requester each time the lease is
renewed.
CMDB Attributes
Assignment Group The user group that will own the provisioned configuration item.
Business Service Select the business service that depends on this virtual machine.
When Orchestration creates the virtual server, it also creates the
relationships to this business service in the CMDB.
A Used by::Uses relationship between the requested virtual
machine and the service is generated if a business service is
specified.
Cost Center The cost center that this request is billed against.
Application Select the principal application that depends on this virtual
machine, such as an exchange server or SQL Server database.
When Orchestration creates the virtual machine, it also creates
the relationships to this application in the CMDB.
Project The project the instance is billed against.
Size The instance size that best suits your needs. Not all sizes are
available for all images.
Instance name The Name tag set on the instance.
Admin name / Admin password Admin user name and password for the VM.
Computer name Name of the computer.
Network The network to use.
3. Specify the Quantity of VMs to order and then click Order Now. The system submits your request. If
approval is required, there may be a delay while a cloud operator approves the change.
The view changes either to the My Virtual Assets portal or to the Order Status form, depending on how
the service catalog is configured. The portal shows the various gauges associated with the logged
in user's virtual assets and requests. To view the current request, click the request number in the
My Virtual Asset Requests list or expand the Stage column to determine where the request is in the
provisioning process.
You are notified by email of the results of your request. If the provisioning request fails for any
reason, an incident is automatically created and assigned to the Cloud Administrators group (if the
glide.vm.create_incident system property is enabled).
4. After the request is approved and the VM is provisioned, navigate to My Virtual Assets. Click the new
VM.
Request an Amazon VM
Users requesting an Amazon EC2 instance from the service catalog must have the cloud_user role.
Role required: cloud_user, cloud_operator, or cloud_admin
1. Navigate to Self-Service > Cloud Resource Catalog and click Amazon Virtual Machines.
2. Click Amazon EC2 Instance - Advanced to request an Amazon virtual machine and fill in the fields,
as appropriate.
Request an Azure VM
Use the Cloud Resource Catalog to request an Azure virtual machine to launch an instance.
Role required: cloud_user, cloud_operator, cloud_admin
1. Navigate to Self-Service > Cloud Resource Catalog and click Azure Virtual Machines.
2. Click Azure VM Instance - Advanced to request an Azure virtual machine and fill in the fields, as
appropriate.
Field Description
Resource Group
Use existing group Deploy new VM into an existing Resource
Group. If left unchecked, then a new
Resource Group is created.
Subscription Azure subscription for the Resource Group.
Field Description
VM Parameters
VM Name Name of the virtual machine.
Admin Username Admin username for VM. The admin
username has the following requirements:
• For Windows
• The value must be from 1 to 15
characters long.
• The user names administrator, admin, admin1, admin2, 1, and a are considered invalid.
• The special characters \, /, " ", [ ], :, |, +, =, ;, ,, ?, *, and @ are not allowed in the user name.
• For Linux
• The value must be from 1 to 64
characters long.
• The user names administrator, admin, admin1, admin2, 1, and a are considered invalid.
• The special characters \, /, " ", [ ], :, |, +, =, ;, ,, ?, *, and @ are not allowed in the user name.
Admin Password Admin password for the VM. The password has the following requirements:
• For Linux
• The value must be from 6 to 72
characters long.
• Must have at least 3 of the following:
one lower case character, one upper
case character, one number, and one
special character.
Network
Virtual Network Resource Group Azure Resource Group where this virtual
network is located.
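The Admin Username and Admin Password requirements listed above can be pre-checked with a validation sketch like the following. The checks mirror the Linux rules as written; actual enforcement happens on the Azure side.

    // Hedged validation sketch for the Linux username/password rules above.
    var INVALID_NAMES = ['administrator', 'admin', 'admin1', 'admin2', '1', 'a'];
    function isValidLinuxAdminUsername(name) {
        if (name.length < 1 || name.length > 64) return false;
        if (INVALID_NAMES.indexOf(name.toLowerCase()) !== -1) return false;
        return !/[\\\/"\[\]:|+=;,?*@]/.test(name); // disallowed special characters
    }
    function isValidLinuxAdminPassword(pw) {
        if (pw.length < 6 || pw.length > 72) return false;
        var classes = 0;
        if (/[a-z]/.test(pw)) classes++;
        if (/[A-Z]/.test(pw)) classes++;
        if (/[0-9]/.test(pw)) classes++;
        if (/[^A-Za-z0-9]/.test(pw)) classes++;
        return classes >= 3; // at least 3 of the 4 character classes
    }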
• Rejected: A user's request for a cloud resource was not approved. Subject: Your requested item
<number> for <VM type> has been rejected.
• Scheduled: The requested instance is scheduled for creation. Subject: <VM type> instance <name>
has been successfully scheduled.
• Provisioned: A user's cloud resource was successfully provisioned and is ready for use. Subject: <VM
type> instance <name> has been successfully provisioned.
• Modified: Applies to VMware only. A user's VM configuration (such as memory allocation or storage
capacity) was modified. Subject: The configuration of VMware instance <name> has been successfully
updated.
• Extended: The lease end date for a cloud resource was extended. Subject: The lease end of virtual
instance <name> has been successfully updated.
• About to expire: The lease on a user's cloud resource expires in n days. The default lead time in the
base system is one day. Subject: Virtual instance <name> will be terminated in <n> days.
• Terminated: The lease for a user's cloud resource has expired, and the resource was terminated.
Subject: VMware instance <name> has been successfully terminated.
Field Description
Lease
Start The date and time you would like this VM to
be provisioned.
End The date and time this VM may be retired
(your lease may be extended later).
Virtual Resource
Business purpose A detailed description of the business
purpose of this virtual machine to
facilitate any necessary approvals and/or
administrative decisions.
Used for Select an appropriate use category to
ensure your VM is provisioned properly for
your needs.
CMDB Attributes
Assignment Group The User Group that will own the
provisioned Configuration Item.
Application The application that will generate a
Used by::Uses relationship between
the requested virtual machine and the
application.
Field Description
Option Description
Note: If the VM is under change control (default setting and typically the case for production
systems), the system generates a change request when you submit the request. You receive
email when the cloud operator approves. After approval, return to the VM Instance form and
click the Proceed with Change related link.
After the network adapter is added, the system adds a network adapter record to a related list on the
VM record.
Note: If the VM is under change control (default setting and typically the case for production
systems), the system generates a change request when you submit the request. You receive
email when the cloud operator approves. After approval, return to the VM Instance form and
click the Proceed with Change related link.
When the network adapter is removed, the system removes the network adapter record from the
related list on the VM record.
VM operations
To make a change to an existing VM, select an action from a context menu or use the Related Links in the
VM record.
• Context menu: The list view displays all the virtual servers you or your teams requested through the
service catalog. Machines in all states appear on the list, including those that are terminated or have
been deleted. The state of the virtual machines is shown in the list, as are the Lease start and Lease
end times for each virtual machine. List editing is not enabled for lease times. To initiate a change from
the list of virtual machines, right-click an instance and select VM Management from the context menu.
Only the actions that are appropriate for the State of the virtual machine are available.
• Related Links: Open the virtual machine instance record and select an action from the Related Links.
The controls that appear depend on the State of the virtual machine.
Stop or start a VM
Cloud users can power-off and power-on their assigned VMs. Cloud admins and operators can perform the
actions on any VM.
Role required: cloud_user, cloud_operator, or cloud_admin
Restrictions:
• Only a resource that was successfully provisioned can be started or stopped.
• The link that appears depends on the State of the virtual machine (On/Stop or Off/Start).
If the start or stop action is subject to change control, a pop-up window asks whether to proceed.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click the Stop or Start related link.
If the VM is subject to change control (as is typically the case for production systems), the system
generates a change request. There may be a delay while an admin approves the change. You receive
email when the admin approves. After the admin approves, return to the VM Instance form and click
the Proceed with Change related link.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click the Subscribe related link.
You are immediately subscribed to receive all notifications for the resource.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click Update Lease End in the Related Links section.
4. Use the calendar to set a new date and time for the lease to expire. Click Update.
The system submits your request.
5. If approval is not required, the lease is immediately updated. If approval is required, there may be a
delay while an admin approves the change. You receive email when the admin approves. After the
admin approves, return to the VM Instance form and click the Proceed with Change related link.
State Controls
On • Modify VM
• Update Lease End
• Stop VM
• Pause VM (VMware)
• Terminate VM
Off • Modify VM
• Start VM
• Update Lease End
• Terminate VM
Paused • Modify VM
• Start VM
• Update Lease End
• Terminate VM
Scheduled • Modify VM
• Cancel VM
• Update Lease End
Error • Terminate
• Update Lease End
Note: The system auto-deletes the oldest snapshot when the Snapshot eviction policy limit is
exceeded. See Configure snapshots to execute on a schedule on page 1231.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click the Take Snapshot related link.
4. Click OK.
Note: If the VM is running, the VM is shut down at the start of the snapshot process and then
restarted when the process finishes.
If approval is not required, the system immediately takes the snapshot. If approval is required, there
may be a delay while an admin approves the change. You receive email when the admin approves.
After the admin approves, return to the VM Instance form and click the Proceed with Change related
link.
If the VM is running at the start of the snapshot process, the VM is shut down and then restarted when the
process finishes.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click the Schedule Snapshot related link.
If approval is not required, the system immediately takes the snapshot. If approval is required, there
may be a delay while an admin approves the change. You receive email when the admin approves.
After the admin approves, return to the VM Instance form and click the Proceed with Change related
link.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click the Restore From Snapshot related link.
4. If the resource Used for setting is Production, then the system creates a change request that must
be approved by a cloud operator or cloud admin. If approval is not required, the system immediately
overwrites the VHD with the snapshot. If approval is required, there may be a delay while an admin
approves the change. You receive email when the admin approves. After the admin approves, return
to the VM Instance form and click the Proceed with Change related link.
Delete a snapshot
You can delete a snapshot to free up disk space.
Role required: cloud_user, cloud_admin
Note: The system auto-deletes the oldest snapshot when the Snapshot eviction policy limit is
exceeded. See Configure snapshots to execute on a schedule on page 1231.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
Caution: Termination deletes the resource and all associated data. You cannot restart a
terminated resource. Be sure to capture all needed data before terminating.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
3. Click the Terminate VM related link.
If the resource Used for setting is Production, then the system creates a change request that must
be approved by a cloud operator or cloud admin. If approval is not required, the system immediately
terminates the VM. If approval is required, there may be a delay while an admin approves the change.
You receive email when the admin approves. After the admin approves, return to the VM Instance
form and click the Proceed with Change related link.
The State of the VM becomes Terminated and the VM will no longer be visible in the My Virtual
Assets portal.
Important: AWS: The system does not delete volumes for terminated resources. Use the
AWS console to delete volumes.
Any script or logic can be invoked. You can also specify custom workflows to run before or
after the termination.
Add all workflows to the workflow.scratchpad.before_subflows or
workflow.scratchpad.after_subflows variables in the appropriate extension
points. A return value of success indicates that the extension point completed successfully.
The termination workflows run the custom deprovision workflows one at a time. If any of
them has errors, a task is created for the operator to resolve manually: the operator can
proceed with termination if Before Deprovision fails, or exit the termination workflow
if After Deprovision fails.
1. Open the list of VMs by navigating to Cloud Management > Cloud Resources and clicking the
Compute tab.
2. Click the VM name.
The VM's relationships to other CIs appear in the Related Items section. The view includes Business
Services and Applications referenced by tags on the VM or its parent resource group.
3. Click the BSM icon.
The BSM map appears in a separate window.
• For their personal cloud resources, cloud_user users can manage some settings directly and must
change other settings by making a request to the cloud_operator.
• cloud_admin users have management access to all cloud resources.
Requested stacks are subject to normal approvals and some special provisioning tasks.
1. Navigate to Self-Service > Cloud Resource Catalog.
2. Select the CloudFormation stack item. The request form appears. Complete the following fields:
Field Description
Assignment Group Select the user group that will own the
stack.
Business purpose Enter a brief description of how this stack
will be used.
Start and End Select the start and end times for this
stack lease. The lease start time is set
automatically for the current date and time.
In the base system, the lease end time
is set to 60 days after the start time, and
the maximum lease duration is limited
to 90 days. The system does not allow
requesters to set a lease end time beyond
the configured limit. The lease duration is
calculated from the time the stack is actually
provisioned, which occurs after the request
is approved. If you request a stack for now
(the current date), and there is a delay in
approval, the end date is reset according to
the configured lease duration time.
Business Service Name a business service that depends on
this stack.
Application [Optional] Name the principal application
that depends on this stack, such as
an exchange server or an SQL Server
database.
Account Select the AWS account that the stack
belongs to. If the account is already
selected, then this field will be read-only.
Used for Select the purpose of the stack (such as
Development or Training). The ServiceNow
instance does not automatically end the
lease for a stack marked as Production.
Instead, the instance renews the lease
on Production stacks automatically for
the default lease duration and sends a
notification to the requester each time the
lease is updated.
Stack Name Enter the name for the stack.
A user with the Cloud Operator role can view the CloudFormation stack in the CMDB Business Service
Management Map (BSM) as a network of related resources.
When you initiate and save a change request, the stack template body and parameters are copied over
and a new template is created containing the changes. Parameters that are no longer used in the updated
template are deleted and any new parameters are created. Any default values for the new parameters are
copied to the template parameter value table.
Users with the cloud_admin, cloud_user, or cloudformation_operator role can update stacks. Users with
the cloud_user role can update only the stacks that they own.
1. Navigate to Amazon AWS Cloud > Managed Resources > Stacks and select a stack.
2. Click Create New Update Change Record.
If a draft change already exists for the stack, click Modify Update Change Draft to update the
change.
3. Modify the template body and template parameter values.
The Phase field indicates the state of the change request. Possible values are draft, requested, in-
progress, approved, rejected, published, and error.
4. Click Validate Stack Change to validate template body changes.
5. Click Validate Parameter Values to validate template parameters changes.
Any error messages appear at the top of the form.
6. Click Request Stack Change to submit the workflow.
7. Click OK.
If the stack change is under change control, you are redirected to a change request page that shows
the change request that was submitted.
If the stack change is not under change control, the update workflow is triggered.
8. After the change request is approved, log in to your instance again.
9. Navigate to your stack and click Proceed with Change to trigger the update workflow.
When the change workflow is completed, the system sends a "stack change success" email to the
user and updates the stack status, output, tags, parameters, template, and other information.
Note: After a change definition has been submitted, you cannot modify the stack until the change
is canceled or the change workflow has been completed. You must create a new change definition
to make an additional change.
A cloud administrator can configure the system to generate a change request whenever a user requests
the early termination of a stack.
1. Navigate to Amazon AWS Cloud > Managed Resources > Stacks and select a stack.
2. Select a CloudFormation stack to terminate. You can terminate a stack in any of the following states:
• Create Complete
• Rollback Complete
• Rollback Failed
Note: You cannot terminate a stack if there is an open change request (stopping, pausing,
and so on).
When the stack has been terminated, the following information appears:
• Stack Status - Delete Complete
• Install Status - Retired
• State - Non-Operational
The system creates skeleton records for the resources and adds CMDB CI relationships to verify the
correlation of the stack resources.
1. Navigate to Amazon AWS Cloud > Managed Resources > Stacks and select a stack.
2. Click the Dependency Views icon.
The Dependency Views map appears in a separate window.
Discovery workflows are supported for the following CloudFormation stack resource types:
• AWS::EC2::VPC - The VPC discovered information includes VPC ID, VPC name (name tag), region,
Classless Inter-Domain Routing (CIDR) block information, the Dynamic Host Configuration Protocol
(DHCP) ID, the instance tenancy attribute (dedicated/default), VPC state, and the is default attribute of
the VPC.
• AWS::EC2::Subnet - The discovered subnet information includes Subnet ID, VPC ID, Subnet name
(name tag), region, Classless Inter-Domain Routing (CIDR) block information, the available IP address
count, is default for region attribute, the map public IP on launch attribute, Subnet state, and the is
default attribute of the subnet.
The topics in this section describe resource groups -- collections of resources -- not individual VMs.
An Azure Resource Group is a collection of resources (Infrastructure, Networking, Managed Services, and
their access control policies) that you manage as a single unit. The collection can be a database server, a
database, a website and the subnets they reside in.
When you want to deploy, manage, test, and monitor them as a group, you define them within an Azure
Resource Manager (ARM) template(s) and manage them as a single resource group. ARM templates are
declarative JSON schemas that define your deployment or changes to your deployment within a resource
group. You can read more about resource groups on Microsoft's website: https://azure.microsoft.com/en-us/documentation/articles
Request an Azure resource group
Resource groups are based on ARM templates that an admin configured and approved to be included in
the catalog.
Role required: cloud_user or cloud_admin
1. Navigate to Self-Service > Cloud Resource Catalog and click Azure Resource Groups.
2. In the list, click the template for the resource group to request.
Specify settings for the resource group. The number and kinds of settings depend on the template.
Here are some common variables:
Field Description
Start and End Select the start and end times for this virtual
machine lease. The lease start time is
set automatically for the current date and
time. In the base system, the lease end
time is set to 60 days after the start time,
and the maximum lease duration is limited
to 90 days. The system does not allow
requesters to set a lease end time beyond
the configured limit. The lease duration is
calculated from the time the virtual machine
is actually provisioned, which occurs after
the request is approved. If you request a
virtual machine for now (the current date),
and there is a delay in approval, the end
date is reset according to the configured
lease duration time.
Business Service Select the business service that depends
on this virtual machine. When Orchestration
creates the virtual server, it also creates the
relationships to this business service in the
CMDB.
Application Select the principal application that
depends on this virtual machine, such as an
exchange server or SQL Server database.
When Orchestration creates the virtual
machine, it also creates the relationships to
this application in the CMDB.
Used for Select the purpose of this virtual
machine. The categories in this choice
list are found in the Business Service
[cmdb_ci_service] table and also
are used to define the categories in the
provisioning rules. The system does not
automatically end the lease for virtual
machines marked as Production. Instead,
the system renews the lease on Production
virtual machines automatically for the default
lease duration and sends a notification
to the requester each time the lease is
renewed.
Offering Select the offering—such as the operating
system, database server, or web server—for
the virtual machine you are requesting.
Use predefined size Select Yes to accept the default
specifications for the operating system
and size. Select No to define custom
specifications for this virtual machine.
Size Select the predefined size for this offering
that has the desired features (memory,
storage, CPU speed).
3. Specify the Quantity of VMs to order and then click Order Now. The system submits your request. If
approval is required, there may be a delay while an admin approves the change.
The view changes either to the My Virtual Assets portal or to the Order Status form, depending on
how the service catalog is configured. The portal shows the various reports associated with the logged
in user's virtual assets and requests. To view the current request, click the request number in the
My Virtual Asset Requests list or expand the Stage column to determine where the request is in the
provisioning process.
4. After the request is approved and the VM is provisioned, navigate to My Virtual Assets. Click the new
VM.
5. On the VM Instance form, click the Proceed with Change related link.
You are notified by email of the results of your request. If the provisioning request fails for any
reason, an incident is automatically created and assigned to the Cloud Administrators group (if the
glide.vm.create_incident system property is enabled).
Update the end of the lease for an AWS stack or Azure resource group
When a resource group or stack reaches the end of its lease (or its grace period), the system terminates
the resources and notifies the user. As a cloud user, you can extend or shorten a lease for any of your
cloud resources.
Role required: cloud_user or cloud_admin
1. Open the list of stacks or resource groups and then select one:
• Amazon AWS Cloud > Managed Resources > Stacks.
• Microsoft Azure Cloud > Managed Resources > Resource Group Instances.
1. Open the list of stacks or resource groups and then select one:
• Amazon AWS Cloud > Managed Resources > Stacks.
• Microsoft Azure Cloud > Managed Resources > Resource Group Instances.
2. Click the Create Draft Deployment related link. The State for the deployment changes to Draft.
Important: Any changes that you make on the form are applied to the draft and are not yet
applied to the deployed resource group.
3. Make the changes to settings. When you have validated the changes, return to the stack or resource
group form and perform one of the following actions:
Option Action
Submit the request to update the item with the draft deployment that you configured on this form: Click the Implement Draft Deployment related link.
Save the draft deployment that you configured on this form for use at another time: Click Update.
Discard the draft deployment that you configured on this form: Click Delete.
The system processes your request for changes to the stack or resource group.
Note: Terminating a stack or resource group results in the loss of all data and configuration that
is associated with the affected resources. Before terminating, be sure to back up any data that you
want to keep.
2. Select the stack or resource group to terminate. You can terminate a stack or resource group with
Resource Group status of Succeeded.
Note: You cannot terminate a stack or resource group that has an open change request
(Update in progress, Delete in progress, and so on).
The following information appears when the stack or resource group has been terminated:
• Resource Group Status - Deleted
• Status - Retired
VM extension points
• CloudFormationSetErrorNotification
• CloudFormationSetSuccessNotification
• CloudFormationAfterSchedule
• EC2SetErrorNotification
• EC2SetSuccessNotification
• EC2AfterSchedule
• AzureTemplateSetErrorNotification
• AzureTemplateSetSuccessNotification
• AzureTemplateAfterSchedule
• AzureVMAfterSchedule
and can associate custom tags with resources. Tags enable you to analyze the data using criteria that you
specify.
Report | Description
Cost Report | Shows totals and trends for the billed cost across all categories.
Compute Usage | Shows totals and trends of VM instance usage (instance hours).
Network Usage | Shows totals and trends of network usage, both incoming and outgoing (GB transferred).
Storage Usage | Shows totals and trends of storage capacity utilized (GB hours).
Costs | Shows Performance Analytics indicators and breakdowns tracking cost. This report is only available with the Performance Analytics plugin.
1. For all providers, navigate to Cloud Management > Reports > Cost & Usage Reports.
2. To refine the output, modify the data filters to specify how to select, group, and display the data.
• Filters include time range, cloud provider (Amazon, Azure, or all), account (subscription name
from Azure, or linked account ID from Amazon), region, application, business service, cost center,
project, and user name.
• Grouping includes category, provider, account, application, business service, cost center, project,
region, and user name.
• All cloud users can view resource optimization reports on resources assigned to themselves.
• Additionally, users with the cloud_admin or cloud_operator role can access reports on all resources,
across all cloud providers and for all users, by going to the Cloud Management > Cloud Operations
Portal.
You can view resource optimization reports from the Cloud Management > Cloud Resources page.
1. Navigate to Cloud Management > Cloud Resources page.
The dashboard automatically updates with the data for all cloud providers, for all accounts, and in all
locations.
2. Use the pulldowns on the right side to drill down.
3. Click Update.
4. By default, the report scheduler, which runs every Monday, is disabled. To enable it for your provider
(Amazon AWS Cloud, Microsoft Azure Cloud, or VMware Cloud), navigate to Cloud Management >
Administration > Properties and select Enable Resource Optimization Report scheduled job.
By default, the scheduled job stops after 24 hours (86400 seconds). To change this setting, specify the
number of seconds in the "The maximum time (in seconds) a resource optimization analysis is
allowed to run, before termination" property.
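If you prefer to inspect or change this timeout from a script rather than the Properties page, a minimal sketch follows. The exact sys_properties record name for this timeout is not given in this document, so the partial-name query below is an assumption; verify the actual property name on your instance first.

// Locate the resource-optimization timeout property and report its value.
// ASSUMPTION: the property record name contains 'resource_optimization';
// confirm the real name in the sys_properties table before relying on this.
var prop = new GlideRecord('sys_properties');
prop.addQuery('name', 'CONTAINS', 'resource_optimization');
prop.query();
while (prop.next()) {
    gs.info('Found property: ' + prop.getValue('name') + ' = ' + prop.getValue('value'));
    // prop.setValue('value', '7200'); // uncomment to change the timeout to 7200 seconds
    // prop.update();
}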
Note: You cannot create a new rule. You can, however, unselect the Active check box to stop
tracking metrics for a particular resource.
1. Navigate to Amazon AWS Cloud > Administration > Resource Optimization Rules and select a
rule.
2. Update the rule as needed and then click Update.
Table 516: Resource Optimization Rules form, common fields for all rules
Rule | CloudWatch metrics | Threshold field | Condition
EC2 - Underutilized | CPU Utilization, NetworkIn, NetworkOut | CPU Usage and Network IO Usage | CPU utilization < CPU Usage AND NetworkIn + NetworkOut < NetworkIO
EC2 - Stranded Instances | StatusCheckFailed | N/A | StatusCheckFailed = 1
EBS - Underutilized | VolumeReadOps, VolumeWriteOps | IOPS | VolumeReadOps + VolumeWriteOps < IOPS
EBS - Unattached | N/A | N/A | The cmdb volume_status field = unattached
ELB - Underutilized | RequestCount | Number of requests | RequestCount < Number of requests
ELB - Unattached | HealthyHostCount, UnHealthyHostCount | No instance attached | HealthyHostCount AND UnHealthyHostCount < 1 AND No instance attached is selected
ELB - Attached | HealthyHostCount | Number of attached healthy instances | HealthyHostCount < Number of attached healthy instances
Note: CloudWatch metric data is kept for two weeks. A rule that tries to fetch data older than two
weeks will not return data. The maximum recommended period for any rule is 14 days.
Underutilized resources appear in the Resource Optimization report. The VM is considered underutilized
when either the CPU or memory is low. Data is collected from the following two VM metrics defined in
Azure diagnostics:
• Windows:
• \Processor(_Total)\% Processor Time
• \Memory\% Committed Bytes In Use
• Linux:
• \Processor\PercentProcessorTime
• \Memory\PercentUsedMemory
1. Navigate to Microsoft Azure Cloud > Administration > Resource Optimization Rules.
2. Click an existing rule or click New and then specify the conditions that a resource meets to appear in
the report as underutilized:
Note: If the VM is turned off, usage metrics are not returned for that VM.
The job definition is complete. Jobs run on the specified schedule. You can update the scheduler that
controls how often the download repeats.
Note: The reports can display only the data that has been downloaded from the provider to the
ServiceNow database. Cloud admins can schedule periodic downloads. If billing data is missing,
cloud admins can run a download job on-demand to download the missing data.
1. Navigate to Cloud Management > Reports > Cost & Usage Reports.
The Billing Reports page displays the reports. To generate the reports, certain filters are applied to
the data that was downloaded from the provider. You can modify the filter settings to customize the
content and appearance of the reports.
2. To customize the reports, modify the data filters (standard ServiceNow filter controls). Click Filters
and then specify how to select, group, and display the data.
Advanced topics
You can customize your Cloud Management service.
Additional customizations to your Cloud Management service include custom providers and orchestration
activities.
Field Description
Placement Ext Key: The extension key to run when the catalog item's task automation mode is set to
Semi automatic or Fully automatic. This allows you to provide your own functions that select
provisioning details, such as the ESXi host, datastore, network, and folder. The default for VMware is
VMWPlacement; the default for Amazon EC2 is EC2Placement. For details on how to edit this script, see
Editing the Placement Extension Key script on page 1258.
Before Deprovision Ext Key: The extension key that runs before the instance is terminated. No default
extension key is provided, so no action is taken by default. Use it for any pre-cleanup action, such as a
backup, before the instance is terminated. To use this feature, you must create a new extension point in
the [vm_extension_point] table.
After Deprovision Ext Key: The extension key that runs after the instance is terminated. No default
extension key is provided. Use it for any post-cleanup action, such as deleting snapshots of the
terminated instance. To use this feature, you must create a new extension point in the
[vm_extension_point] table.
The Virtual Machine Request workflow defines the flow to request a VMware machine.
VM Creation workflow
The VM Creation workflow defines the flow to create a VMware instance. It picks up the provision
workflow from the virtualization_provider table and invokes it.
The VMware Provision workflow defines the flow to provision a VMware instance.
Handlers
To view the details of the Virtualization Provider handlers, navigate to Cloud Management >
Virtualization Provider Handlers.
var ipAddress =
workflow.scratchpad.PostProvisionSource.instances[0].ip_address;
var os = workflow.scratchpad.PostProvisionSource.instances[0].os;
A PostProvisionSource object is created after provisioning with the collected IP address and
the populated private IP address. It can also contain additional properties if needed. A sample
PostProvisionSource object looks like:
{
    "instances": [
        {
            "os": "Linux",
            "ip_address": "02.03.06.05",
            "primary_private_ip": "10.0.0.0"
        }
    ],
    "extension": {} // other custom properties may be added to this extension object
}
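When reading this object in a later workflow activity, it can be worth guarding against an empty instances array; a minimal sketch (the log calls are illustrative):

// Defensive read of the PostProvisionSource object from the scratchpad.
var src = workflow.scratchpad.PostProvisionSource;
if (src && src.instances && src.instances.length > 0) {
    var inst = src.instances[0];
    gs.info('Provisioned ' + inst.os + ' instance at ' + inst.ip_address +
        ' (private IP: ' + inst.primary_private_ip + ')');
} else {
    gs.warn('PostProvisionSource contains no instances');
}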
A sample PostProvisionSource object for an Amazon EC2 instance looks like:
{
    "instances": [
        {
            "availabilityZone": "us-west",
            "dns": "ec2-0160903.aws.com",
            "image_id": "ai356tleqjk71",
            "instance_id": "b13w345pmc",
            "instance_name_tag": "ec2-7-1",
            "instance_state": "7",
            "instance_type": "d1-ety",
            "ip_address": "09.03.17.08",
            "os": "linux",
            "private_dns": "ilr434j930j-3",
            "primary_private_ip": "02.10.01.40",
            "root_device_name": "null",
            "root_device_type": "inst-wei",
            "subnet_id": "savt-784b2fg",
            "tags": {
                "Name": "ec2na7nql",
                "Tag Set": "MLQ49AG02",
                "User Name": "System Administrator"
            },
            "volumes": [],
            "vpc_id": "vza3bf83df71dy"
        }
    ],
    "extension": {} // other custom properties may be added to this extension object
}
A sample PostProvisionSource object for a Microsoft Azure VM looks like:
{
    "instances": [
        {
            "admin_username": "qzrwnfsj",
            "computer_name": "qzrwnfsj470",
            "container": "hcqi",
            "disks": "[]",
            "image_id": "23vejd903d9231071438fdfho3cnxt891",
            "ip_address": "02.401.200.13",
            "name": "qzrwnfsj470",
            "os": "Linux",
            "primary_private_ip": "03.0.0.40",
            "region": "West US",
            "state": "on",
            "vm_size": "Basic A0"
        }
    ],
    "extension": {} // other custom properties may be added to this extension object
}
A sample PostProvisionSource object for a VMware virtual machine looks like:
{
    "instances": [
        {
            "config": "Yes",
            "config_automation": "Yes",
            "cpus": "1",
            "datastore_morid": "dst32c7i3",
            "disks": "[{\"provisionMode\":\"fr\",\"size\":\"25\"}]",
            "hostname": "gjerr",
            "instanceUuid": "lafjPN1nf4457jfbsD03FDN",
            "ip_address": "01.02.04.03",
            "memory": "123",
            "morid": "v4559",
            "name": "fbiwerier493",
            "network_config": "dhcp",
            "os": "Linux",
            "serialNumber": "VMWare483 jiejf3 343 034fjAL23NA33043 30efj34878n",
            "template": "b392djkf39j20cn3j03jazxsd",
            "uuid": "3483vnhg4s3nfw3910j043h4",
            "vcenter": "03.01.04.05"
        }
    ],
    "extension": {} // other custom properties may be added to this extension object
}
• Change State
• Create Tag
• Describe Images
• Describe Instances
• Describe KeyPairs
• Describe Regions
• Run Instances
• Terminate Instances
Conversion function
Converts a UUID to the proper format automatically.
A function called turnCorrelationIdToUuid in the VMUtils script include converts a UUID to the
proper format automatically. Use this function to convert a UUID to the proper format within the workflow:
workflow.scratchpad.uuid = VMUtils.turnCorrelationIdToUuid('42 10 c1 62 e3 1e 70 e6-4a 03 b4 6a bb e1 7b 4f');
You can then access this variable from any activity by entering ${workflow.scratchpad.uuid} in the
VM uuid activity input variable.
Determine VMware activity result values
VMware activities communicate with vCenter through a MID Server.
When a VMware activity sends a request, the MID Server sends a response to the ECC Queue. All
VMware activities set their activity.result value based on the payload of the response from the MID
Server. If this payload contains an error, the activity result value is failure. If the payload contains no error,
the activity result value is success. Some activities, such as Delete Snapshot, specify additional result
values.
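For example, a workflow If activity can branch on that result; a minimal sketch, where the 'success'/'failure' strings follow the behavior described above and the 'yes'/'no' answer convention is the standard If activity script contract:

// Workflow If activity: route to the 'yes' branch only when the preceding
// VMware activity reported success in its result value.
answer = (activity.result == 'success') ? 'yes' : 'no';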
Managed Object Browser
The Managed Object Browser (MOB) is a vCenter utility that allows users to view detailed information
about vCenter objects, such as images and virtual machines.
To find the UUID of a virtual machine using the MOB:
1. Navigate to https://<vCenter IP>/mob to log into the MOB.
2. Browse to content, rootFolder, and down the childEntity list into the appropriate datacenter, datastore,
and finally down to the actual virtual machine.
3. In the virtual machine, click config and find the uuid field.
Copy this value into the VM uuid activity input variable.
Clone activity
The Clone activity sends commands to vCenter to clone a given VMware virtual machine or virtual machine
template.
Several input variables require a MOR ID. ServiceNow stores MOR data in the Object ID field of
configuration item records.
These variables determine the behavior of the activity.
Destroy activity
The Destroy activity sends a command to vCenter to destroy the named VMware virtual machine.
This activity deletes the virtual machine and removes it from the disk.
These variables determine the behavior of the activity.
Reconfigure activity
The Reconfigure activity updates the number of CPUs and the amount of memory assigned to a virtual
machine.
These variables determine the behavior of the activity.
Snapshot activity
The Snapshot activity creates a snapshot of a virtual machine.
A snapshot stores the current state of a virtual machine, but is not a full backup.
These variables determine the behavior of the activity.
Azure troubleshooting
Common issues while configuring Microsoft Azure Cloud.
• Make sure that you have used the correct Tenant/Client IDs and access key.
  • Save the secret key when you create it on the Azure portal.
• Ensure that the Service Principal has the proper permissions, as described in the first section.
• Check the Discovery log and ECC queue for errors (under Discovery status).
• Provisioning can fail on the Azure platform. In this case, the system generates a task that the cloud
operator must resolve.
  • Look for information in the task, or go to the workflow context to look for clues. Sometimes Azure
    provisioning succeeds but the task times out while waiting for the VM or resource group to come up.
    An operator can confirm that they are up in the Azure portal and close the task to retry Discovery.
I requested an action (like Start, Stop, or Modify) on a VM, but nothing happens
• Some VMs are under change control. If the request meets a change condition defined by the cloud
admin, then the request must be approved before the action can occur.
• After the change request is approved, you may need to reload the VM form to see the Proceed with
change action.
• It is possible that the request failed on the Azure platform. In such a case, the system generates a CI
task for the cloud operator to resolve the issue.
This task shows that a requested instance named global-by-1 has generated an error.
3. Open the task and identify the error. The description shows that global-by-1 is a duplicate name.
4. Click the link in the Request item field to open the original request.
5. Enter a unique name for the virtual machine and click Update for the catalog task to appear. The
clone name for the virtual machine has the following requirements:
• Maximum length is 80 characters. The special characters %, /, and \ are escaped in a clone name:
a slash (/) is escaped as %2F or %2f, a backslash (\) is escaped as %5C or %5c, and a percent
sign (%) is escaped as %25. The percent sign is not escaped when it starts an escape sequence.
(See the sketch after this procedure.)
• The clone name must be unique within a folder.
6. Click Close Task. The Cloud Operations Portal appears and the task shows a state of Closed.
7. The workflow resumes and checks the new clone name with vCenter.
8. vCenter approves the name, and provisioning continues.
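A minimal sketch of the clone-name escaping rules from step 5. This helper is illustrative only and not part of the product; as a simplification it escapes every percent sign, whereas the product leaves a percent sign alone when it already starts an escape sequence.

// Escape special characters in a clone name and enforce the 80-character cap.
function escapeCloneName(name) {
    return name
        .replace(/%/g, '%25')   // escape % first so later escapes are not re-escaped
        .replace(/\//g, '%2F')  // slash
        .replace(/\\/g, '%5C')  // backslash
        .substring(0, 80);      // maximum clone name length
}
// Example: escapeCloneName('web/clone\\01') returns 'web%2Fclone%5C01'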
Note: You can activate Performance Analytics content packs and in-form analytics on instances
that have not licensed Performance Analytics Premium to evaluate the functionality. However, to
start collecting data you must license Performance Analytics Premium.
Content packs
The Performance Analytics widgets on the dashboard visualize data over time. These visualizations allow
you to analyze your business processes and identify areas of improvement. With content packs, you can
get value from Performance Analytics for your application right away, with minimal setup.
Note: Content packs include some dashboards that are inactive by default. You can activate these
dashboards to make them visible to end users according to your business needs.
To enable the content pack for Cloud Management, an admin can navigate to Performance Analytics >
Guided Setup. Click Get Started, and then scroll to the Cloud Management section. The guided setup
takes you through the entire setup and configuration process.
Index
A

active change window 952
active connection information
  Solaris 371
add a service to a group 635
Add segment of service 652
administer 854
aggregated connections 677
aggregated edges 677
alert
  impact 995, 998, 999
  monitoring 987, 990, 992, 993, 995, 998, 999, 1003
  report 1003
alert maintenance 977
alert rules 849
Align versions of customized probes and sensors 266
Amazon
  approve the request 1193
  catalog items 1117
  cloud administration 1111
  cloud administrator tasks 1112
  complete the provision task 1196
  configure 1079
  configuring tags 1174
  create catalog items 1114
  create catalog items from EC2 images in Cloud Management 1113
  prices 1119
  pricing catalog items 1119
  provision a virtual machine 1192
Amazon AWS
  configure 1055
Amazon AWS Cloud
  installed tables 1066
  installed user roles 1065
  plugins 1063, 1065
  properties 1072
Amazon CloudFormation 1122
Amazon CloudWatch API 1249
Amazon EBS 1121
Amazon EBS snapshots
  discovering 1122
  viewing 1122
Amazon EBS volumes
  discovering 1121
  viewing 1121
Amazon EC2
  approve images 1112
  asset management 1091
  catalog items from EC2 images 1118, 1120
  Cloud Management
    service catalog 1113
  create catalog items from EC2 images 1118
  image permissions 1084
  installed with 1063
  service catalog 1192
Amazon Elastic Block Store 1121
Amazon Resource Optimization reports 1248
Amazon resource optimization rule
  configure 1247
Amazon virtual machine 1205
Amazon VM
  request 1216
Amazon VMs
  requesting 1214
Amazon Web Services
  account details 1078
  creating an AWS account 1078
  defining AWS policies 1083
  external credential storage 1079
  security tokens 1082
Amazon Web Services (AWS) cloud
  discovering 380
Analyzing
  service 666
Application Dependency Mapping (ADM)
  application relationships 177
  CI relationships 178
  overview 173
  requirements
    Windows 2000 server 173
application form
  personalize 514
application profile discover (APD)
  APD files, create 509
  APD folder, create 505
  application, classify 498
  Discovery, run 501
  enable 495
  environment file location, set 495
  environment file, create 496, 506
  environmental details, view 510
  Linux 495, 495, 496, 497, 498, 501, 505, 506, 507, 508, 510
  proxy servers 495, 505, 508
  SAP application on a server 509
  version file, create 497, 507
  Windows 495, 505, 508
  Windows 2003 server 505
application profile discovery
  customization 511
application relationships
  Application Dependency Mapping (ADM) 177
ARM parameters
  variables 1139
ARM templates 1138
arrange services in groups 635
assets
  tracking in Amazon EC2 1091
Assign resources
  multiple items request
    two-step checkout 1244
assignments
  in the base system 1188
  provisioning rules 1182
S

SAN storage Discovery 443
scale environment 849
SCCM
  Asset Intelligence scheduled imports 82
  integration with Discovery 81
  tracking software with 84
schedule
  application 638
  applicative CI 638
  snapshot Azure 1231, 1231
scheduled jobs 831, 845
Scheduled jobs installed with Event Management 831
scheduler
  billing data download 1178
SCPRelay Probe 226
script
  on classification 100
script includes 831
Script includes installed with Event Management 831
scripted integration 896
security
  cloud operations 1059
security groups
  AWS VPCs 1123
segment of business service 652
Service Analytics documentation links 813
service health
  monitoring 987
service level management
  cloud management 1170
  Cloud Management 1172
  cloud provisioning 1170
service mapping
  service groups 635
Service Mapping
  activating 538
  activation 538
  add business services manually 620
  application permissions 612
  application rights 612
  behavior 644
  Best Practice 614, 616, 616, 617, 619, 620
  business rules 535
  business service 614, 628
  business services 516
  check connectivity 624
  commands 543, 561, 609
  configuration 702, 702, 721
  Connect to existing service 652
  Create new service 652
  Create new service from here 652
  create pattern 730
  creating a traffic based discovery rule 642
  creating a traffic-based discovery rule 642
  credentials 540, 542
  Credentials 537
  CSV file 614
  customization 721, 730, 792
  customize map view 696
  customizing maps
    merging connector lines 696
  data collection 517
  Debug mode 748
  discovery flow 517
  discovery patterns 730
  Discovery patterns 762
  discovery rule 642
  discovery schedules 638
  domain separation 628, 702
  entry point 735
  entry point connectivity 624
  F5 644
  finalizing patterns 792
  Get started 537
  Guided Deployment 621, 624, 625, 627
  import business services 614
  import entry points 614
  import from CSV 617
  Installation 537
  IP address connectivity 624
  load balancer 644
  lsof 522
  Mapping
    Checking results 646
    Discovery messages 646
    Errors 646
    Results 646
  mapping flow 517
  mapping patterns 730
  maps 666
  MID Server
    IP range 706
  MID Server algorithm 704
  MID Server choice 706
  MID Server pool 706
  MID Server selection 704
  MID Server selection criteria 704
  MID Server selection parameter 704
  Netflow 522
  netstat 522
  network devices permissions 612
  network devices rights 612
  new algorithm 704
  OID 609
  Pattern Designer 795
  patterns
    domain separation 737
  plan 614
  planned business service 624, 625, 627
  planned entry points 624
  Planner 614
  planning
    create business service 621, 625
    discover business service 625
    fine-tune business service 627
    fix business service 627
    implement 627
    map business service 625
  planning business service 616, 616, 617, 619, 620
  planning phase 616, 616, 619
  populate phase manually 620
W
WebLogic
sudo 423
Windows
data
help the help desk 324
Windows operating system
fetch data 240
Windows VMs
VMware 1161
WMI method invocation
Pattern Designer 661, 743, 743, 743, 743, 754, 757, 759, 762, 782, 784, 787, 795
WMI method invocation operation 784
WMI query operation 785
workstation
reclassifying
ServiceNow Discovery 109
ServiceNow Discovery
reclassifying 109