Baseline
Historical Services
Configuration and
Administration Reference
Document Revision 1.1
Telvent
10333 Southport Rd., S.W.
Calgary, Alberta, Canada, T2W 3X6

Telvent
7000A Hollister Rd.
Houston, Texas, U.S.A., 77040-5337
The information contained in this document is confidential and proprietary to Telvent Canada Ltd. It is not to be copied or disclosed for any purpose except as specifically
authorized in writing by Telvent. Although the information contained herein was correct and verified at the time of publication, it is subject to change without notice.
Trademark Acknowledgments
The following is a list of trademarks which may appear in this document:
RealTime Service, XOS, Historical Service, MICRO/1C, PoleCAT, SAGE 2000, SAGE 2100 are all trademarks of Telvent Canada Ltd
AutoCAD is a registered trademark of Autodesk, Inc.
DATEK is a registered trademark of Datek Industries, Ltd.
Ethernet is a registered trademark of Xerox Corporation
MODBUS is a registered trademark of Gould, Inc.
Excel, MS-DOS, WINDOWS, WINDOWS NT are registered trademarks of Microsoft Corporation.
SYBASE, Data Workbench, SQL Server, are registered trademarks of Sybase, Inc.
Document Revision History
Baseline Revision   SIG Revision   Date   Additions and Changes
1.0 2003/05/26 For XOS 7.3.1 release (Remote XOS) no document changes made.
1.0 2003/07/23 Historical Service upgrade 7.3.1, no document changes.
1.1 2003/10/31 SPR 30067
SPR 28427
SPR 30081
For details, refer to “Detailed Document Revision History”.
1.1 2004/02/06 Release for OASyS DNA 7.4
Contacts
Lead Writer: Author variable
Project Manager: PM variable
Project Leader: PL variable
Software Version
This document describes the following components of OASyS® DNA:
• Historical 7.4
• XOS 7.4
MODULE 1
Introduction
1.1 HistoricalDB Structure ................................................................................................................... 1-2
1.2 Basic Historical Service Procedures ............................................................................................... 1-3
MODULE 2
Editing Historical Data
2.1 A Note on HistoricalDB Time ........................................................................................................ 2-2
2.2 Editing the timeline Database ....................................................................................................... 2-2
2.2.1 The Timeline Collect Editor.................................................................................................. 2-2
2.2.1.1 The Point Select Dialog Box ........................................................................................ 2-4
2.2.1.2 The Timeline Edit Dialog Box...................................................................................... 2-5
2.2.1.3 Adding New Records to the timeline Database .......................................................... 2-6
2.2.1.4 Modifying timeline Database Records ......................................................................... 2-7
2.2.1.5 Filtering Data Displayed on the Timeline Collect Editor ........................................... 2-7
2.3 Editing the accum Database.......................................................................................................... 2-7
2.3.1 The Accum Hour Editor ........................................................................................................ 2-8
2.3.1.1 The Accum Edit Dialog Box ......................................................................................... 2-9
2.3.1.2 Adding New Records to the accum Database .......................................................... 2-10
2.3.1.3 Modifying accum Database Records ......................................................................... 2-10
2.3.1.4 Filtering Data Displayed on the Accum Hour Editor ............................................... 2-10
2.4 Editing the CommStats Database................................................................................................ 2-11
2.4.1 The Remote Communication Statistics Editor................................................................... 2-11
2.4.1.1 The Communication Statistics Edit Dialog Box ........................................................ 2-13
2.4.1.2 Adding New Records to the CommStats Database .................................................. 2-14
2.4.1.3 Modifying CommStats Database Records ................................................................. 2-14
2.4.1.4 Filtering Data Displayed on the Remote Communication Statistics Editor............ 2-14
2.5 The Connection Statistics Editor................................................................................................. 2-15
2.5.1 The Connection Statistics Edit Dialog Box ........................................................................ 2-17
MODULE 3
Data Collection and Data Summary
3.1 The Collection Entry Dialog Box................................................................................................... 3-1
3.1.1 The Collect Select Dialog Box .............................................................................................. 3-2
3.1.2 Data Collection by Exception............................................................................................... 3-3
3.2 Configuring Collect Points ............................................................................................................ 3-3
MODULE 4
Historical Application Installation Tool
4.1 SQL Application Definition ........................................................................................................... 4-2
4.2 Historical Application Definition.................................................................................................. 4-3
4.2.1 Application Sequence Files .................................................................................................. 4-3
4.2.2 Application Suite Files .......................................................................................................... 4-4
4.3 Directory Structure ........................................................................................................................ 4-5
4.4 The Full Historical Installation Option ......................................................................................... 4-5
4.5 Selective Application Installation ................................................................................................. 4-6
4.6 System-Specific Configuration...................................................................................................... 4-7
MODULE 5
Historical Replication
5.1 MS SQL Replication Components and OASyS DNA Mapping ..................................................... 5-1
5.1.1 Replication Types.................................................................................................................. 5-1
5.1.1.1 Snapshot Replication................................................................................................... 5-1
5.1.1.2 Merge Replication ....................................................................................................... 5-1
5.1.1.3 Transactional Replication............................................................................................ 5-2
5.1.2 Globally Unique Identifiers .................................................................................................. 5-2
5.1.3 Publications and Articles ...................................................................................................... 5-2
5.1.4 Publishers .............................................................................................................................. 5-2
5.1.5 Subscriptions ......................................................................................................................... 5-3
5.1.5.1 Push Subscriptions ....................................................................................................... 5-3
5.1.5.2 Pull Subscriptions......................................................................................................... 5-3
5.1.5.3 Anonymous Subscriptions ........................................................................................... 5-3
5.1.5.4 Subscribers as Publishers ............................................................................................. 5-3
5.1.6 Distributor............................................................................................................................. 5-4
5.1.7 Replication Agents ............................................................................................................... 5-4
5.1.7.1 Snapshot Agents.......................................................................................................... 5-4
5.1.7.2 Merge Agents .............................................................................................................. 5-4
5.2 The SQL Server Enterprise Manager............................................................................................. 5-5
5.3 Historical Data Replication Configuration................................................................................... 5-6
5.3.1 Viewing Replication Component Properties....................................................................... 5-8
MODULE 6
Archiving
6.1 The Archive Process ....................................................................................................................... 6-1
6.1.1 xis_archive ............................................................................................................................. 6-2
6.1.2 cmx_archive........................................................................................................................... 6-3
6.2 The Archive/Dearchive Edit Dialog Box........................................................................................ 6-4
6.2.1 The Schedule Summary Window ......................................................................................... 6-4
6.2.1.1 The Schedule Configuration Dialog Box .................................................................... 6-6
6.2.1.2 Archive Cutoff.............................................................................................................. 6-9
6.2.1.3 Archive Intervals .......................................................................................................... 6-9
6.2.2 The Add Device Editor........................................................................................................ 6-12
6.2.3 The Media Initialization Dialog Box.................................................................................. 6-13
MODULE 7
HistoricalDB Databases
7.1 The timeline Database.................................................................................................................... 7-1
7.1.1 tag Table................................................................................................................................ 7-2
7.1.2 collect Table ........................................................................................................................... 7-2
7.1.3 hour Table.............................................................................................................................. 7-2
7.1.4 day Table ............................................................................................................................... 7-3
7.1.5 month Table........................................................................................................................... 7-3
7.1.6 year Table .............................................................................................................................. 7-3
7.2 The accum Database ...................................................................................................................... 7-3
7.2.1 hour Table.............................................................................................................................. 7-4
7.2.2 day Table ............................................................................................................................... 7-4
7.2.3 month Table........................................................................................................................... 7-4
7.2.4 year Table .............................................................................................................................. 7-4
7.3 The event Database........................................................................................................................ 7-4
7.3.1 summary Table....................................................................................................................... 7-5
7.4 The CommStats Database .............................................................................................................. 7-5
7.4.1 RemPeriodStats Table ............................................................................................................. 7-6
7.4.2 ConnPeriodStats Table ......................................................................................................... 7-6
7.5 The archive Database ..................................................................................................................... 7-7
7.5.1 schedule Table ....................................................................................................................... 7-8
7.5.2 device Table ........................................................................................................................... 7-8
7.5.3 catalog Table.......................................................................................................................... 7-8
7.5.4 dumpSchedule Table .............................................................................................................. 7-9
7.5.5 rearchive Table ....................................................................................................................... 7-9
7.5.6 validDeviceTypes Table ......................................................................................................... 7-10
Index
Introduction
Long-term (historical) data is stored in HistoricalDB for reporting and accounting purposes. A
full implementation of MS SQL Server is used to manage historical data effectively and to
administer related historical services.
HistoricalDB is composed of databases that are designed to receive data from RealTimeDB.
HistoricalDB in relation to other services (Figure 1-1) provides an overview of how
HistoricalDB relates to RealTimeDB, XOS, and offline media.
[Figure 1-1: HistoricalDB in relation to other services. Data flows from the RealTimeDB analog, rate, remote, and status tables into HistoricalDB, and from HistoricalDB to offline media. The archive table holds data for configuring the archive process.]
HistoricalDB receives and stores the following:
• Selected data from analog, remote, status, and accumulator-type rate points
• Event and alarm messages, and related information
• Communication statistics
Data obtained from analog and rate points can be summarized to provide hourly, daily,
monthly, and yearly summaries. These data can be manually edited, plotted, and then
archived to offline storage.
The modules in this component discuss the tools provided for configuring and
administering historical services and for managing historical data.
1 Double-click the NMC icon to open the OASyS DNA Network Management Console
(Figure 1-2). For more information, refer to the Network Configuration Guide.
Alternatively, you can open the OASyS DNA Network Management Console
(Figure 1-2) by doing the following:
8 Right-click Historical.
9 Click Startup, Failover, or Shutdown.
The HistoricalDB Edit Menu (Figure 2-1) allows you to create or modify a HistoricalDB
record and define its data value at any specified timestamp in history.
NOTE You cannot edit or display newly created or modified HistoricalDB records in
XOS dialog boxes or windows until the top of the hour has passed.
• Click the HistoricalDB Edit Menu icon (Figure 2-2) on the XOS toolbar.
The following table lists the buttons on the HistoricalDB Edit Menu (Figure 2-1) and the
corresponding action when these are chosen.
Button Action
Timeline Data Opens the Timeline Collect Editor (Figure 2-3)
Accumulator Data Opens the Accum Hour Editor (Figure 2-6)
RTU Communication Statistics Opens the Remote Communication Statistics Editor
(Figure 2-8)
Communication Line Statistics Opens the Connection Statistics Editor (Figure 2-10)
Dismiss Closes the HistoricalDB Edit Menu (Figure 2-1).
If the remote provides the information, the timeline and event databases also record the
milliseconds part of the seconds. This is recorded in the timeline..hour table’s minMilli and
maxMilli fields and in the event..summary table’s msec field.
Summary data is timestamped according to the setting of the TSTAMP_PREFERENCE registry
setting. If this is set to TOP_OF_PERIOD, then data is timestamped with the start of the time
period. If it is set to BOTTOM_OF_PERIOD, then the data is timestamped with the end of the
time period. The variable also determines to which interval the boundary times are
assigned.
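For example (an illustration only), assume an hourly summary is computed over the interval 09:00 to 10:00: with TSTAMP_PREFERENCE set to TOP_OF_PERIOD the summary record is timestamped 09:00, and with it set to BOTTOM_OF_PERIOD the record is timestamped 10:00.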
The following table lists the column headings, fields, and buttons on the Timeline Collect
Editor (Figure 2-3).
Field/Column/Button Description
Time This column shows the time interval over which the point’s values (as shown in
the Value column) and data quality status (as shown in the Status column) are
valid. Time is further split into one-minute intervals. For more information, refer
to A Note on HistoricalDB Time (Section 2.1).
Value This column shows the point’s value at the start of the hour.
Status This column shows the point’s data quality status. Data quality status (Table 2-3)
lists the data quality indicators.
Point: This field is for the name of the record. Click the arrow to the right of this field to
open the Point Select dialog box (Figure 2-4). For more information, refer to The
Point Select Dialog Box (Section 2.2.1.1).
Start Time: Acting as a filter, this field specifies the start time of the time interval shown in
the Time column.
End Time: Acting as a filter, this field specifies the end time of the time interval shown in
the Time column.
Add Click this button to add a new record to the timeline database. Refer to The
Timeline Edit Dialog Box (Section 2.2.1.2).
Filter Clicking this button displays data values on the Timeline Collect Editor
(Figure 2-3) that are filtered according to the Point:, Start Time: and End Time:
field entries.
Status Indicator What it means
NULL or “ “ Collected data is good
X Status of the database could not be determined
O Point is offscan (i.e. the point has been removed from the nor-
mal polling cycle)
M Point has been placed in manual mode
A Point is in alarm state
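Because the Timeline Collect Editor edits records in the timeline database, the same data can also be inspected directly from the SQL Query Analyzer. The following query is a minimal sketch only: the point name is hypothetical, and the column names (tag, time, value, status) are assumptions that should be checked against the actual timeline..hour schema before use.
-- Sketch: list hourly values for one point over a three-hour window.
-- Point name and column names are assumptions, not the verified schema.
select time, value, status
from timeline..hour
where tag = 'STATION1_FLOW.curval'
  and time >= '2004-02-06 06:00'
  and time <  '2004-02-06 09:00'
order by time
go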
• Click the arrow next to the Point: field on the Timeline Collect Editor (Figure 2-3).
The following table lists the fields and buttons on the Point Select dialog box (Figure 2-4).
Button/Field Description
Enable Filters Click this button to filter the timeline points that are displayed on
the Point Select dialog box (Figure 2-4).
Remote: Enter the remote name by which to filter.
Group: Enter the group name by which to filter.
Point Name This column lists the collect points.
Point Description This column lists the collect points’ descriptions.
Point Field Description This column lists the descriptions of the collect point’s fields.
Search Click this button to display the data that correspond to the pro-
vided filters.
Dismiss Click this button to close the dialog box.
The following table lists the fields and buttons on the Timeline Edit dialog box (Figure 2-5).
Table 2-5 Fields and buttons on the Timeline Edit dialog box
Field/Button Description
Time: Enter in this field the time in the prescribed format. For more information,
refer to A Note on HistoricalDB Time (Section 2.1).
Value: Enter in this field the point’s value.
Add or Modify Click this button to add the new data into the timeline database.
Dismiss Click this button to close the Timeline Edit dialog box (Figure 2-5).
NOTE When a new record is added, the letter “M” appears in the Status column on
the Timeline Collect Editor (Figure 2-3) to indicate that the record has been added man-
ually.
1 Click the row header of the record on the list displayed on the Timeline Collect Editor
(Figure 2-3) to open the Timeline Edit dialog box (Figure 2-5).
2 Type the time.
3 Type the value.
4 Click Modify.
5 Click Dismiss.
The following table lists the column headings, fields, and buttons that appear on the Accum
Hour Editor (Figure 2-6).
Table 2-6 Fields and other items on the Accum Hour Editor
Column Heading/Field/Button Description
Time This is the timestamp of the hour. For more information, refer to A Note
on HistoricalDB Time (Section 2.1).
Adj Count This is the adjusted accumulator value.
Volume This is the volume during the hour.
Status This is the data quality status. Refer to Data quality status (Table 2-3).
Point: This field contains the name of the record. Type or select a rate record.
Start Time: Acting as a filter, this field specifies the start time of the time interval
shown in the Time column.
End Time: Acting as a filter, this field specifies the end time of the time interval
shown in the Time column.
Add Click this button to add a new record to the accum database. Refer to
The Accum Edit Dialog Box (Section 2.3.1.1).
Filter Click this button to display the data values on the Accum Hour Editor
(Figure 2-6) filtered according to the Point:, Start Time:, and End
Time: field entries.
The following table provides brief descriptions of the fields and buttons on the Accum Edit
dialog box (Figure 2-7).
Table 2-7 Fields and buttons on the Accum Edit dialog box
Field/Button Description
Time: Enter the time in the prescribed format.
Adjusted Count: Enter in this field the point’s adjusted accumulator count.
Volume: Enter in this field the volume during the hour.
Add or Modify Click this button to add the new data into the accum database.
Dismiss Click this button to close the Accum Edit dialog box (Figure 2-7).
NOTE When a new record is added, a status of M is provided to indicate that the record
has been added manually. This status is shown in the Status column on the Accum
Hour Editor (Figure 2-6).
1 Click the row header of the record on the list displayed on the Accum Hour Editor
(Figure 2-6) to open the Accum Edit dialog box (Figure 2-7). Refer to The Accum Hour
Editor (Section 2.3.1).
2 Type the time, adjusted count, and volume.
3 Click Modify.
4 Click Dismiss.
1 Open the Accum Hour Editor (Figure 2-6). Refer to The Accum Hour Editor
(Section 2.3.1).
2 Select a point using the Point Select dialog box (Figure 2-4), or type the name of the
point. Refer to The Point Select Dialog Box (Section 2.2.1.1).
3 Type the start time and end time.
4 Click Filter.
• Click RTU Communication Statistics on the HistoricalDB Edit Menu (Figure 2-1).
The following table provides descriptions of the fields and buttons on the Remote
Communication Statistics Editor (Figure 2-8).
Table 2-8 Fields and Buttons on the Remote Communication Statistics Editor
Field/Button Description
Remote: This field contains the name of the remote. Click the arrow to the right of this
field to open a select dialog box, which allows you to select a remote record.
Connection: This field contains the name of the connection. Click the arrow to the right of
this field to open a select dialog box, which allows you to select a connection
record.
Start Time: Acting as a filter, this field specifies the start time of the time interval shown in
the Time column.
End Time: Acting as a filter, this field specifies the end time of the time interval shown in
the Time column.
Add Click this button to add a new record to the CommStats database. Refer to
The Communication Statistics Edit Dialog Box (Section 2.4.1.1).
Filter Clicking this button displays data values on the Remote Communication Statis-
tics Editor (Figure 2-8) that are filtered according to the Remote: or Connec-
tion: field entries and the Start Time: and End Time: field entries.
The following table describes the column headings that appear on the Remote
Communication Statistics Editor (Figure 2-8).
Refer to Column headings on the Remote Communication Statistics Editor (Table 2-9) for
descriptions of the fields that appear on the Communication Statistics Edit dialog box
(Figure 2-9). Wherever slight variations in the way the fields are named occur, refer to the
following equivalence table:
1 Click the row header of the record on the list displayed on the Remote Communication
Statistics Editor (Figure 2-8) to open the Communication Statistics Edit dialog box
(Figure 2-9). Refer to The Remote Communication Statistics Editor (Section 2.4.1).
2 Type or select the remote.
3 Change field entries that need to be changed.
4 Click Modify.
5 Click Dismiss.
NOTE When Modify is available, the Remote:, Time:, and Connection: fields are not
available, which indicates that these fields cannot be modified.
NOTE If only one of these time values is specified, a message box appears stating that
both values are required. If both fields are left blank, the default values are applied. The
default for End Time: is the last time the Remote Communication Statistics Editor
(Figure 2-8) was opened; the default for Start Time: is three hours earlier than the
default end time.
5 Click Filter.
• Click Communication Line Statistics on the HistoricalDB Edit Menu (Figure 2-1).
The following table provides a brief description of each of the items that appear on both
the Connection Statistics Editor (Figure 2-10) and the Connection Statistics Edit dialog box
(Figure 2-11).
Field/Button Description
Connection: This field contains the name of the connection
Start Date: Acting as a filter, this field specifies the start date of the time/date interval
shown in the Time column
End Date: Acting as a filter, this field specifies the end date of the time/date interval
shown in the Time column
Add Clicking this button opens the Connection Statistics Edit dialog box
(Figure 2-11).
Filter Clicking this button filters the data displayed on the Connection Statistics Edi-
tor (Figure 2-10) according to the Connection:, Start Date:, and End Date:
entries.
Item Description
Time Time data was collected/recorded
Connection Name of the connection
Good Message Number of remote messages processed successfully
Bad Message Number of remote messages that failed
Throughput Percentage of successful connections for the time period
Conn None Number of times that there was no connection
Connected Number of times that a connection was established
Connecting Number of times that a connection was attempted
Conn Fail Number of times that a connection failed
Conn Error Number of connection errors
Conn Retry Number of times connection was retried
Poll Cycle Number of poll cycles
Off Line Number of times that the line was found to be offline when
connection was attempted
DB Error Number of database errors
Modem Unavail Number of times that the modem was unavailable for connec-
tion
Modem Reserve Number of times that a modem reservation request was
granted
Modem Hangup Number of times the modem hung up before a connection was
completed
Tx Error Number of system-level transmission errors
Tx Short Number of times that some of the bytes in a message were not
transmitted
Rx Error Number of system-level reception errors
Rx Short Number of times that some of the bytes in a message were not
received.
Rx None Number of times that no message was received
Normal Number of good connection sequences
NOTE Some of the fields that appear on the Connection Statistics Edit dialog box
(Figure 2-11) also appear as column headings on the Connection Statistics Editor
(Figure 2-10). For a description of each of the fields on the Connection Statistics Edit
dialog box (Figure 2-11), refer to Column headings on the Connection Statistics Editor
(Table 2-11).
1 Open the Connection Statistics Editor. Refer to The Connection Statistics Editor
(Section 2.5).
2 Click Add. This opens the Connection Statistics Edit dialog box (Figure 2-11) with an
Add button.
The collection of some types of data (such as event messages and communication statistics)
and their transfer into HistoricalDB are performed automatically. However, the collection of
data from analog, rate, remote, and status tables can be configured.
• Click Historical... on the Analog Edit dialog box, Rate Edit dialog box, Remote Edit
dialog box, or Status Edit dialog box. For information on these dialog boxes, refer to the
RealTime Services Configuration and Administration Reference.
The Collection Entry dialog box (Figure 3-1) displays a list of fields (of a point) from which
data is currently collected. These fields are described in Fields and Other Items on the
Collection Entry Dialog Box (Table 3-1).
Table 3-1 Fields and Other Items on the Collection Entry Dialog Box
Field/Button/Column Name Description
Description This column lists a description of each of the point’s fields from which data is
collected.
Collect Type This column lists the collection type that is used to collect data from the
point.
Column This column lists the names of the point’s fields from which data is collected.
Name: This contains the name of the point’s field from which to collect data. Click
the arrow beside the Name: field to open the Collect Select dialog box
(Figure 3-2), which allows you to choose the point’s field from which to col-
lect data. Refer to The Collect Select Dialog Box (Section 3.1.1).
Collection Type: The collection type can be one of the following:
• sample (Data is collected at periodic intervals. Choosing this type acti-
vates the fields associated with Collect Every:.)
• offline (Data is not collected. This setting can be used to stop collecting
data temporarily from the point without losing other settings.)
• exception (Data is collected only when there is a significant change.
Choosing this type activates the Deadband: field.)
Collect Every: This field contains the collection interval, if data is collected at set intervals
(i.e. Collection Type: is set to sample). For example, to collect every half
hour, enter 30 and minute. The interval must be between one minute and
60 minutes (inclusive) and must divide the hour evenly (for example: one
minute, five minutes, six minutes, 20 minutes, etcetera). The Collect Every:
fields cannot be set beyond an hour.
Deadband: The deadband indicates the amount by which the acquired value must
change from the previous value before a new value is collected and then sent
to HistoricalDB. This field is available if Collection Type: is set to exception.
Data collection by exception applies to curval, cursta and currate only. There-
fore, the reported data is always in engineering units (EGU). Reporting for
analog and rate points from the remote uses a different deadband, which is
discussed in the RealTime Services Configuration and Administration Refer-
ence. The collection deadband resides in the host and is defined in raw units.
For example, a deadband value of 10 means that only values that are at least
10 raw counts higher or lower than the last value sent to HistoricalDB are
transferred. The deadband must be a positive value.
Enable Summary For each collect point, summarization of data in the timeline and accum data-
bases is enabled by selecting the Enable Summary check box. Refer to Data
Summary (Section 3.3).
Fast Trend only (no data to historian) Select this check box to collect up to 200 realtime
values within a circular buffer; a sample is also collected at the top of the hour. This
is supported only for collect-by-exception, and summary is not supported.
NOTE Clicking a row entry on the Collect Select dialog box (Figure 3-2) automatically
loads the Name: field of the Collection Entry dialog box (Figure 3-1).
When data is collected by exception, a new value is sent to HistoricalDB when any of the
following occurs:
• The value has increased or decreased by an amount greater than or equal to the
value in the Deadband: field (which cannot be a negative number).
• The data’s quality status has changed. Refer to Data quality status (Table 2-3).
• The time is the start of the hour.
Collecting data on an exception basis usually requires less disk space. (Collecting data by
exception from RealTimeDB into HistoricalDB is similar to, but independent of, reporting
data by exception from RTUs into RealTimeDB, as discussed in the RealTime Services
Configuration and Administration Reference.)
3 Select a field by clicking a row entry. This automatically loads the selected field into the
Name: field on the Collection Entry dialog box (Figure 3-1).
4 Select the collection type.
• Select sample to collect data at periodic intervals.
• Select offline to temporarily stop collecting data.
• Select exception to collect data only when there is a significant change.
5 If Collection Type: is set to sample, type the collection interval in the Collect Every:
field.
6 If Collection Type: is set to exception, type in the Deadband: field the amount by which
the acquired value must change before a new value is collected.
7 Select Enable Summary if required.
8 Click Add.
Other Tasks
Refer to the Guía de Navegación Estándar e Información de Configuración for more
information on the following tasks:
Timestamps and the data quality status of collected values are also recorded. These are
stored in the hour table of the timeline database. For the average, the data quality status
is the highest-precedence status over the hour. For example, an alarm (A) status is
considered a higher-precedence status than manual (M). Refer to Data quality status
(Table 2-3).
Similarly, at the start of each hour, data in the hour table of the timeline database is
examined and the following are determined:
Timestamps and the data quality status of these values are also recorded. These are then
stored in the day table of the timeline database. A new day record is created as soon as the
day starts. The day record is also updated each hour.
Similarly, at the start of each day, the hour data is examined and the summaries in the
month table of the timeline database are updated. Also, the month data is examined and
the summaries in the year table of timeline database are updated. (As with day records, a
new month and year record are created as soon as the month or year starts. These are also
updated daily.)
1 Open the Rate Edit dialog box. For more information, refer to the RealTime Services
Configuration and Administration Reference.
2 Type or select the name of the rate record for which data is summarized.
3 Click Averages & Integration... to open the Rate Averages & Integration dialog box
(refer to the RealTime Services Configuration and Administration Reference).
4 Select Enable Integrations.
Once data summary in the accum database is enabled, the hourly, daily, monthly, and
yearly integrated accumulator counts (or volumes) are updated. Updates are done at
the start of each hour.
Historical Application
Installation Tool
The Historical Application Install Tool (Figure 4-1) is designed to assist the system
integrator or system administrator in installing RDBMS-resident parts of the OASyS DNA
HistoricalDB. It enables the user to install global server settings, which OASyS DNA expects
the Historical server to possess. This tool also aids in installing individual applications or in
upgrading existing applications.
NOTE There are three phases involved in setting up the Historical component of OASyS
DNA: planning, hardware setup, and software installation. The Historical Application
Install Tool (Figure 4-1) provides assistance during the software installation phase.
1 On your local disk, navigate to Program Files > Telvent > bin.
2 Double-click HDBInstallTool.exe.
The following table lists the fields, buttons, and tabs that appear on the Historical
Application Install Tool (Figure 4-1).
Item Description
Server: This field contains the name of the active SQL server to which the
application connects. The drop-down list contains the names of
known SQL servers that are visible on the network.
Configuration File: This field specifies the server configuration file. Selecting a configu-
ration file enables the Application, Server Cfg, and Error tabs.
Refer to Application Sequence Files (Section 4.2.1) and Application
Suite Files (Section 4.2.2).
Installation Configuration Directories This displays all the application SQL script files, which
describe baseline application databases. It also displays all the server SQL script
files, which describe MSSQL databases such as model, master, temp, etcetera.
Refer to Directory Structure (Section 4.3).
Set Directories This allows you to select a configuration directory. Refer to Directory
Structure (Section 4.3).
Install All Clicking Install All initiates a full Historical installation. The Historical
Application Install Tool (Figure 4-1) ignores individual selections that are made.
Server configuration elements are processed first, followed by application
elements under the Application tab. Refer to The Full Historical Installation
Option (Section 4.4).
Application This tab is used to manage selective application upgrades. For more
information, refer to SQL Application Definition (Section 4.1) and
Historical Application Definition (Section 4.2).
Server Cfg This tab is used to manage selective global setting upgrades. Refer
to Selective Application Installation (Section 4.5).
Error This tab contains a copy of the errors that are encountered during
the installation process.
Install Selected Clicking this button installs the selected application and server
upgrades. Refer to Selective Application Installation (Section 4.5).
In general, application components that reside in the RDBMS are one of the following:
For a reasonably complex application, break up these aspects of the application into
individual files. A construct called the applications sequence file (*.seq) is provided to allow
developers to specify the order in which individual files are loaded. This file is an ordered
list of the SQL files that describe the application.
Dependencies may exist between applications. This means that there is an order in which
applications must be loaded into the RDBMS. The application suite file (*.ste) is an ordered
list that specifies the order in which individual applications are installed.
The Historical Application Install Tool (Figure 4-1) reads and uses the applications sequence
and application suite files to construct the tree view that is shown on the Application tab.
There are two levels of application grouping that you must provide. One of these is the
application sequence, which defines the order in which the SQL scripts that define an
application are loaded. The other is the application suite, which defines an application with
respect to the other applications on which it depends.
Syntax Elements of an Application Sequence File (Table 4-2) describes the elements of
application sequence files.
Element Description
Comment Lines Any line that starts with a #
Group Header A line containing simple text bounded by square brack-
ets [ ]
SQL File Name A line containing a text specification of an SQL Script
file name
Blank lines are allowed but ignored.
###################################################################
# archive.seq:
# This is the application sequence file for archive. It specifies
# the order in which the SQL script files of archive should be
# loaded. The SQL script files are organized into different groups
# that are more intuitively obvious when examined in the Historical
# Application Install Tool.
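The remainder of such a sequence file consists of group headers and the SQL script file names to be loaded under each group, in order. The following sketch is illustrative only; the group and file names are hypothetical and simply demonstrate the syntax described in Syntax Elements of an Application Sequence File (Table 4-2).
###################################################################
# Group and file names below are hypothetical examples.
[Archive Tables]
archive_tables.sql
[Archive Procedures]
archive_procedures.sql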
NOTE For global server configuration, the file suffix “.svr” denotes that the file con-
tains server configuration information.
Element Description
Comment Line • Any line that begins with a #
• Comment lines are ignored when Application Suite files are pro-
cessed.
Application Definition Line The application definition line has the following basic structure:
[<Application Name>] <white space><Application Sequence File Spec>
Where:
• <Application Name> is arbitrary text that is displayed on the tree-
view browser. The [ ] provides boundaries to the name.
• <white space> is any sequence of spaces and tabs.
• <Application Sequence File Spec> provides the name and location
of the sequence file for the application.
Blank lines are allowed, but are ignored.
###################################################################
# Baseline.ste:
# This is the baseline Historical application suite file. It
# specifies the order in which application sequence files should be
# processed when installing the baseline applications.
#
###################################################################
[Archive] archive\archive.seq
[Accum] accum\accum.seq
[XOSApp] xosapp\xosapp.seq
[CommStats] commstats\commstats.seq
[Event] event\event.seq
[Timeline] timeline\timeline.seq
[Replication] replication\rep.sql
The global server configuration file (.svr) should reside in this directory. All the
sequence files listed in the .svr file will be under this directory.
• GMS\Config\application
• GMS\Config\serversql
The Historical Application Install Tool (Figure 4-1) uses the baseline configuration
directory, which is defined in the XIS_CONFIG registry setting, as the default installation
configuration directory. If your project uses a different directory, add it to the XIS_CONFIG
registry setting so that it appears in the Installation Configuration Directories list.
All errors that are encountered are written to the HistoricalInstall.log under the
directory defined in the XIS_ERRLOG registry. Click the Error tab to review the errors.
Before doing a full Historical Application installation, you must do the following:
NOTE The default directory that is displayed when you click the “...” button is
Telvent\DNA\Historical\config\.
If your project has a different configuration directory, click Set Directories to select the
directory.
WARNING Make sure that the baseline directory is the first element in the Installation
Configuration Directories list. This is because baseline applications should be installed
first, followed by any project-specific applications.
4 Click the Server Cfg tab. The right hand side of the tab should display the contents of
the file entered in the Configuration File: field.
5 Click Install XIS.
6 When the “complete” message box appears, acknowledge it.
7 Select the Error tab to examine any errors that occurred during the installation.
Selecting the Application tab reveals a tree view that shows the applications suites and the
constituent applications. Expanding each leaf of the tree exposes more details down to the
lowest available level of the individual SQL script files.
Beside each node of the tree is a checkbox. Selecting a checkbox instructs the installer to
process the corresponding node and all of its children.
CAUTION If you click the Install All button, everything is processed regardless of the
selections made on the Application tab.
If your project has a different configuration directory, click Set Directories to select
the directory.
NOTE Different systems may have different configuration files. These files usually
reside in the config directory. Ensure that you are selecting the right file for the
installation tool.
The server configuration file is a SQL script file that is written using SQL 92 with either
Transact-SQL (Sybase) or T-SQL (Microsoft) extensions applied.
-- ==================================================================
-- Historical_install.cfg : Configuration file used for Historical
-- installation. It defines baseline required Historical
-- Databases.
-- ==================================================================
-- ==================================================================
-- accum Database definition.
-- ==================================================================
-- Drop existing database
-- ==================================================================
drop database accum
go
-- ==================================================================
-- Create new database
-- ==================================================================
create database accum
on primary
(name = accum_Data,
filename = 'C:\XIS DATA\XISDisk1\MSSQL\accum_Data.mdf',
size = 31MB,
maxsize = 31MB,
filegrowth = 0)
log on
(name = accum_Log,
filename = 'C:\XIS DATA\XISDisk2\MSSQL\accum_Log.ldf',
size = 3MB,
maxsize = 3MB,
filegrowth = 0)
go
exec sp_dboption 'accum', 'autoclose', 'false'
go
exec sp_dboption 'accum', 'select into/bulkcopy', 'false'
go
exec sp_dboption 'accum', 'trunc. log on chkpt.', 'true'
go
-- ===================================================================
-- xosapp Database definition.
-- ===================================================================
-- Drop existing database
-- ===================================================================
drop database xosapp
go
-- ===================================================================
-- Create new database
-- ===================================================================
create database xosapp
on primary
(name = xosapp_Data,
filename = 'C:\XIS DATA\XISDisk1\MSSQL\xosapp_Data.mdf',
size = 7MB,
maxsize = 7MB,
filegrowth = 0)
log on
(name = xosapp_Log,
filename = 'C:\XIS DATA\XISDisk2\MSSQL\xosapp_Log.ldf',
size = 2MB,
maxsize = 2MB,
filegrowth = 0)
go
exec sp_dboption 'xosapp', 'autoclose', 'false'
go
exec sp_dboption 'xosapp', 'select into/bulkcopy', 'false'
go
exec sp_dboption 'xosapp', 'trunc. log on chkpt.', 'true'
go
-- ==================================================================
-- archive Database definition.
-- ==================================================================
-- Drop existing database
-- ==================================================================
drop database archive
go
-- ==================================================================
-- Create new database
-- ==================================================================
create database archive
on primary
(name = archive_Data,
filename = 'C:\XIS DATA\XISDisk1\MSSQL\archive_Data.mdf',
size = 21MB,
maxsize = 21MB,
filegrowth = 0)
log on
(name = archive_Log,
filename = 'C:\XIS DATA\XISDisk2\MSSQL\archive_Log.ldf',
size = 10MB,
maxsize = 10MB,
filegrowth = 0)
go
exec sp_dboption 'archive', 'autoclose', 'false'
go
exec sp_dboption 'archive', 'select into/bulkcopy', 'false'
go
exec sp_dboption 'archive', 'trunc. log on chkpt.', 'true'
go
-- ==================================================================
-- CommStats Database definition.
-- ==================================================================
-- Drop existing database
-- ==================================================================
drop database CommStats
go
-- ==================================================================
-- Create new database
-- ==================================================================
create database CommStats
on primary
(name = CommStats_Data,
filename = 'C:\XIS DATA\XISDisk1\MSSQL\CommStats_Data.mdf',
size = 512MB,
maxsize = 512MB,
filegrowth = 0)
log on
(name = CommStats_Log,
filename = 'C:\XIS DATA\XISDisk2\MSSQL\CommStats_Log.ldf',
size = 100MB,
maxsize = 100MB,
filegrowth = 0)
go
exec sp_dboption 'CommStats', 'autoclose', 'false'
go
exec sp_dboption 'CommStats', 'select into/bulkcopy', 'false'
go
exec sp_dboption 'CommStats', 'trunc. log on chkpt.', 'true'
go
-- ==================================================================
-- timeline Database definition.
-- ==================================================================
(name = timeline_Log,
filename = 'C:\XIS DATA\XISDisk2\MSSQL\timeline_Log.ldf',
size = 2GB,
maxsize = 2GB,
filegrowth = 0)
go
exec sp_dboption 'timeline', 'autoclose', 'false'
go
exec sp_dboption 'timeline', 'select into/bulkcopy', 'false'
go
exec sp_dboption 'timeline', 'trunc. log on chkpt.', 'true'
go
-- ==================================================================
Historical Replication
This module describes the implementation of Historical Replication for OASyS DNA based
on Microsoft SQL Server 2000. The replication architecture that is described here has been
implemented and verified with respect to baseline requirements. Where project-specific
customizations are at issue, and for full details of specific issues, consult the SQL Server
Books Online supplied and installed with the MS SQL Server product.
While the MS SQL Server allows both data and database objects to be replicated, OASyS
DNA uses MS Replication for data only. Baseline replication is also configured to run
continuously, as opposed to being intermittently synchronized at specific times/intervals.
This is based on the assumption that source and destination will generally be in constant
contact, and that potential outages of either partner are unlikely to exceed the inherent
buffering capabilities of the MS SQL Server.
update lag times. It also provides adequate conflict resolution of incompatible concurrent
updates.
With the exception of the archive database, which is considered highly specific to each
machine, a publication is defined for each HistoricalDB database, and all dependent tables
are defined as articles. HistoricalDB tables are configured to be replicated in their entirety.
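As an illustration of this mapping, the following T-SQL sketch shows how a merge publication and a push subscription might be defined for the accum database using the standard SQL Server 2000 replication stored procedures. It is not the OASyS DNA ReplicationSetup.sql script: the publication name (PUBaccum) and subscriber name (BCKSVR) are taken from examples elsewhere in this module, the article shown is the hour table of the accum database, and the sketch assumes the instance has already been configured as its own Distributor.
-- Illustrative sketch only; the supplied ReplicationSetup.sql is the authoritative script.
-- Enable the accum database for merge publication.
exec sp_replicationdboption @dbname = N'accum', @optname = N'merge publish', @value = N'true'
go
-- Define the publication and one article (the hour table).
exec sp_addmergepublication @publication = N'PUBaccum',
     @description = N'Merge publication of the accum database'
exec sp_addmergearticle @publication = N'PUBaccum', @article = N'hour',
     @source_object = N'hour'
go
-- Push the publication to the backup Historical server.
exec sp_addmergesubscription @publication = N'PUBaccum', @subscriber = N'BCKSVR',
     @subscriber_db = N'accum', @subscription_type = N'push'
go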
5.1.4 Publishers
A Publisher is an MS SQL Server instance that has been designated to serve as the source of
replicated data. Publications and their associated articles are defined in the context of a
publishing server. In fact, the MS SQL Server instance must be formally configured as a
publisher before publications may be defined.
OASyS DNA configures the primary or main Historical server instance as publisher. While
any or all servers may be simultaneously designated as publishers, individual publications
replicated between two specific servers may only be associated with a single publisher.
5.1.5 Subscriptions
The association of a publication (representing the replication source) with a destination
(server) is represented by a Subscription. The subscription also designates the method or
type of replication, and frequency/schedule of synchronization.
NOTE Push and pull subscriptions share the requirement that the publisher be aware
of all defined subscribers; collectively, the subscriptions are known as Named Subscrip-
tions.
5.1.6 Distributor
An MS SQL Server instance designated as the Distributor oversees replication.
The distributor maintains the replication Distribution database that is appropriate to the
publisher it represents. This distribution database contains the metadata descriptions of all
publications and subscriptions. The distributor also maintains the synchronization history of
its publishers’ publications. In merge replication scenarios, the distributor also serves as the
host for 1) the agent processes that are responsible for ongoing merge activity and for 2)
initial snapshot creation and transfer to designated subscribers. The distributor assumes
more responsibility in the case of transactional replication.
OASyS DNA has configured the main Historical server (an MS SQL Server instance) to serve
both as publisher and distributor. The term “local distributor” is applied to such a model.
MS SQL Server also allows for a publisher that is independent of the distributor (“remote
distributor”), which can be employed in the event that load balancing issues arise.
The agents are assigned attributes associated with the replication component to which they
are bound. These complex objects basically represent a batch of job steps that defines their
dedicated purpose. Once defined, replication management generally involves dealing with
(i.e. starting, stopping, or tweaking) these agent “jobs.”
Given that OASyS DNA employs Continuous Merge Replication using Push Subscriptions,
Snapshot Agents (Section 5.1.7.1) and Merge Agents (Section 5.1.7.2) will likely be the only
agents of concern. SQL Server Books Online should be consulted for descriptions of the agents
involved in Transactional Replication (Log Reader, Queue Reader, and Distribution Agents),
and for information regarding the miscellaneous database cleanup agents provided by
default.
The remaining SQL Server Agent, which hosts and schedules the replication agents, and
controls and monitors operations outside the replication domain, is generally transparent
to normal replication operations. This agent is similar to a Job Server, and is not to be
confused with the SQL Server Agent (NT) Service, whose duty it is to transport replication
and other data between MS SQL Servers.
Although merge agents run at the distributor/publisher when push subscriptions are
employed, they nonetheless connect to both servers to implement their bi-directional
responsibilities.
1 Start the Historical services on both MAINHS1 and BCKSVR. Refer to Starting, Stopping,
or Failing Over Historical Service (Method 1) in Historical Replication (Module 5).
2 Register both MAINHS1 and BCKSVR using the Register SQL Server Wizard.
a On MAINHS1, open the SQL Server Enterprise Manager (Figure 5-1). Refer to Open-
ing the SQL Server Enterprise Manager.
b Right click SQL Server Group, then select New SQL Server Registration. This opens
the Register SQL Server Wizard (Figure 5-2).
c Click Next.
d Select a SQL server from the next dialog box that appears. Choose MAINHS1 and
BCKSVR.
e Click Add, then click Next.
f On the Select an Authentication Mode dialog box that appears, select The Windows
account information I use to log on to my computer [Windows Authentication].
g Click Next.
h On the Select SQL Server Group dialog box that appears, select Add the SQL
Server(s) to an existing SQL Server group.
d On the SQL Query Analyzer (Figure 5-3), click File > Open.
e On the Open Query File dialog box that appears, browse or navigate to Local
Disk\Program Files\Telvent\DNA\Historical\config\applicationsql\replication, then
click ReplicationSetup.sql. This loads the replication setup SQL script into the SQL
Query Analyzer.
f Click the green arrow button on the toolbar of the SQL Query Analyzer
(Figure 5-3) to execute ReplicationSetup.sql. This finishes the replication setup.
4 Start the snapshot agents.
a Navigate to Microsoft SQL Servers > SQL Server Group > MainHS1 > Replication
Monitor > Agents > Snapshot Agents. Open the Snapshot Agents folder.
b Right click PUBaccum then select Agent Properties.... This opens the Agent
Properties dialog box (Figure 5-4).
3 Expand the database of interest. You should see the publications that are involved.
4 Select the publication.
5 Click Properties and Subscriptions. This opens the Publication Properties dialog box
(Figure 5-6).
1 On the SQL Server Enterprise Manager (Figure 5-1), navigate to Microsoft SQL Servers
> SQL Server Group > MainHS1 > Replication Monitor > Agents > Snapshot Agents.
2 Open the Snapshot Agents folder.
3 Right click PUBaccum, then select Agent Properties.... This opens the Agent Properties
dialog box (Figure 5-4).
1 On the SQL Server Enterprise Manager (Figure 5-1), navigate to Microsoft SQL Servers
> SQL Server Group > MainHS1 > Replication Monitor > Agents > Merge Agents.
2 Open the Merge Agents folder.
3 Right click PUBaccum, then select Agent Properties.... This opens the Agent Properties
dialog box (Figure 5-4).
Archiving
An archive device can be installed on the archive host. The historical databases can then be
configured to be archived on the archive device. For the archiving to function properly,
define and set up the designated archive device entries, and then customize the archive
schedule.
system_service_database_table_date.archive_version
where:
service is the name of the service that contains the archived data; the default value
is Historical.
database is the name of the database that contains the archived data.
table is the name of the table that contains the archived data.
version is the version number of the archive, starting with 0. The version number is
incremented if data is re-archived.
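For example (the host name and date format shown are hypothetical), an archive of the hour table of the timeline database might be cataloged as:
MAINHS1_Historical_timeline_hour_20040206.archive_0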
NOTE OASyS DNA services with SQL Servers may be scheduled to archive data.
A log of the archive is written into the catalog table and the date of the archive is recorded
in the schedule table.
Data is automatically archived when it becomes older than the date specified by the
archive cutoff set in the archive schedule table. The Schedule Configuration dialog
box (Figure 6-3) is used to set this value. The Archive Cutoff: field on this dialog box uses
the format YY:MM:DD, where YY, MM, and DD specify the number of years, months,
and days, respectively, that must elapse before the specified data can be archived.
6.1.1 xis_archive
Archiving can be done manually by executing the xis_archive command on the Historical
machine. The archive process first determines which machine is the Historical archive host
(the default is the Historical service on site), and then runs the xis_archive.pl script on that
machine. The Job Scheduler (JSH) is normally configured to execute xis_archive once per day,
usually at a time when the system is not busy. The syntax for xis_archive is:
where:
-A archive_host specifies the host where the archive device is located.
If a host is not specified, the primary Historical Service host for the local system is used.
3 Press ENTER.
All other options are passed directly to the archive executable. The options can be
configured within the xis_archive.pl script.
Table 6-1 Archive options
Archive Option Description
NOTE Running archive -h from a command prompt displays the options described in this table.
c This catalogs any archives that are sent to the null device (NULL). If this
option is not set, then archives to the null device are not logged in the
catalog. For more information, refer to the description of the device entry
for the archive schedule table.
d level This sets the level of debug message detail between 0 and 6; 0 gives no
message and 6 gives maximum detail.
i interval This speeds up the archiving of the timeline..collect and the event..summary
tables. This is accomplished by doing a bulk copy (bcp) out of small
chunks of data at a time (less than one day); these chunks of data are
later combined into one large bcp file (a sketch of this chunked export
follows this table).
The interval, which defaults to 0 and is specified in minutes, indicates
the time between archiving batches of collect data. For example:
archive -i 60
The command in the example creates a temporary table with one hour's
worth of data. The chunks of data that were bulk copied out hourly are
then combined into one large bcp file.
h This displays the archive options.
s service This is the service that hosts the archive database. Default value for
service is Historical.
t directory This sets the name of the directory for temporary archive files. Default
directory is C:\DOCUMEN~1\dnaAdmin\LOCAL~1\Temp.
m size This sets the maximum amount of disk device space (in megabytes) to be
reserved for operating system administration (for example, for relocating
bad blocks). The default is 10% of the disk, up to a maximum of 25
Megabytes.
It is important that the directory for temporary archive files has sufficient
disk space (10 to 20 Megabytes is recommended). However, this amount of
disk space may not always be available. If sufficient space is unavailable for
the temporary file, the data is not archived, even though space may be
available on the final destination device. Because of this, it is highly
recommended that an alternative directory be provided when running the
archive program.
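For illustration, the chunked export performed by the -i interval option can be pictured with the following Python sketch. The export_chunk function and file layout are hypothetical stand-ins for the bulk copy (bcp) work that xis_archive.pl actually performs.

# Sketch only: export collect data in small time chunks, then combine the
# chunks into one file, as the -i interval option does with bcp.
from datetime import datetime, timedelta

def export_chunk(start, end):
    """Hypothetical export: return the rows collected in [start, end) as text."""
    return f"rows from {start} to {end}\n"

def archive_in_chunks(day_start, interval_minutes, out_path):
    """Export one day of collect data in interval-sized chunks into one file."""
    step = timedelta(minutes=interval_minutes)
    with open(out_path, "w") as combined:
        chunk_start = day_start
        while chunk_start < day_start + timedelta(days=1):
            combined.write(export_chunk(chunk_start, chunk_start + step))
            chunk_start += step

archive_in_chunks(datetime(1997, 1, 1), 60, "collect.bcp")  # hourly, as in archive -i 60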
6.1.2 cmx_archive
Archiving can also be done by executing cmx_archive on the RealTime machine. Calling
cmx_archive runs the cmx_archive.pl script, which locates the archive host then executes
xis_archive. The syntax for cmx_archive is:
where:
-A archive_host specifies the host where the archive device is located.
If a host is not specified, the primary Historical Service host for the local system is
used.
Replace archive_host with the host where the archive device is located. If no archive
host is specified, the default (the local Historical Service) is used. For a list of options,
refer to Archive options (Table 6-1).
3 Press ENTER.
The following table lists the buttons on the Archive/Dearchive Edit dialog box (Figure 6-1)
and the corresponding actions when each is clicked.
Button Action/Reference
Schedules... Opens the Schedule Summary window
(Figure 6-2)
Add Device Opens the Add Device Editor (Figure 6-7)
Dearchive... Opens the Dearchive window (Figure 6-9)
Initialize Media... Opens the Media Initialization dialog box
(Figure 6-8)
Cleanup Initiates data cleanup; refer to Data Cleanup
(Section 6.4)
Dismiss Closes the dialog box
• Click Schedules... on the Archive/Dearchive Edit dialog box (Figure 6-1). For more
information, refer to The Archive/Dearchive Edit Dialog Box (Section 6.2).
The following table provides brief descriptions of the buttons, fields, and column headings
that appear on the Schedule Summary window (Figure 6-2).
Item Description
Column Headings
ID# Identification number of the archive schedule
Service Service containing the data to be archived
Database Database containing the data to be archived
Table Table containing the data to be archived
Device Device where the data is to be archived
Archive Cutoff Indicates how old the data must be before it is archived
Last Date Archived Date the data was last archived. If empty, archive has not been
performed.
Delete Cutoff Indicates how old the data must be before it is deleted
Enable Indicates whether or not archive is enabled.
BCP Format Indicates whether or not data is archived in any special format.
Buttons and Fields
Service Click this to specify the service with which to filter the displayed
information.
Database Click this to specify the database with which to filter the displayed
information.
Table Click this to specify the table with which to filter the displayed
information.
Archive Device Click this to specify the archive device with which to filter the displayed
information.
Enabled Only Click this to specify that only schedules for which archive is
enabled should be displayed.
Search Click this to search using the enabled filters.
Clear Filters Click this to clear all the field filters.
Add Schedule Click this to open the Schedule Configuration dialog box
(Figure 6-3).
1 Click:
• Service to filter using the service name
• Database to filter using the database name
• Table to filter using the table name
• Archive Device to filter using the archive device name
2 Type the appropriate filters in the enabled fields.
3 Click Search.
• Click Add Schedule on the summary window. This opens a blank Schedule Configuration
dialog box (Figure 6-3). Use this to add a new schedule entry.
The following table provides brief descriptions of the buttons, fields, and column headings
that appear on the Schedule Configuration dialog box (Figure 6-3).
Table 6-4 Fields and Buttons on the Schedule Configuration dialog box
Field/Button Description
Service: The service that contains the data to be archived.
Database: The database that contains the data to be archived.
Table: The table that contains the data to be archived. The table must have a
column called time (of ultimate data type int), which contains the age
of the associated data as the number of seconds since Jan. 1, 1970
GMT (a conversion sketch follows this table).
Archive Device: The device where data is to be archived. An archive device can only be
modified if it was set initially as the null device, which is the default
setting.
Archive Cutoff: This indicates how old the data must be before it is archived. The effect
of this field on the archive process depends on whether the table being
archived is time-based or non-time-based. For more information, refer
to Archive Cutoff (Section 6.2.1.2).
Delete Cutoff: This indicates how old the data must be before it is deleted. Refer to
Archive Intervals (Section 6.2.1.3)
BCP: This indicates whether the data is to be archived in a special format.
The options provided are custom and default. When changed to
custom, the name of the bcp format file must be provided.
Enable If this check box is not selected, the schedule entry appears in the
Schedule Summary window (Figure 6-2), but archive is not performed.
Modify This button is available only if modifying an existing schedule entry.
Click this to save the changes made.
Abandon This button is available only if modifying an existing schedule entry.
Click this to discard any changes made.
Delete This is available only if modifying an existing schedule entry. Click this
to delete the schedule entry.
Add This is available only if adding a new schedule entry. Click this to add
the new schedule entry.
Dismiss Click this to close the dialog box.
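The Table: field above notes that the time column holds the age of the data as the number of seconds since Jan. 1, 1970 GMT, that is, a Unix-style epoch value. For illustration only (this is not product code), the conversion can be written as follows.

# Sketch only: convert between a GMT timestamp and the seconds-since-1970
# value stored in the time column.
import calendar
from datetime import datetime, timezone

def to_time_column(dt_utc):
    """Seconds since Jan. 1, 1970 GMT for a timezone-naive UTC datetime."""
    return calendar.timegm(dt_utc.timetuple())

def from_time_column(seconds):
    """UTC datetime corresponding to a time column value."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(to_time_column(datetime(1997, 1, 1)))   # 852076800
print(from_time_column(852076800))            # 1997-01-01 00:00:00+00:00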
For non-time-based tables, the cutoff indicates the interval at which the entire table’s data
must be archived. For example, consider the cutoff specifiers and possible archiving
frequencies listed in Archive Cutoff sample (non time-based) (Table 6-6).
Note that the archive end time is not a guarantee that data is archived up to the end time.
It guarantees only that data younger than the end time is not archived. Refer to Archive
Intervals (Section 6.2.1.3) for more information.
A daily archive is performed on data that requests archiving at a cutoff interval of less
than one month. A monthly archive is performed on data that requests archiving at a
cutoff interval of greater than or equal to one month but less than one year. A yearly
archive is performed on data that requests archiving at a cutoff interval of greater than
or equal to one year.
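For illustration, the following Python sketch maps an Archive Cutoff: specifier (YY:MM:DD) to the archive frequency described above; the function name is an assumption, not part of the product.

# Sketch only: daily if the cutoff is less than one month, monthly if at least
# one month but less than one year, yearly if one year or more.
def archive_frequency(specifier):
    years, months, days = (int(part) for part in specifier.split(":"))
    if years >= 1:
        return "yearly"
    if months >= 1:
        return "monthly"
    return "daily"

print(archive_frequency("00:00:03"))  # daily
print(archive_frequency("00:02:00"))  # monthly
print(archive_frequency("01:00:00"))  # yearly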
Daily Archive
Daily archives are performed for archive schedules with an Archive Cutoff: field value
ranging from 00:00:00 to 00:00:31.
Daily Archive example (Figure 6-4) shows a schedule created on January 1, 1997 with an
archive cutoff value of three days and a delete cutoff value of four days. Archiving does not
take place until the start of the day on January 5th since, until then, none of the collected
data is more than three days old. Data that has been archived once is not archived a
second time, even if it has not been removed by the cleanup process. Data cleanup
does not take place until the start of the day on January 6th since, until then, none of the
collected data is more than four days old.
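The dates in this example follow from the rule that a run at the start of a day acts only on data older than the cutoff. The sketch below reproduces them; it is an illustration of that rule, not the archive program itself.

# Sketch only: collection starts Jan 1, 1997. The first start-of-day run that
# finds data more than three days old is Jan 5 (archive); with a four-day
# delete cutoff, the first cleanup is Jan 6.
from datetime import datetime, timedelta

def first_run_day(collection_start, cutoff_days):
    """Start of the first day on which the oldest data is older than the cutoff."""
    day = collection_start
    while day - collection_start <= timedelta(days=cutoff_days):
        day += timedelta(days=1)
    return day

start = datetime(1997, 1, 1)
print(first_run_day(start, 3))  # 1997-01-05 00:00:00 (first archive)
print(first_run_day(start, 4))  # 1997-01-06 00:00:00 (first cleanup)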
Monthly Archive
Monthly archives are performed for archive schedules with an Archive Cutoff: field value
ranging from 00:01:00 to 00:11:00.
Monthly Archive example (Figure 6-5) shows a schedule created on January 1, 1997 with
an archive cutoff value of two months and a delete cutoff value of two months. Archiving
does not take place until the start of the day, April 1, since until then, none of the
collected data is older than two months. Intervals between two and three months are
considered to be two months for purposes of monthly archiving. Data cleanup does not
take place until the start of the day, April 1, since until then, none of the collected data is
older than two months. The data must be archived before the cleanup process because
unarchived data cannot be deleted. If the execution times for these two scripts were set up
in the wrong order, the first data cleanup would not take place until May 1.
Yearly Archive
Yearly archives are performed for archive schedules with an Archive Cutoff: field value of
01:00:00 or greater.
Yearly Archive example (Figure 6-6) shows a schedule created on January 1, 1993 with an
archive cutoff value of one year and a delete cutoff value of one year. Archiving does not
take place until the start of the day, January 1, 1995 since, until then, none of the collected
data is older than one year. Intervals between one and two years are considered to be one
year for purposes of yearly archiving. Data cleanup does not take place until the start of the
day, January 1, 1995 since, until then, none of the collected data is older than one year.
Unarchived data cannot be deleted; hence, the data must first be archived before the
cleanup process. If the execution time for these two scripts were set up in the wrong order,
the first data cleanup would not take place until January 1, 1996.
• Click Add Device on the Archive/Dearchive Edit dialog box (Figure 6-1).
The following table lists the items that appear on the Add Device Editor (Figure 6-7).
Field/Button Description
Arch Device: This is the name of the device where the data is
to be archived.
Type: There are two options available. Choose
Removable Disk if the archive device is an
optical disk. Choose Fixed Disk if using a
mapped drive.
Physical Device: This specifies the absolute device (for example,
/dev/rsdxxx).
Create Click this button to add the device.
Dismiss Click this button to discard changes.
• Click Initialize Media... on the Archive/Dearchive Edit dialog box (Figure 6-1).
The following table provides a list of the fields, buttons, and other items that appear on
the Dearchive window (Figure 6-9).
Item Description
Column Headings
Service This is the service that contains the archived data.
Database This is the database that contains the archived data.
Table This is the table that contains the archived data.
Start Time, End Time Dearchiving is restricted to data between the Start Time and End Time.
Label This is the label of the device.
Device This is the device where the data is archived.
Buttons and Fields
Service: Click this to specify the service with which to filter the displayed information.
Database: Click this to specify the database with which to filter the displayed information.
Table: Click this to specify the table with which to filter the displayed information.
Start Time: Click this to specify the start time with which to filter the displayed information.
End Time: Click this to specify the end time with which to filter the displayed information.
Search Click this to search using the enabled filters.
Clear Filters Click this to clear the filters.
Dearchive Data Click this to open the Dearchive Configuration dialog box (Figure 6-10).
The following table lists the fields, buttons, and other items that appear on the Dearchive
Configuration dialog box (Figure 6-10).
Field/Button/Check box Description
Specify alternate archive host Click this to specify an alternate archive host.
Archive Host: This is the name of the alternate archive host.
Service: This is the service that contains the data to be dearchived.
Database: This is the database that contains the data to be dearchived.
Table: This is the table that contains the data to be dearchived.
Start Date:, End Date: Dearchiving is restricted to data that lies between the specified start date and end date.
Dearchiving data
The most recent version of the archive file is the file that is dearchived. Do not initiate the dearchive
process close to the time when the daily archive occurs; this may interfere with the archive process.
NOTE You may use the filters to find the archived file to dearchive.
3 (Optional) Click the row header that corresponds to the file to dearchive.
4 Click Dearchive Data. This opens the Dearchive Configuration dialog box
(Figure 6-10).
5 (Optional) If specifying an alternate archive host:
a Select the Specify alternate archive host check box.
b Type or select the name of the alternate archive host.
6 If you skipped step 3, do the following:
a Type or select the service, database, and table.
b Type the start date and end date.
7 Click Dearchive.
In order to delete data, it must be archived. Therefore, when using JSH to schedule
archiving and cleanup, the former must be performed first. If you do not wish to save the
data to an archive device, then archive to the null device (i.e. NUL).
The Delete Cutoff: field entry in the Schedule Configuration dialog box (Figure 6-3)
specifies how old data must be before it is deleted. Each entry follows the YY:MM:DD
format, where YY specifies how many years must elapse, MM specifies how many months
must elapse, and DD specifies how many days must elapse before deletion. The deletion
time is calculated with respect to the start of the current day.
To illustrate, if the current time is 05-Feb-96 14:35 when the cleanup process runs, the cutoff
specifier causes the archived data to be deleted as shown in Delete Cutoff sample
(Table 6-10).
Table 6-10 Delete Cutoff sample
Cutoff Specifier (Delete Cutoff: field on the Schedule Configuration dialog box (Figure 6-3)) Delete archived data up to
00:00:00 05-Feb-96 00:00
00:00:01 04-Feb-96 00:00
00:01:00 05-Jan-96 00:00
00:02:00 05-Dec-95 00:00
01:00:00 05-Feb-95 00:00
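For illustration, the values in Delete Cutoff sample (Table 6-10) can be reproduced with the start-of-day arithmetic described above. The sketch uses the third-party dateutil package for calendar-aware month and year subtraction; it is not the cleanup program's own implementation.

# Sketch only: compute the date up to which archived data is deleted, given a
# Delete Cutoff: specifier (YY:MM:DD) and the time the cleanup process runs.
from datetime import datetime
from dateutil.relativedelta import relativedelta

def delete_up_to(specifier, run_time):
    years, months, days = (int(part) for part in specifier.split(":"))
    start_of_day = run_time.replace(hour=0, minute=0, second=0, microsecond=0)
    return start_of_day - relativedelta(years=years, months=months, days=days)

run_time = datetime(1996, 2, 5, 14, 35)  # 05-Feb-96 14:35
for spec in ("00:00:00", "00:00:01", "00:01:00", "00:02:00", "01:00:00"):
    print(spec, delete_up_to(spec, run_time))
# Matches Table 6-10: 05-Feb-96, 04-Feb-96, 05-Jan-96, 05-Dec-95, 05-Feb-95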
NOTE Delete dearchived data as soon as it is no longer needed; otherwise, it fills up the
database and leaves no room for the data that is currently being collected.
3 Click Yes.
6.4.1 xis_cleanup
A manual data cleanup can also be done using xis_cleanup. The process is normally
started by executing xis_cleanup on the Historical machine. The syntax is:
Table 6-11 Cleanup options
Option Description
c This option checks and reports if there is any dearchived data that requires
deletion. To delete dearchived data, the -f flag must be specified.
f This forces a full deletion, including manually dearchived data.
v This is used to display informational messages while executing.
6.4.2 cmx_cleanup
Manual data cleanup can also be done using cmx_cleanup. The process is started by
executing cmx_cleanup on the RealTime machine. This locates the Historical service host
and launches the xis_cleanup.pl script on that host. The syntax is:
Refer to Cleanup options (Table 6-11) for the options provided for cmx_cleanup.
The Job Scheduler (JSH) is normally configured to run cmx_cleanup once per day after
cmx_archive has already run.
The archive program checks the rearchive table. If there are any entries in the table, the
archive program collects information from the schedule and catalog tables as required for
rearchiving (for example, rearchive format and device). After the information is collected,
the rearchive is performed.
The archive file name for rearchived data is identical to the original file name, except that
the version number suffix is incremented each time the data is rearchived. Therefore, the
file name with the highest version number reflects the most current state of the data.
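As a small illustration of the version-suffix rule (not product code), the sketch below picks the most current archive file from a set of catalogued names and derives the version a rearchive would write next. The file names shown are invented examples following the naming convention described in Archiving, earlier in this module.

# Sketch only: the file with the highest .archive_N suffix is the most current,
# and a rearchive writes the next version number.
def version_of(name):
    return int(name.rsplit(".archive_", 1)[1])

catalog = [
    "oasys_Historical_timeline_collect_19970105.archive_0",
    "oasys_Historical_timeline_collect_19970105.archive_1",
]
latest = max(catalog, key=version_of)
print(latest)                  # ...archive_1 (most current state of the data)
print(version_of(latest) + 1)  # 2 (version written by the next rearchive)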
An entry is not added to the rearchive table if any of the following conditions is true:
• The data being modified or inserted has not been previously archived.
• The database and table in which the data was modified or inserted are not scheduled
for archive in the schedule table.
• The archive function is disabled in the RealTimeDB’s JSH table.
• The data being modified or inserted corresponds to the same period defined by the
start and end times of an entry already in the rearchive table, which was created
previously due to modification or insertion of data in the same time interval in the
same database and table.
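The conditions above can be read as a single check: a modification or insertion adds an entry to the rearchive table only if none of them applies. The sketch below expresses that check for illustration; every helper it takes is a hypothetical placeholder, not a real product function.

# Sketch only: an entry is added to the rearchive table only when none of the
# conditions listed above holds. All helpers are hypothetical placeholders.
def should_add_rearchive_entry(change, was_previously_archived,
                               is_scheduled_for_archive, archive_enabled_in_jsh,
                               overlapping_entry_exists):
    if not was_previously_archived(change):
        return False  # the data was never archived
    if not is_scheduled_for_archive(change):
        return False  # no entry in the schedule table for this database/table
    if not archive_enabled_in_jsh():
        return False  # archive function disabled in the RealTimeDB JSH table
    if overlapping_entry_exists(change):
        return False  # same interval already queued in the rearchive table
    return True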
HistoricalDB Databases
The following table provides information on Telvent-defined data-types that are used in
HistoricalDB tables.
Data is summarized according to the procedures described in Data Summary (Section 3.3).
This data is used when plotting historical data. For details on plotting or trending historical
data, refer to the Operation and Control Reference.
All of the timeline tables carry a tag identifier number (in the tagId field), which is used to
obtain access to a specific record within a table. This is similar to a RealTime record
number.
Data is summarized according to the procedures described in Data Summary (Section 3.3).
Data in this table can be viewed through the Event Summary and Alarm Summary windows
in XOS (refer to the Operation and Control Reference).
Data in this table can be viewed through the Connection Summary window in XOS. For
more information, refer to the Operation and Control Reference.
Connection Statistics Editor  2-15
  column headings  2-16
  fields/buttons  2-16
  filtering display list  2-15
  opening the  2-15
ConnPeriodStats table  1-2

D
Data Cleanup  6-17
Data Collection by Exception  3-3
Data Dearchiving  6-14
Data Rearchiving  6-19
Data summary  3-4
data summary
  enabling  3-5
  rate collect point  3-5
data types  7-1
databases
  accum  1-2
  archive  1-2
  CommStats  1-2, 2-2
  event  1-2
  timeline  1-3
  xosapp  1-3
day table  1-2, 1-3
daylight savings time  2-2
Deadband  3-2
Dearchive Configuration dialog box  6-16
  fields/buttons/checkboxes  6-17
  opening the  6-16
Dearchive window  6-15
  items on the  6-15
  opening the  6-15
dearchiving  6-14
Dearchiving data  6-17
Defining an archive device  6-13
Delete Cutoff  6-18
Deleting a collection entry  3-4
Deleting a Schedule Summary window entry  6-9
device table  1-2
Directory Structure  4-5
Distributor  5-4
dumpSchedule table  1-2

E
Enable Filters  2-5
Enable Summary  3-2
Enabling data summary for a collect point  3-5
Enabling data summary for a rate collect point  3-5
End Time  2-4, 2-9
Error  4-2
event
  summary table  7-5
event Database  7-4
event database  1-2

F
Fast Trend only  3-2
Filter  2-4, 2-9
Filtering the data displayed on the Connection Statistics Editor  2-15
Filtering the information displayed on the Schedule Summary window
  Schedule Summary window, filtering display information  6-6
Filtering the list displayed on the Accum Hour Editor  2-10
Filtering the list displayed on the Remote Communication Statistics Editor  2-14
Filtering the list displayed on the Timeline Collect Editor  2-7

G
Globally Unique Identifiers  5-2
GMTtime  2-2
Greenwich Mean Time (GMT)
  See GMTtime
Group  2-5
GUID  5-2

H
Historical Application Definition  4-3
Historical Application Install Tool  4-1
  Application  4-2
  Configuration File  4-2
  Error  4-2
  full Historical installation  4-5
  Install All  4-2
  Install Selected  4-2
  Installation Configuration Directories  4-2
  items on the  4-2
  opening the  4-2
  Server Cfg  4-2
  Service  4-2
  Set Directories  4-2
Historical Install Tool
  partial/selective application installation  4-6
HistoricalDB  1-1
HistoricalDB Edit Menu  2-1
  opening the  2-1
HistoricalDB in relation to other services  1-1
HistoricalDB Structure  1-2
hour table  1-2, 1-3

I
Illegal Message  2-12
Initializing an archive device  6-14
Install All  4-2, 4-6
Install Selected  4-2
Installation Configuration Directories  4-2

L
Line Fail  2-12
Long Message  2-12

M
Media Initialization dialog box  6-14
  opening the  6-14
Merge Agent Properties
  viewing  5-10

U
units
  raw  3-2

V
validDeviceTypes table  1-2
Value  2-3, 2-6
Viewing Baseline Publication Properties  5-8
Viewing Merge Agent Properties  5-10
Viewing Publisher and Distributor Properties  5-10
Viewing Snapshot Agent Properties  5-10