Network Communication
For embedded CompactRIO applications, communication with a remote client is often a critical part of the project.
Embedded applications typically function as “data servers” because their primary role is to report information (status,
acquired data, analyzed data, and so on) to the client. They are also usually capable of responding to commands from
the client to perform application-specific activities.
Ethernet
Communication Model
The communication models discussed in chapters 1 and 3 pertain to both interprocess communication and network
communication. These models are current value data (tags), updates, streaming, and command-/message-based
communication. Understanding the type of data transfer you need is the most important consideration when choosing
a networking mechanism. Table 4.1 provides examples of when you might use current value data (tags), updates,
streaming, or command-based data communication for networked applications.
| Communication Model | Description | Example |
| --- | --- | --- |
| Updates | The transfer of the latest value only when it differs from the previous value. Typically distributed to multiple processes. | A host PC is monitoring the heartbeat of multiple deployed CompactRIO systems and needs to be notified if one of the systems shuts down. |
| Message/Command | The low-latency transfer of data from one process that triggers a specific event on another process. Command-based communication is typically infrequent and requires that you not miss any data points. | A user clicks the stop button in an HMI application, and the application shuts down a conveyor belt by sending the stop command to the CompactRIO target controlling the conveyor belt. |
Network Configuration
The network configuration is another important consideration. In multiclient applications, clients may connect and
disconnect at random times. The TCP/IP protocol requires you to implement a connection manager to dynamically
accept and service any number of incoming connections in addition to tracking client requirements and servicing
each client individually. While this is possible, it requires a significant amount of development work. On the other hand, network-published shared variables feature the Shared Variable Engine with a built-in connection manager, which handles any incoming client connections for you. Since Simple TCP/IP Messaging (STM) and CVT Client Communication (CCC) are both based on TCP, they also require you to develop a connection manager.
Network Streams also require some additional development when using them within a 1:N or N:1 configuration
because they are implemented as a point-to-point communication model. This means that an application based on
Network Streams that connects to 10 clients requires at least 10 streams. However, since Network Streams are the
most reliable and easy-to-use mechanism for streaming data, they are still a good choice for streaming. For
messages and commands, consider using shared variables over Network Streams.
Figure 4.2. Consider your system configuration when choosing a networking protocol.
If you have selected one of these three mechanisms but are not using a compatible version of LabVIEW, use the
mechanism recommended for the same communication model and network configuration that can communicate
with a third-party UI.
Security
If you require security features such as built-in encryption and authentication, use HTTPS-based web services to
implement communication. Web services support data communication to LabVIEW UIs, third-party UIs, and web
browsers. If you are using the web server on NI’s real-time targets to implement HTTPS communication, consider
enabling the web services API keys for added security. To learn more about security with LabVIEW web services,
refer to the Configuring Web Services Security LabVIEW Help document.
Table 4.3 lists general recommendations for choosing a data transfer mechanism based on the three factors discussed
previously: communication model, network configuration, and the type of application to which you are sending data.
Other factors discussed in this section should be taken into account, but this table provides a starting point.
| Network Configuration | Update | Message or Command | Stream | Current Value (Tag) |
| --- | --- | --- | --- | --- |
| N:1 or 1:N (CompactRIO to LabVIEW UI) | Network Streams (flushed) | Shared Variable (blocking) | Network Streams | Network-Published Shared Variable or CCC |
| CompactRIO to Web Client | Web Services | Web Services | Web Services | Web Services |
The next section explains how to implement each of the networking protocols described in Table 4.3 and features
instructions for downloading and installing the protocols that are not included in LabVIEW.
Network-Published Shared Variables
Figure 4.3. Network shared variables are ideal for tag communication in a 1:N or N:1 configuration.
Three important pieces make network-published shared variables work in LabVIEW: network variable nodes, the Shared Variable Engine, and the NI Publish-Subscribe Protocol.
Network Variable Nodes
You can use variable nodes to perform variable reads and writes on the block diagram. Each variable node is
considered a reference to a software item on the network (the actual network variable) hosted by the Shared Variable
Engine. Figure 4.4 shows a given network variable, its network path, and its respective item in the project tree.
To use network variables, the Shared Variable Engine must be running on at least one of the systems on the
network. Any LabVIEW device on the network can read or write to network variables that the Shared Variable Engine
publishes. Figure 4.5 shows an example of a distributed system where the Shared Variable Engine runs on a desktop
machine and where multiple real-time controllers exchange data through a network variable.
Publish-Subscribe Protocol (PSP)
The Shared Variable Engine uses the NI Publish-Subscribe Protocol (NI-PSP) to communicate data. The NI-PSP is a
networking protocol built using TCP that is optimized for reliable communication of many data elements across an
Ethernet network. To minimize Ethernet bandwidth usage, each client subscribes to individual data elements. The
protocol then implements an event-driven communication mechanism that transmits data to subscribed clients only
on data change. The protocol also combines multiple messages into one packet to minimize Ethernet overhead. In
addition, it provides a heartbeat to detect lost connections and features automatic reconnection if devices are added
to the network. The NI-PSP networking protocol uses psp URLs to transmit data across the network.
Buffering
Enabling the buffering option makes programming with shared variables significantly more complex, so disable this
option for the majority of your applications. If you are interested in turning on shared variable buffering, first review
the NI Developer Zone document Buffered Network-Published Shared Variables: Components and Architecture.
You can verify that buffering is disabled by right-clicking your shared variable node and launching the Shared Variable
Properties dialog shown in Figure 4.6. By default, Use Buffering is turned off.
Figure 4.6. Ensure that buffering is disabled when using shared variables for tag
communication.
Determinism
Network-published shared variables are extremely flexible and configurable. You can create a variable that features a
real-time FIFO to include a network communication task within a time-critical loop. When you do this, LabVIEW
automatically runs a background loop to copy the network data into a real-time FIFO, as shown in Figure 4.7. Keep in
mind that this prevents jitter from occurring within the time-critical loop while performing network communication,
but it does not mean that the network communication itself is deterministic.
Figure 4.7. When you enable the real-time FIFO for network-published shared variables, a
hidden background loop runs on the real-time target to copy the network value into a real-time
FIFO.
This feature can simplify your program, but it has some limitations:
■■ Some capabilities of network-published shared variables are not available when you enable the real-time FIFO
■■ Error management is more difficult because network errors are propagated to the individual nodes throughout
the program
■■ Future modification of the program to use different network communications is more difficult
Another option for applications that involve both network communication and a time-critical loop is to use regular network-published shared variables and maintain a separate loop for the network communication tasks. You can communicate between these two loops using the interprocess communication
mechanisms discussed in Chapter 3: Designing a LabVIEW Real-Time Application.
Lifetime
All shared variables are part of a project library. By default, the Shared Variable Engine deploys and publishes that
entire library of shared variables as soon as you run a VI that references any of the contained variables. Stopping a VI
does not remove the variable from the network. Additionally, if you reboot a machine that hosts the shared variable,
the variable is available on the network again as soon as the machine finishes booting. If you need to remove the
shared variable from the network, you must explicitly undeploy the variable or library from the Project Explorer
window or the NI Distributed System Manager.
SCADA Features
The LabVIEW Datalogging and Supervisory Control (DSC) Module provides a suite of additional SCADA functionality
on top of the network-published shared variables including the following:
■■ Scaling
Network-Published Scan Engine I/O Variables and Aliases
By default, I/O variables and I/O aliases are published to the network for remote I/O monitoring using the NI-PSP
protocol. They are published by a normal priority thread associated with the Scan Engine at a rate you specify under
the properties of the controller. You can configure whether I/O variables publish their states by accessing the Shared
Variable Properties dialog.
Published I/O variables are optimized for I/O monitoring. They do not work with all network-published shared variable
features or with all LabVIEW devices. For maximum flexibility when sharing data between LabVIEW applications, you
should use network-published shared variables.
Hosting
To use network-published shared variables, a Shared Variable Engine must be running on at least one of the nodes in
the distributed system. Any node on the network can read or write to shared variables that the Shared Variable
Engine publishes. A node can reference a variable without having the Shared Variable Engine installed; in the case of real-time controllers, only a small installable variable client component is required to reference variables hosted on other systems.
You also might have multiple systems running the Shared Variable Engine simultaneously, allowing applications to
deploy shared variables to different locations as required.
You must consider the following factors when deciding from which computing device(s) to deploy and host network-
published shared variables in a distributed system.
Available resources
Hosting a large number of network variables can take considerable resources away from the CompactRIO system,
so for large distributed applications, NI recommends dedicating a system to running the Shared Variable Engine.
Required features
If the application requires LabVIEW DSC functionality, then those variables must be hosted on a Windows machine
running the Shared Variable Engine.
Reliability
Some of the hosted process variables may be critical to the distributed application, so they benefit from running on a
reliable embedded OS such as LabVIEW Real-Time to increase the overall reliability of the system.
Determinism
If you want to use network variables to send data directly to or from a time-critical loop on a real-time target, you
must have the RT FIFO enabled to ensure deterministic data transfer. You can only host RT FIFO-enabled network
variables on the real-time target.
The path name used to dynamically access network variables is similar to a Windows network share name, such as
//machine/myprocess/item. Below are additional examples of network variable references:
//localhost/my_process/my_variable
//test_machine/my_process/my_folder/my_variable
//192.168.1.100/my_process/my_variable
Monitoring Variables
The NI Distributed System Manager offers a central location for monitoring systems on the network and managing
published data. From the system manager, you can access network-published shared variables and I/O variables
without the LabVIEW development environment.
With the NI Distributed System Manager, you can write to network-published shared variables so you can remotely
tune and adjust process settings without the explicit need for a dedicated HMI. You can also use the NI Distributed
System Manager to monitor and manage controller faults and system resources on real-time targets. From LabVIEW,
select Tools»Distributed System Manager to launch the system manager.
Tip 2: Serialize Shared Variable Execution
Serialize the execution of network shared variable nodes using the error wire to maximize performance. When shared variable nodes execute in parallel, thread swaps may occur and degrade performance; serialized nodes execute faster than parallel ones.
Figure 4.11. Initialize variables to known values and serialize variable execution.
There are also times when you may not want to serialize your variables. When multiple variables are serialized and an
error occurs within the first variable, the variables further down the chain do not execute. If you want to ensure that
every variable is processed, even if an error occurs within one variable, you should avoid serializing them.
Figure 4.12. Use a timeout to prevent reading the same value repeatedly in a loop.
Network Streams
Network Streams are similar to Queue functions in that they are FIFO based, but unlike Queue functions, Network
Streams have network scope. They were designed and optimized for lossless, high-throughput data communication
over Ethernet, and they feature enhanced connection management that automatically restores network connectivity
if a disconnection occurs due to a network outage or other system failure. Streams use a buffered, lossless
communication strategy that ensures data written to the stream is never lost, even in environments that have intermittent
network connectivity.
Because Network Streams were built with throughput characteristics comparable to those of raw TCP, they are ideal
for high-throughput applications for which the programmer wants to avoid TCP complexity. You also can use streams
for lossless, low-throughput communication such as sending and receiving commands.
Network Streams use a one-way, point-to-point buffered communication model to transmit data between applications.
This means that one of the endpoints is the writer of data and the other is the reader. Your application might require
multiple streams to communicate multiple types of data, as shown in Figure 4.13.
Figure 4.13. Network Streams use a one-way, point-to-point buffered communication model.
Figure 4.13 shows a basic Network Streams implementation. One process executes on the real-time target, and the
other process executes on the host computer. You can follow three basic steps to set up a Network Stream: create the writer and reader endpoints (one in each application), read or write data, and destroy the endpoints when communication is complete.
If a disconnection occurs and one endpoint becomes inactive, a reconnection is performed automatically by the
protocol in the background. This protocol retries forever, preserving the lossless nature of the data stream. While the
protocol is trying to reconnect, the active endpoint outputs an error message notifying the user that the endpoints
cannot resynchronize. You can also monitor whether the stream is connected by using the Network Stream Endpoint
Property Node shown in Figure 4.15.
Figure 4.15. Monitor the Network Stream Endpoint connection status with a property node.
Read or Write Data
When reading from or writing to a Network Stream, you can write single elements or multiple elements at a time
depending on which Network Stream function you choose. Data is never overwritten or regenerated, and no partial
data transfers occur. The read/write functions either succeed or time out.
Figure 4.16. Basic Implementation of Network Streams to Stream Data Across a Network
Figure 4.17. Use the Flush Stream.vi to shut down a Network Stream.
Using Network Streams to Send Messages and Commands
By default, Network Streams are designed to maximize throughput, but you can easily configure them to minimize latency for sending commands or messages.
In the next example, a host VI running on a desktop PC displays acquired data while allowing the user to adjust the
frequency of the data as well as the window type for the Power Spectrum fast Fourier transform (FFT) analysis. The
frequency and window commands are sent to the CompactRIO controller and processed in real time.
When using Network Streams to send commands, do the following to ensure low latency:
1. Specify a small buffer size (the example uses a buffer size of 10 elements)
2. Use the Flush Stream VI to flush the buffer immediately after a command is sent
You can implement the commander architecture for events that originate on a UI by using the standard LabVIEW
UI-handling templates. Translate the UI event into the appropriate command by building the command message and
writing it to the Network Stream. Be sure to flush the buffer after the command is sent to ensure low latency.
Figure 4.19. Command Sender Architecture Example
Figure 4.21. Install Network Stream Support on the Real-Time Target
You can find more information on Network Streams in the NI Developer Zone white paper Lossless Communication
with Network Streams: Components, Architecture, and Performance.
TCP and UDP are good options if you need very low-level control of the communications protocol or if you are designing
a custom protocol. They are also recommended for streaming data to third-party applications since Network Streams
support communication only to LabVIEW applications. For sending messages to a third-party application, STM is easier
to use and offers equivalent if not better performance. For sending current values or tags to a third-party application,
CCC, web services, or Modbus, depending on your system configuration, also have easier implementations.
TCP provides point-to-point communications with error handling for guaranteed packet delivery. UDP can broadcast
messages where multiple devices can receive the same information. UDP broadcast messages may be filtered by
network switches and do not provide guaranteed packet delivery.
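As a rough illustration of the broadcast idea (in Python rather than LabVIEW, with a hypothetical port number), a sender can push a single datagram to the subnet broadcast address, and any listener bound to that port may receive it, with no delivery guarantee:

```python
# Sketch (Python) of a UDP broadcast send: one datagram that any listener on
# the subnet's broadcast address and port can receive. The port is hypothetical,
# and delivery is not guaranteed.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"status:running", ("255.255.255.255", 50000))  # hypothetical port
sock.close()
```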
TCP communication follows a client/server scheme where the server listens on a particular port to which a client
opens a connection. Once you establish the connection, you can freely exchange data using basic write and read
functions. With TCP functions in LabVIEW, all data is transmitted as strings, so you must flatten Boolean or numeric data to a string before writing it and unflatten it after reading it. Because messages can vary in length, it is
up to the programmer to determine how much data is contained within a given message and to read the appropriate
number of bytes. Refer to the LabVIEW examples Data Server.vi and Data Client.vi for a basic overview of client/
server communication in LabVIEW.
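The LabVIEW Data Server and Data Client examples are block diagrams, but the underlying length-prefix idea is language independent. The following sketch, written in Python rather than LabVIEW and using a hypothetical address and port, shows one way to frame flattened messages so the reader knows exactly how many bytes belong to each message:

```python
# Minimal sketch (Python, not LabVIEW) of the length-prefix framing idea:
# the sender flattens data to a byte string and prepends its length, so the
# receiver knows exactly how many bytes to read for each message.
import socket
import struct

HOST, PORT = "192.168.1.100", 55555  # hypothetical server address and port

def send_message(sock, payload: bytes) -> None:
    # Prepend a 4-byte big-endian length header, then the flattened data.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, count: int) -> bytes:
    # Read exactly 'count' bytes, looping because TCP is a byte stream.
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("connection closed by peer")
        data += chunk
    return data

def recv_message(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

with socket.create_connection((HOST, PORT)) as client:
    send_message(client, b"START")   # send a flattened command
    reply = recv_message(client)     # read exactly one framed reply
    print(reply)
```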
Simple TCP/IP Messaging (STM)
STM is a networking protocol that NI systems engineers designed based on TCP/IP. It is recommended for sending
commands or messages across the network if you are communicating with a third-party application or require a standard
protocol. It makes data manipulation more manageable through the use of formatted packets, and improves
throughput by minimizing the transmission of repetitive data.
Download: You can download and install the STM library from the NI Developer Zone white paper
LabVIEW Simple Messaging Reference Library (STM). The STM library is under the User Libraries palette.
Metadata
Metadata is implemented as an array of clusters. Each array element contains the data properties necessary to
package and decode one variable value. Even if you have defined only the Name property, you can use a cluster to
customize the STM by adding metaproperties (such as data type) according to your application requirements. The
metadata cluster is a typedef, so adding properties should not break your code.
Figure 4.22 shows an example of the metadata cluster configured for two variables: Iteration and RandomData.
Before each data variable is transmitted, a packet is created that includes fields for the data size, the metadata ID,
and the data itself. Figure 4.23 shows the packet format.
The metadata ID field is populated with the index of the metadata array element corresponding to the data variable.
The receiving host uses the metadata ID to index the metadata array to get the properties of the message data.
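To make the packet layout concrete, the following sketch in Python (not the STM library itself) packs and unpacks a message in the [data size][metadata ID][data] form shown in Figure 4.23. The 4-byte and 2-byte field widths are illustrative assumptions; the STM library defines the actual sizes.

```python
# Sketch (Python, not the STM library) of the packet layout in Figure 4.23:
# [data size][metadata ID][flattened data]. Field widths here are assumptions
# for illustration only.
import struct

metadata = ["Iteration", "RandomData"]   # index in this array = metadata ID

def pack_stm_like(name: str, flat_data: bytes) -> bytes:
    meta_id = metadata.index(name)
    # assumed layout: 4-byte size of the remaining bytes, 2-byte metadata ID
    body = struct.pack(">H", meta_id) + flat_data
    return struct.pack(">I", len(body)) + body

def unpack_stm_like(packet: bytes):
    (size,) = struct.unpack(">I", packet[:4])
    (meta_id,) = struct.unpack(">H", packet[4:6])
    flat_data = packet[6:6 + size - 2]
    return metadata[meta_id], flat_data

packet = pack_stm_like("Iteration", struct.pack(">I", 42))
name, payload = unpack_stm_like(packet)   # -> ("Iteration", b"\x00\x00\x00*")
```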
API
The STM API is shown in Figure 4.24. For basic operation, it consists of a Read VI and a Write VI. It also features two
supplemental VIs to help with the transmission of the metadata, but their use is not mandatory. Each of the main VIs
is polymorphic, which means you can use them with different transport layers. This document discusses STM
communication based on the TCP/IP protocol, but STM also works with UDP and serial as transport layers. The API
for each layer is similar.
Figure 4.24. STM Functions
The data must be in string format for transmission. Use the Flatten to String function to convert the message data to
a string.
The STM Read Message VI is usually used inside a loop. Since there is no guarantee that data will arrive at a given time, use the
“timeout” parameter to allow the loop to run periodically and use the “Timed Out?” indicator to know whether to
process the returned values.
Example
Figures 4.25 and 4.26 show a basic example of STM being used to send RandomData and Iteration data across a network.
The server VI is shown in Figure 4.25, and the client VI is shown in Figure 4.26. Notice that the server VI sends the
metadata, implemented as an array of strings, to a remote host as soon as a connection is established. The example
writes two values: the iteration counter and an array of doubles. The metadata contains the description for these
two variables.
You need to wire only the variable name to the STM Write Message VI, which takes care of creating and sending the
message packet for you. Because of this abstraction, you can send data by name while hiding the underlying complexity
of the TCP/IP protocol.
Also note that the application flattens the data to a string before it is sent. For simple data types it is possible to use
a typecast, which is slightly faster than the Flatten to String function. However, the Flatten to String function also
works with complex data types such as clusters and waveforms.
As you can see, you can customize the protocol and expand it to fit your application requirements. As you add
variables, simply add an entry to the metadata array and a corresponding STM Write Message VI for that variable.
Receiving data is also simple. The design pattern shown in Figure 4.26 waits for the metadata when the connection
is established with the server. It then uses the STM Read Message VI to wait for incoming messages. When it
receives a message, it converts the data and assigns it to a local value according to the metadata name.
The Case structure driven by the data name provides an expandable method for handling data conversion. As you add
variables, simply create a case with the code to convert the variable to the right type and send it to the right destination.
One advantage of this design pattern is that it centralizes the code that receives data and assigns it to local values.
Another advantage is that the STM Read Message VI sleeps until it receives data (or a timeout occurs), so the loop is
driven at the rate of incoming data. This guarantees that no data is lost and no CPU time is wasted polling for
incoming data.
Note: Since the client has no knowledge of the metadata until run time, you must be sure that the application
handles all possible incoming variables. It is a good practice to implement a “default” case to trap any “unknown”
variables as an error condition.
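For readers more comfortable with text-based code, the following Python sketch mirrors the name-driven Case structure and the recommended "default" case; the variable names and handler functions are hypothetical:

```python
# Sketch (Python) of the name-driven dispatch that the Case structure in
# Figure 4.26 performs, including a "default" branch for unknown variables.
import struct

def handle_iteration(flat_data: bytes) -> None:
    (iteration,) = struct.unpack(">I", flat_data)
    print("Iteration:", iteration)

def handle_random_data(flat_data: bytes) -> None:
    count = len(flat_data) // 8
    values = struct.unpack(">%dd" % count, flat_data)
    print("RandomData:", values)

handlers = {
    "Iteration": handle_iteration,
    "RandomData": handle_random_data,
}

def dispatch(name: str, flat_data: bytes) -> None:
    handler = handlers.get(name)
    if handler is None:
        # default case: trap unknown variables as an error condition
        raise ValueError("unknown variable received: %r" % name)
    handler(flat_data)
```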
For more information on STM, view the following white papers on ni.com:
If you attempt to resume the connection immediately after termination, you see an error dialog with error code 60
from the TCP Listen.vi that is similar to the error dialog in Figure 4.27. This inability to reestablish the listening TCP
socket is caused by an orphaned socket. The orphaned socket is the socket that initiated the termination. If you wait
for 60 seconds before attempting to reestablish the connection, the error message goes away. However, many
systems cannot afford to be down for 60 seconds.
Figure 4.27. Error Code 60 is generated when the client/server connection is terminated.
The 60-second timeout is intentional. When an orphaned socket is identified, TCP/IP makes the socket unavailable for
60 seconds so that no other sockets can communicate with it. If you compare TCP/IP to a postal service, a termination
is equivalent to a family moving out of its home. The postal service temporarily takes down that mailbox so that if new
people move in, they do not receive mail that does not belong to them. TCP/IP makes the socket unavailable intentionally
so that it can send data reliably over the network.
Design your application so that only the client can terminate the connection
In most cases, the orphaned socket issue is more severe on the server side. Typically, the client port is assigned
dynamically but the server port is fixed. When the server closes the connection, the server port becomes locked out. If
the connection is always terminated from the client side, you can significantly reduce the risk of dealing with error 60.
When designing your server application, you need to follow three rules:
1. Do not close the connection in the event of a timeout; instead, ignore the timeout error and continue.
2. If you want to stop the server application, send a message to the client and let the client terminate the connection. Wait for a non-timeout error (62 or 66) and close the server application upon that error.
3. Do not close the server application upon an event. If an event occurs that should stop the application, send a
message to the client and have the client terminate the connection. Then close the server application upon error.
Figure 4.28. This example VI prevents the server from closing the network connection.
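The same rules can be expressed outside LabVIEW. The following Python sketch (with a hypothetical port) shows a server loop that ignores read timeouts and closes only after the client terminates the connection, so the fixed server port never initiates the close:

```python
# Sketch (Python) of the rules above: the server ignores read timeouts and
# keeps the connection open until the client terminates it, so the client
# side, not the fixed server port, absorbs the close. Port is hypothetical.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
    listener.bind(("", 55555))
    listener.listen(1)
    conn, _addr = listener.accept()
    conn.settimeout(1.0)          # periodic wakeup, like a TCP Read timeout
    with conn:
        while True:
            try:
                data = conn.recv(4096)
            except socket.timeout:
                continue          # rule 1: ignore the timeout, keep serving
            if not data:
                break             # client closed the connection: now shut down
            # ... process the request and reply here ...
```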
CVT Client Communication (CCC)
Consider using CCC if you are using the CVT discussed in Chapter 3 for interprocess communication. If you have
already created CVT tags and you want to publish this data on the network, CCC is a simple and elegant solution. It
is based on TCP/IP and works best for 1:1 system configurations. If you are working with a 1:N or N:1 system
configuration, consider binding your CVT tags to network-published shared variables when implementing network
communication.
The primary function of the client communication interface is to share information between CVT instances on the server (CompactRIO) and the client. It does this by mirroring portions of the CVT from one side to the other and vice versa.
Download: You can download and install the CCC library from the NI Developer Zone white paper CVT Client Communication (CCC) Reference Library by following the instructions under its Downloads section. The CCC library appears under the User Libraries palette.
Implementation
The underlying implementation of CCC is TCP/IP. Specifically, it is an adaptation of STM, which offers a platform-
independent way of sending messages by name while maintaining the performance and throughput of raw TCP
communication. In applications involving hundreds or perhaps thousands of tags, the importance of efficient
communication is obvious.
The CCC interface is made up of two separate elements. The server portion of the interface acts as a TCP server and
consists of a command parser that processes requests for data from the client. The client portion of the interface
acts as the TCP client and initiates the communication with the server. It then sends commands to the server for
configuring and sending/receiving data.
The CCC protocol implementation emphasizes performance optimization by configuring as much as possible on the
first call, leaving the repetitive operations with less work to do. Accordingly, the protocol is implemented in such a
way that the client must first identify all of the tags of interest using the Bound Address parameter. On the first
execution, the server looks up the tags by their index in the CVT. From that point forward, only the CVT index API is
used to ensure the highest possible performance.
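The configure-once pattern is easy to see in text-based pseudocode. The following Python sketch (not the CCC library; the tag names are hypothetical) resolves tag names to table indexes on the first call and uses only the indexes afterward:

```python
# Sketch (Python, not the CCC library) of the configure-once idea described
# above: resolve tag names to table indexes on the first call, then use only
# the indexes on every subsequent read or write.
class TagTable:
    def __init__(self, names):
        self._names = list(names)            # CVT-like table of tag names
        self._values = [0.0] * len(names)

    def resolve(self, bound_names):
        # First call: look up each requested tag by name once.
        return [self._names.index(n) for n in bound_names]

    def read_by_index(self, indexes):
        # Repetitive operation: index access only, no string comparisons.
        return [self._values[i] for i in indexes]

    def write_by_index(self, indexes, values):
        for i, v in zip(indexes, values):
            self._values[i] = v

table = TagTable(["SetpointRPM", "OilTemp", "VibrationLevel"])  # hypothetical tags
bound = table.resolve(["OilTemp", "VibrationLevel"])            # configure once
table.write_by_index(bound, [78.5, 0.12])
print(table.read_by_index(bound))                               # fast repetitive reads
```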
On both the client and server components, all of the repetitive operations are implemented with determinism in mind.
They allocate all of the necessary resources on the first call of each function and use a functional global variable to
store blocks of data between iterations. This ensures that no memory allocation takes place after the first iteration.
In most cases, you can use both the server and client elements of the interface as drop-in components. The server
needs only the TCP port configured (default is 54444), and the client needs the server’s IP address and port number.
Figure 4.31 shows an example of a CCC server application that includes the following steps:
1. Initialize the server-side CVT
2. Initialize the CCC server process, which executes asynchronously from the rest of the application
3. Use the CVT API functions (tags) to read and write data to and from the server-side CVT
In the corresponding client application, shown in Figure 4.32, the CCC write and read operations are implemented in
series with the rest of the HMI code. This ensures that the values for both the read and write tags are updated on
each iteration. The client application includes the following steps:
1. Initialize the client-side CVT
2. Initialize the CCC client connection using the server's IP address and port number
3. Use the CVT API functions (tags) to read and write data to and from the client-side CVT
4. Use the CCC Client Read and Write VIs to transfer data between the client-side CVT and the server-side CVT
Figure 4.32. CCC Client Example—Static Tag List
For more information on CCC, see the NI Developer Zone white paper
CVT Client Communication (CCC) Reference Library.
Web Services
LabVIEW web services, introduced in LabVIEW 8.6, offer an open and standard way to communicate with VIs over
the web. Consider a LabVIEW application deployed across a distributed system. LabVIEW provides features such as
Network Streams for establishing communication, but many developers need a way to communicate with these
applications from devices that do not have LabVIEW using standard web-based communication. With LabVIEW web
services, you can:
■■ Remotely monitor and control LabVIEW applications using custom thin clients
■■ Stream any standard MIME data types, such as text, images, and videos
Figure 4.33. You can use web services to communicate data over the web.
Web services act as a web API to any type of software, whether that software is controlling a complex embedded
system or simply a database store. To use a web service, a client sends a request to the remote system hosting the
service, which then processes the request and sends back a response (typically an XML, or eXtensible Markup
Language, message). The client can choose to display the raw XML data, but it is more common to parse the data
and display it to the user as part of a GUI.
Using this approach, you can create one or more VIs for your CompactRIO LabVIEW Real-Time target and build them
as web services. These web service VIs provide a standard interface for exchanging data between your embedded
devices and any computer connected via a network.
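As a rough illustration (in Python, with a hypothetical web service URL and response tag), a thin client can issue a standard HTTP request and parse the XML response without any LabVIEW software installed:

```python
# Sketch (Python) of a thin client calling a LabVIEW web service over HTTP.
# The URL, resource name, and XML tag are hypothetical; an actual service
# defines its own URL mappings and response terminal names.
import urllib.request
import xml.etree.ElementTree as ET

URL = "http://192.168.1.100:8080/MyWebService/ReadTemperature"  # hypothetical

with urllib.request.urlopen(URL, timeout=5) as response:
    xml_text = response.read().decode("utf-8")

# Parse the XML response instead of displaying it raw, as described above.
root = ET.fromstring(xml_text)
temperature = root.findtext("Temperature")   # hypothetical terminal name
print("Temperature:", temperature)
```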
Figure 4.34. You can host and execute web services on a remote system and access them via the standard HTTP protocol.
Adding Communication Mechanisms to Your Design Diagram
Once you have selected an appropriate mechanism for network communication, you can add this information
to your design diagram. The diagram in Figure 4.35 represents the turbine testing application discussed in
Chapter 1: Designing a CompactRIO Software Architecture.
The application uses Network Streams to send commands from the host PC to the CompactRIO
controller. Since this application uses the RIO Scan Interface to handle I/O, network-published I/O variables are used
to send raw I/O data to the UI update process.