Getting Started With SM Datum V7.2.4 2024 en
Copyright ©2023
Hangzhou Vision Datum Technology Co., Ltd.
Tel: 86-571-86888309
Add.: No.8, Xiyuan 9th Road West Lake District, Hangzhou 310030 China
All rights reserved. The information contained herein is proprietary and is provided solely for the purpose of allowing customers to
operate and/or service Vision Datum manufactured equipment and is not to be released, reproduced, or used for any other purpose
without written permission of Vision Datum.
Throughout this manual, trademarked names might be used. We state herein that we are using the names to the benefit of the
trademark owner, with no intention of infringement.
Disclaimer
The information and specifications described in this guide are subject to change without notice.
Latest Version
For the latest version of this guide, see the Download Center on our website at: www.visiondatum.com.
Technical Support
For technical support, e-mail: [email protected].
Introduction
SM-Datum is an application software designed for smart cameras. It is compatible with the SM2 series visual sensors and requires firmware version 2.0 or later.
After logging in to cameras via the Software, you can manage the cameras' projects, including creating, editing, deleting, copying,
and switching projects. While editing projects, you can configure the camera parameters, base images, tools, and output parameters.
Meanwhile, you can configure the I/O settings, communication settings, time settings, and password of the camera, as well as upgrade its firmware. In addition, the Software supports monitoring the status of multiple cameras simultaneously.
The Software supports conducting visual detection on both the images stored in and imported into cameras.
Key Features
● Easy installation: Requires no additional drivers.
● High compatibility: Compatible with 32/64-bit Windows 7, Windows 10, or Windows 11 operating system.
● Camera Management: Supports managing multiple cameras and monitoring their working status.
● Mode division: Displays detection results in the running mode and function settings in the configuration mode.
● Visual detection: Conducts visual detection on the real-time images from cameras or the images imported into cameras.
System Requirements
The SM-Datum requires that one of the following operating systems is installed on your computer:
Recommended
● Operating system: 32/64-bit Windows 7, Windows 10, or Windows 11 operating system
● CPU: Intel i3-8100T
● RAM: 8 GB or more
● Graphics card: a dedicated graphics card supporting 1366×768 or higher resolution; integrated graphics cards are not supported
● NIC: Intel Pro1000, I210, and I350 series gigabit network interface card
● The Software is installed together with the drivers required by the hardware. No other drivers are needed.
i ● Some anti-virus software might recognize the Software as a virus. Hence, it is recommended to add the Software to the allowlist of the anti-virus software or exit the anti-virus software before running the Software.
To make sure that the Software and SM2 Series cameras can run smoothly so that data can be transmitted stably, you need to configure the network before running the Software.
Steps
1. On your PC, open Control Panel, click Network and Internet > Network and Sharing Center > Change Adapter Settings, select
the corresponding network port, and click Properties to enter the property settings page.
2. Double-click Internet Protocol Version 4 to set the IP address of your PC. It is recommended to set a static IP address for the
network port so as to accelerate device search and connection. See the figure below for details.
3. Click Settings, select Link Speed or Advanced, set Speed and Duplex to Auto-negotiation or 100 Mbps full duplex to ensure
that the network speed is above 100 Mbps.
The login page will show up after launching the Software. You need to log in to a camera before controlling it.
You can refresh the camera list to show the enumerated cameras. You can select a camera to view basic information, edit IP
address, reset password, etc.
Camera List
The camera list is on the left side of the login page. All enumerated cameras will be listed.
Right-click a camera and click Stick on Top to stick the camera on the top of the list.
Select a camera to view the basic information about the camera, including the NIC that connects the camera, camera name, MAC
address, IP Address, subnet mask, gateway, manufacturer, model, serial number, and firmware version. See the picture below.
Enumerate Camera
The Software refreshes the camera list automatically. You can also click to refresh the list manually.
To add a remote camera, click , type in the IP address, and click OK. See the picture below.
i Make sure the connection between the remote camera and the PC is established when adding remote cameras.
Camera Status
Different icons of the camera represent different camera status.
Available: The camera is available. You can log in to the camera.
Occupied: The camera is occupied by another process. Log out of the camera from the current process before you can log in to it via the Software.
Unreachable: The camera's IP address is not reachable on the LAN. Edit the IP address before you can log in to the camera.
Edit IP Address
You can edit the IP address of a camera.
Steps
1. Right-click the camera whose IP address you want to edit.
2. Click Edit IP Address or click beside the IP address, subnet mask, or gateway information to show the Edit IP Address window.
3. Select an IP type according to actual needs.
4. Click OK.
Static IP: The camera communicates with the PC via the IP address, subnet mask, and default gateway you specify. Recommended.
DHCP: The camera IP address is negotiated with the PC and allocated automatically. Factory default.
LLA: The camera IP address is negotiated with the PC and the network segment is 169.254.
i If you change the IP type, the camera will reboot automatically for the change to take effect.
Before you can control a camera, you need to log in to the camera.
Steps
1. Select the camera you want to log in to.
2. Select a role you want to log in as. You can choose from Administrator, Technician, Maintainer, and Operator.
3. Enter the password.
i The default password is Abc1234. We highly recommend that you change the password after the first login to ensure the security of your device.
i The password strength of the device can be automatically checked. We highly recommend you change the password to one of your own choosing (using a minimum of 8 characters, including at least three of the following categories: upper case letters, lower case letters, numbers, and special characters) in order to increase the security of your product. We also recommend that you change your password regularly; especially in a high-security system, changing the password monthly or weekly can better protect your product.
Proper configuration of all passwords and other security settings is the responsibility of the service provider and/or end user.
i If you forget the password, you can click Forgot password to reset a password. See details in Reset Password.
Reset Password
If you forget the password of a camera, you can reset the password.
Steps
1. Select the camera whose password you want to reset.
2. Click Forgot password to show the password reset window.
3. Contact our technical support to get the password reset file. If you choose to send an email, please include the serial number of the camera in the content.
4. After getting the password reset file, click Import Resetting File.
5. Click Open to load the key file and reset the password.
i The camera password will be restored to Abc1234. We highly recommend you change the password after login to
ensure the security of your device.
After login, the Software will load the last running project by default. If there is no project in the camera, the Software will create a
blank project and run it. See the introduction of the main interface below.
4. Live View Panel: View the image of the camera in Camera Mode or the imported image in Image Mode.
5. Current Role: Display the current role, modify the password, switch roles, and manage roles.
6. Resource Information: View the memory usage, smart memory usage, and CPU usage. Memory usage refers to system memory usage, and smart memory usage refers to the usage of the MMZ (Media Memory Zone) for the algorithm.
7. More: Check the user manual and software version. Minimize, maximize, and exit the Software.
i You can run multiple instances of the Software at the same time, but each instance can log into one camera only.
Menu Bar
Before using GigE Vision cameras, you should make sure that the cameras and the PC are on the same subnet, and that the Jumbo
Frame functionality has been enabled in the Windows system.
Camera Management
Log in to the camera, and switch the camera. See details in Log Into Camera.
Project
Add, delete, import, export, and switch projects in the camera. See details in Project Management.
Communication
Add, set, or delete different communication tools for the camera. See details in Communication Settings.
Camera Settings
View camera basic information, synchronize time, etc. See details in Camera Settings.
IO Settings
Allocate IO signals, and configure project switch and project output. See details in IO Settings.
Operation Management
Search and export camera or software logs. Search, export, or delete images stored in the camera or imported to the camera. See
details in Operation Management.
Camera Monitoring
Start live view of one or multiple connected cameras in a window, and manage cameras, including displaying the camera list, logging in to cameras, switching the live view window division, and so on. See details in Camera Monitoring.
The project control panel shows the running result of the current project. You can control the project here.
When in Camera Mode:
Description:
OK/NG (upper-left)
Real-time running result of the current project.
Total
Total running times of the current project.
NG
Total NG times of the current project.
Reset
Reset the count to zero.
Run Once (Camera Mode)
Run the project one time.
Run/Stop
Run the project continuously or stop running.
RunAll (Image Mode)
Run the project for all imported images.
Edit
Enter the project configuration page to edit the project. Refer to Create a Project for details.
Left: tool name and OK condition; Right: tool settings and running result.
Statistic View: Upper-left: tool name; Lower-left: OK condition; Upper-right: tool settings; Lower-right: running result; Center: statistics data.
Different types of tools have different statistic graphs:
● If the result is judged by the count or similarity, the x-axis is the judging range and the green area is OK. You can drag to
adjust the OK range.
● If the result is judged by existing or not, the statistic graph will show the count of OK and NG.
Click to edit the tool.
Click to show the Result Display window. You can select tool(s) to always display tool results in the live view panel.
i The logic of saving result display configurations varies according to different camera series. For some devices, the configuration will be saved, while for other devices, the configuration will be restored to defaults once you exit the Software.
Camera Mode
When you select Camera Mode, the live view panel will show the real-time image of the camera after you run the project. Select a
tool in the tool list to show the real-time running result on the live view panel. See the picture below.
Image Mode
When you select Image Mode, the live view panel will show the imported images after you run the project. Select a tool in the tool
list to show the real-time running result on the live view panel. See the picture below.
The supported image formats are PNG, JPG, and BMP. Icon description:
● : Import a single image from PC.
● : Import all images in a folder from PC.
● : Import images from the camera. Refer to Acquired Image Management for details.
● : Clear all imported images.
i ● The SM2 series devices cannot store images in the camera, and thus they do not support importing images from the camera.
● For the image search during importing acquired image(s), see Acquired Image Management for details.
● For the management of imported images, see Imported Image Management for details.
Different roles have different permissions for various operations. The roles include administrator, operator, technician, and maintainer.
The administrator has the widest range of permissions, and can create other roles and set permissions for them.
Administrator
The administrator can change the password of all roles, switch roles, and configure roles. Click Administrator on the upper right of
the Client first to conduct the following operations:
Change Password
Click Change Password, enter the old password and the new password, then confirm the
new password. Click Save to finish changing the password.
Switch Roles
Click Switch Roles to switch to operator, technician, or maintainer. Enter the corresponding
password to finish switching roles.
Role Management
The administrator can create the roles of operator, technician, and maintainer, and set
corresponding passwords for them. The technician can be given the permissions of editing
the project, global settings, or both.
Technician
The operating permissions of a technician are set by the administrator. The permissions of a technician include:
● Edit camera settings if the permission is given.
● Edit communication settings if the permission is given.
● Create, copy, import, export, and delete a project if given the permission of creating a project, which means the permission of
configuring camera parameters, base image, tools application, and output settings are also given.
● Edit a project if given any of these permissions: configuring camera parameters, base image, tools application, and output
settings. If given all these permissions, the permission of creating a project will also be given automatically.
● Edit a tool via in the tool list on the main page if given the permission of tool application.
Click Technician on the upper right of the Client to switch to other roles.
Operator
The operator can only view the monitoring page and cannot configure the settings. Click Operator on the upper right of the Client
to switch to other roles.
Maintainer
The maintainer can edit a tool via in the tool list on the main page.
● The maintainer cannot edit communication settings and camera settings.
Click to configure language and system settings, and view the user manual and software information.
Create a Project
● Software Trigger: Available when you select Software as the trigger mode. You can click Execute to command the camera to
acquire images.
● Trigger Cache: If you do not enable this, trigger signals sent while the camera is dealing with the current acquisition trigger will not be received by the camera; if you enable this, up to 3 newly sent trigger signals will be cached while the camera is dealing with the current acquisition trigger, and the camera will deal with them after dealing with the last signal.
● Trigger Delay (μs): The delayed duration before the camera deals with the received triggering signal.
● Filter Time (μs): Available when the Trigger Source is Button. If the duration for pressing the trigger button is shorter than the Filter Time duration, the trigger will fail. This function helps avoid false triggers caused by accidental button presses.
● Communication String: Available when the Trigger Source is Communication. When the communication tool sends the camera the same communication string as the one you enter here, the camera will be triggered to acquire an image (see the sketch after this list).
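For illustration only: assuming the camera's communication is configured as a TCP server and the trigger Communication String is set to "trigger" (the IP address, port, and string below are placeholders, not defaults), a PC could fire an acquisition with a minimal Python sketch like this:

import socket

CAMERA_IP = "192.168.1.10"   # placeholder camera IP (camera acting as TCP server)
CAMERA_PORT = 8000           # placeholder port configured on the camera
TRIGGER_STRING = "trigger"   # must match the Communication String set for the project

# Open a TCP connection to the camera and send the trigger string once.
with socket.create_connection((CAMERA_IP, CAMERA_PORT), timeout=3) as conn:
    conn.sendall(TRIGGER_STRING.encode("ascii"))
    # Depending on the project's output settings, the camera may answer with a
    # result string such as OK or NG; read whatever comes back, if anything.
    try:
        reply = conn.recv(1024)
        print("Camera replied:", reply.decode("ascii", errors="replace"))
    except socket.timeout:
        print("No reply within the timeout window.")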
i ● Auto adjustment might take a while. You cannot operate the Software during auto adjustment.
● Only color cameras have the white balance feature.
Brightness Settings
Set the brightness of pictures acquired by the camera. You can adjust the image brightness manually or let the Software adjust
image brightness automatically.
● Set the Brightness Manually: Adjust the Exposure Time and Gain manually to change the image brightness.
● Auto Adjustment: After setting the Brightness Standard, click Auto Adjustment to let the Software adjust the image brightness
automatically.
Mechanical Focus
You can adjust the focal length of the camera.
● Focus Step: Set the stepping value of focal length adjustment.
● Focus Position: Adjust the focus position of the camera.
● Global Focus: Click Global Focus to let the camera find the optimum focal length.
● Regional Focus: Click Regional Focus and then draw an ROI on the right. The Software will focus in the ROI.
● Restore Default: Click Restore Default to restore the focal length to its default value.
i ● For users of SM3-SE series cameras, you need to focus manually on the camera.
● If you select Custom as the Light Source Control mode, you need to select light sources and set parameters for them.
● If you select All as the Light Source Control mode, the parameters can be set uniformly.
Other Parameters
Set parameters including frame rate, Gamma, image size, mirror image, greyscale image output, etc.
● Frame Rate: Set the frame rate of real-time acquisition. The maximum frame rate is the maximum frame rate of the camera itself.
● Gamma: Gamma is a non-linear mapping mechanism. When the value is between 0.5 and 1, the dark areas of the image become brighter; when the value is between 1 and 4, the dark areas become darker.
● Image Size Setting: Draw an ROI to set the image resolution. If you select Original, the whole image is displayed at its full resolution. If you select Custom, you can click Edit, adjust the image size in the Live View window, and click Finish, or edit the values in Image Width and Image Height to adjust the image size.
● Mirror Image: After enabling this, the image is mirrored horizontally, so the right side of the original image appears on the left and the left side appears on the right.
● Greyscale Image Output: The parameter for colored cameras. If you enable this, images will be output in Mono 8 format.
i If you have enabled Greyscale Image Output for a colored camera, the Color Image Parameters will be hidden,
and the tools for area of certain color and color contrast will be unavailable.
Awb Once: After enabling White Balance Enable, the White Balance Mode is automatic. In this mode, the camera adjusts the R, G, and B values automatically according to the real-time image color.
Awb Manual: Click Edit beside the White Balance Mode, edit the R, G, and B values, and click Edit again to finish.
After white balance processing, the image may be darker than before, and some colors may deviate from their standard values. In this situation, you can correct the image color with the CCM (color correction matrix) to make the colors more vivid; a rough illustration follows the CCM options below.
● CCM Reset: Click Reset and the camera will adjust the CCM values automatically.
● Edit CCM Parameters: Click Edit to edit the values in the following table. Click Edit again to finish.
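As a rough illustration only (not the camera's internal implementation), the sketch below shows what a 3×3 color correction matrix does to a single RGB pixel; the matrix values here are made up, whereas the camera uses the values shown in its CCM table.

import numpy as np

# Hypothetical 3x3 color correction matrix; the camera's actual values are
# the ones displayed when you click Edit under Edit CCM Parameters.
ccm = np.array([
    [1.20, -0.15, -0.05],
    [-0.10, 1.25, -0.15],
    [-0.05, -0.20, 1.25],
])

def apply_ccm(pixel_rgb):
    """Apply the color correction matrix to one RGB pixel (0-255 per channel)."""
    corrected = ccm @ np.asarray(pixel_rgb, dtype=float)
    # Clip the corrected values back into the valid 8-bit range.
    return np.clip(corrected, 0, 255).astype(np.uint8)

print(apply_ccm([120, 135, 110]))   # e.g. a slightly greenish pixel after white balance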
Steps
1. Get a base image via the window on the right. You can get a base image in one of three ways.
● Current Image: Click Current Image to get the image displayed on the right as the base image.
● Historical Image: Click Historical Image to select an image stored in the camera as the base image. You can search wanted images
by setting the filtering conditions.
i ○ The SM2 SE series devices cannot store images in the camera, and thus they do not support importing historical images from the camera.
○ For the image search during importing historical image(s), see Acquired Image Management for details.
● Import: Click Import to upload an image from the PC to the Software as the base image.
Shielded Area: The fixture will not be applied in the shielded area.
Match Polarity: Polarity refers to the color transition from the template area to the background.
● When the polarity of the analyzed area's edge is different from that of the template area, set the Match Polarity to Ignored to make sure the target can be found.
● If it is not necessary to find the target, you can set the Match Polarity to Considered for a quick search.
Angle Range: If the analyzed object is rotated and the rotation angle is smaller than the value you set, the object can be recognized; otherwise it cannot be recognized.
Min. Score: The minimum similarity between the template area and the analyzed area in the image. The target can be recognized only when the similarity is higher than the Min. Score. The Min. Score ranges from 0 to 1, and 1 indicates that the analyzed area in the image is exactly the same as the template area.
Samples: Sample code in various programming languages.
Algorithm Timeout (ms): The algorithm timeout. When the actual time consumed by the algorithms exceeds the configured value, the output result is NG. If the value is set to 0, the algorithm detection time is not limited.
Configure Tools
Add tools for the project according to your actual need.
Before You Start
Make sure you have set parameters for the camera and have set the base image.
Steps
i A camera may support multiple types of tools. The supported tool types of different camera models vary. See the camera datasheets for details about supported tool types.
1. Click on the top left to open the window for adding tools.
2. Select a category on the left and double-click a tool to enter the configuration page of the tool. Here we take Spot Count tool as
an example.
i You can click Image Feature Selector to select tools according to tool features, but some tools may not be shown.
i ● For different tools, the parameters vary. See Tool Introduction for details about different tools.
● Generally, it is not necessary to set the Extend Parameters unless you cannot get your wanted analyzing result.
4. Optional: Click Test Running to test.
5. Click Close to quit the page for setting parameters and go to the Tool Configuration page.
i Green indicates that the analyzing result is OK, while red indicates that the analyzing result is NG. You can click
to edit a tool.
6. Optional: Select a tool and click to duplicate the tool and the duplicated tool will be displayed at the bottom of the tool list.
7. Optional: Select a tool and click to remove the tool.
8. Optional: Click Clear All to remove all the tools in the list.
9. Optional: Click the name of a tool to edit its settings.
Output Settings
You can configure the parameters for the results of project running, image saving, result output, and I/O settings.
i ● Make sure you have set parameters and base image for the camera, and have selected tools for the project.
● The SM2 SE series devices cannot store images in the camera.
Project Results
You can set the rules for outputting the results of a project running.
1. You can select the project results.
All Tool OK
The results output by all the tools are OK.
Any Tool OK
The result output by any tool is OK.
Custom
Customize the logic of result output.
If you select Custom as the Project Results, you need to click Edit below to customize the running logic.
2. On the Custom Logic page, you are required to configure the Logic Type and Logic Data. When the logic data meets the standard of the logic type, the result will be OK; otherwise, the result will be NG.
Logic Type
You can select All OK, Any One OK, All NG, or Any One NG.
Logic Data
Click to subscribe to the status of each module as the logic data of the project.
3. Click Test Run to test the data, and the result will be displayed on the lower left of this panel and the image on the right.
i The Test Run button functions the same as that in the I/O part.
Scheduled Output
You can enable the scheduled output and configure the output schedule of partial results.
Make sure you have switched on the device's trigger mode. You can configure the output time (Output After) as needed after
enabling the scheduled output.
When the device receives a trigger signal, it will take the signal generation time as the start point and output corresponding results
after the configured time period.
● If the project running is completed within the configured time period, the communication module will output the result string (such
as OK or NG), and the I/O module will execute the action corresponding to the result.
● If the project running is not completed within the configured time period, the communication module will output NG, and the I/O
module will execute the action corresponding to the NG condition.
Save Image
With this function, you can save images in the camera.
After enabling Save Image, click Edit to configure the related parameters.
Image Saving
The rule for saving images.
Output Condition
Subscribe to the information such as module status of different tools as the reference for saving images.
Image Saving Strategy
You can select Not Save Image, Save Image (OK), Save Image (NG), and Save All.
Saving Format
The format of saved images, supporting BMP and JPG.
Storage Method
The rule for saving newly received images when there is no more space on the camera for saving images. You can select Stop
Saving Image or Overwrite. Stop Saving Image indicates newly received images will not be saved, while Overwrite indicates the
oldest images in the camera will be overwritten by the newly received images.
Image Naming Rule
The rule for naming the saved images.
Frame No.
If you enable this, the frame No. of the image will be contained in the image name.
Result
If you enable this, the running result of the Output Condition will be contained in the image name.
Start Tag and End Tag
The start character and the end character in the name of the saved images. You can customize it. Only letters, digits, and
characters including !, @, #, ^, &, (, ), -, _, =, +, ., ,, ;, ` are allowed.
Delimiter
The separator between each option. You can customize it. Only letters, digits, and characters including !, @, #, ^, &, (, ), -, _, =, +,
., ,, ;, ` are allowed.
i By default, the time and triggering time will be contained in the image name.
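To make the naming rule concrete, here is a small illustrative sketch; the tags, delimiter, field order, and timestamp format are hypothetical choices, not the camera's fixed behavior.

from datetime import datetime

# Hypothetical naming options; the real ones are configured under Image Naming Rule.
start_tag = "IMG"
end_tag = "END"
delimiter = "_"
frame_no_enabled = True
result_enabled = True

def build_image_name(frame_no, result, fmt="JPG"):
    """Assemble a saved-image file name from the options above."""
    parts = [start_tag, datetime.now().strftime("%Y%m%d%H%M%S")]
    if frame_no_enabled:
        parts.append(str(frame_no))
    if result_enabled:
        parts.append(result)          # e.g. "OK" or "NG" from the Output Condition
    parts.append(end_tag)
    return delimiter.join(parts) + "." + fmt.lower()

print(build_image_name(42, "NG"))     # e.g. IMG_20240101120000_42_NG_END.jpg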
Tool Results
With this function, the Software can output a result after processing the data of each module and strings.
1. After enabling Tool Results, click Edit to add data.
Separator
A separator separates different data. When the added data is a symbol, there will be no delimiter following it. You can select a
comma or a semicolon as a delimiter, or customize it.
i Data outputs only one piece of data, while Array outputs a group of data in order.
Failure Prompt
A window will pop up when a failure occurs.
Enter Failure Prompt
The information given in the pop-up window.
Condition Judge Output
When enabled, select a specific module and its corresponding status (OK or NG). The results will only be outputted when the
condition configured here is met. When disabled, all configured results will be outputted.
4. Click Test Run to test the data, and the result will be displayed on the lower left of this panel and the image on the right.
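On the receiving side (for example, a PC that gets the result string over TCP or UDP), the string can be split back into its fields. The layout below, with a semicolon delimiter and the result first, is only an assumed example; the actual layout depends entirely on the data you add under Tool Results.

def parse_tool_results(raw: bytes, delimiter: str = ";"):
    """Split a received result string into its individual data fields."""
    fields = raw.decode("ascii", errors="replace").strip().split(delimiter)
    # Assumed layout: the first field is the overall result, the rest are tool data.
    return {"result": fields[0], "values": fields[1:]}

print(parse_tool_results(b"OK;12.5;3;spot_count"))
# {'result': 'OK', 'values': ['12.5', '3', 'spot_count']}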
I/O
If the result is qualified, the camera can send signals to connected devices.
1. After enabling I/O, click Edit to set related parameters.
On the IO Output panel, the IO signals whose IO types are set as Output will be loaded and displayed. You can enable an IO signal
according to your need and set the related parameters.
Output Condition
You can subscribe to the module status of different tools as the reference for IO output.
Output Hold
You can set the IO signals and keep the electrical level status.
Disable
Control the IO signals output by configuring Duration and Delayed Duration.
TriggerReset
When the device receives a trigger signal, it sets the output signal to a low electrical level, and then sets the output signal to
the corresponding electrical level according to the subscribed module result and Out Type. The electrical status keeps to the
next trigger signal.
Duration Time
The duration of outputting signals.
Delayed Time
The camera outputs signals after the Delayed Duration.
Valid Electrical Level
You can select Normally Closed or Normally Open as the valid electrical level. Normally Closed is used for low electrical levels,
while Normally Open is used for high electrical levels.
Out Type
The condition for outputting IO signals. If you select ExposureOutPut, the camera outputs signals when it starts exposure, instead
of being controlled by the Output Condition you set before. If you select NgOutPut or OkOutPut, the camera outputs signals
when the subscribed condition is triggered.
Cache Enable
If you enable this, when the camera receives a new signal while it is outputting signals, the newly received signal will be saved temporarily, and the camera will output it after finishing the last output.
This parameter varies according to different series cameras.
2. Click Test Run to test the data, and the result will be displayed on the image on the right.
Run Project
After logging in to the Software or loading a project, you can manage the project and set its running mode on the following page.
In the upper-left area, you can manage the project, see Menu Bar for more details; in the lower-left area, you can see the tools
added to the project, see Tool List for more details; in the right area, you can select the display mode as "Camera Mode" or "Image
Mode", and view the detection result, see Live View Panel for more details.
Project Settings
You can create, import, export, edit, copy, and delete projects.
● Create a Project: Click Project → Create to enter the Edit Project page.
● Import a Project: Click Project → Import and select the to-be-imported project file and click Open.
i If the edited project is not the running one, a prompt will pop up asking whether to switch the project and start
editing. Click OK to switch the project and start editing.
● Copy a Project: Click , edit the project name and click OK. By default, the project name is "the current project name_Copy".
● Export a Project: Click , select the saving directory and click Save.
If you have set multiple projects for one camera, you can switch among them.
You can switch projects manually or enable Auto-Switch to switch projects automatically.
● Manual Switch: Click Switch in a project area to load this project.
i ○ A prompt window will pop up to remind you to save the currently used project before switching.
○ If a project is currently in use, the Switch button changes to Current Project, and that project cannot be switched to.
● Auto-Switch: Click the button beside Switch to set auto-switch. You can select from Off, TriggerIO, and TriggerCommunication.
○ Off: Auto-Switch is disabled. The button is displayed as "Auto-Switch: Off".
○ TriggerIO: Switching project automatically by triggering the signal source via I/O. You need to set the I/O trigger source.
○ TriggerCommunication: Switching project automatically via the string sent by the communication tool. You need to set the
communication string and the return string.
TriggerIO
If the auto-switching mode of a project is set to TriggerIO, the project will be automatically set to the running project when the
received signal meets the requirements of the configured switch trigger source and is received within the IO synchronization time
range. To enable TriggerIO, you need to configure the IO synchronization time, switch trigger source, and project name. For details,
see Project Switch Settings.
TriggerCommunication
If the auto-switching mode for a project is set to TriggerCommunication, the project will be automatically set to the running project
when the communication protocol of the project sends a message "Communication String + Spacebar + Project Name".
i ● Communication String: The string sent to the device via a communication protocol. The default value is "switch".
● Switch Return String: The value returned by the device after a successful project switch. The default value is "ok".
i All communication protocols except for FTP can send messages to switch the project. Modbus cannot receive
the switch return string after a successful project switch.
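For instance, assuming the project's communication runs over TCP with the camera as the server and the defaults above, a PC could request a switch to a project named "Project_B" (the address, port, and project name are placeholders) roughly as follows:

import socket

CAMERA_IP = "192.168.1.10"      # placeholder camera address
CAMERA_PORT = 8000              # placeholder port of the project's TCP communication
SWITCH_STRING = "switch"        # default Communication String
PROJECT_NAME = "Project_B"      # placeholder target project name

with socket.create_connection((CAMERA_IP, CAMERA_PORT), timeout=3) as conn:
    # Message format: "Communication String + Space + Project Name"
    conn.sendall(f"{SWITCH_STRING} {PROJECT_NAME}".encode("ascii"))
    reply = conn.recv(1024).decode("ascii", errors="replace")
    if reply.strip().lower() == "ok":    # default Switch Return String
        print("Project switched successfully.")
    else:
        print("Unexpected reply:", reply)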
TriggerIO
When the auto-switch mode of a project is set to TriggerIO, this project will be set to Current Project if the received signal meets the requirement of the configured I/O trigger source and is received within the line back time.
The I/O trigger source consists of five digits, and each digit is either 0 or 1. The digits, from the first to the last, correspond to the I/O
signal's Line4, Line3, Line2, Line1, and Line0. If the I/O type of the I/O signal is set to SolutionSwitch and the trigger type is set to
Level High, the corresponding digit is 1; if the I/O type is set to SolutionSwitch and the I/O trigger mode is set to Level Low, the
corresponding digit is 0; if the I/O type is set to Trigger or Output, the corresponding digit is 0.
Moreover, you need to set the Line Back Time in I/O Settings. If several I/O signals are set to Level High when switching a project, the
interval between the first and the last signal received should be within the configured line back time.
If the I/O trigger source of a project is set to 10110 and the line back time is set to 3 seconds, it is necessary to ensure that the I/O type of Line1, Line2, and Line4 is set to SolutionSwitch, and these three signals should be sent to the camera within 3 seconds so that the project can be switched automatically.
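As a minimal sketch of the digit mapping described above (the line configuration below simply re-expresses the 10110 example; it is not read from a real camera):

# Digits run from Line4 (first) down to Line0 (last), as described above.
# A line contributes 1 only when it is a SolutionSwitch line triggered on Level High.
line_config = {                      # example configuration matching 10110
    4: ("SolutionSwitch", "LevelHigh"),
    3: ("Output", None),
    2: ("SolutionSwitch", "LevelHigh"),
    1: ("SolutionSwitch", "LevelHigh"),
    0: ("Trigger", None),
}

def trigger_source(config):
    digits = []
    for line in (4, 3, 2, 1, 0):
        io_type, level = config[line]
        digits.append("1" if (io_type == "SolutionSwitch" and level == "LevelHigh") else "0")
    return "".join(digits)

print(trigger_source(line_config))   # prints "10110"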
Trigger Communication
If the auto-switch mode of one project is set to "TriggerCommunication", and the communication tool used in this project sends the
required string and project name to the camera, this project will be set to "Current Project".
● Communication String: The content of the string sent by the communication tool, which is set to "Switch" by default.
● Switch Return String: The content of the string sent to the communication tool by the camera after switching the project, which is
set to "OK" by default.
i Except for the FTP tool, all of the other communication tools can send commands to cameras to switch projects, while the Modbus tool cannot receive the switch return string after switching projects.
3. Select a method in the list to set the parameters. See details in Configure Communication Parameters.
4. Switch on to enable the communication method.
5. Set related parameters.
i ● Parameters vary for different communication methods.
● Before configuring FTP parameters, configure the FTP server.
FTP
FTP communication can store the qualified images on FTP in the format of JPG.
FTP Service Configuration
Before setting FTP parameters, configure the parameters for FTP service and enable FTP Service.
Server Port
Configure the port number of the FTP service, which should be the same as the port used for FTP communication.
User Name
Configure the user name to log in to the FTP service.
Password
Configure the password to log in to the FTP service.
Saving Path
Set the saving path for the data of FTP service in the local PC.
Output Condition
Condition for saving images on FTP. Click to subscribe information on modules such as camera image, base image, tools, and
communication.
Host IP
IP address of the FTP server. 127.0.0.1 represents that the current PC is used as the FTP server.
Host Port
Port number of the FTP server.
Anonymous Login Enable
Switch on if the FTP server does not require a user name and password to log in.
i When anonymous login is enabled, the user name is shown as anonymous and the password is empty. Entering a user name or password has no effect.
Directory Type
Not Create
All the images will be stored in the default file folder in the server.
Create
New file folders will be created for storing each day's images.
Max File Num
The maximum number of images allowed in a single file folder.
Dir Increment Enable
If you do not enable this, new images will not be stored when the number of stored images reaches the upper limit of allowed images. If you enable this, a new file folder will be created for storing new images when the number of stored images reaches the upper limit of allowed images.
FTP Link Check
Click Execute to test the connection with the FTP server with the settings above.
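Independently of the built-in FTP Link Check, you can sanity-check the same FTP account from the PC with Python's standard ftplib; the host, port, and credentials below are placeholders and must match your FTP Service Configuration.

from ftplib import FTP

HOST = "127.0.0.1"        # the PC running the FTP service in this example
PORT = 21                 # must match the configured Server Port
USER = "camera"           # placeholder user name
PASSWORD = "ftppass"      # placeholder password

# Connect with the same account the camera uses and list the stored images.
ftp = FTP()
ftp.connect(HOST, PORT, timeout=5)
ftp.login(USER, PASSWORD)             # call ftp.login() with no arguments for anonymous login
print("Server directory listing:")
for name in ftp.nlst():
    if name.lower().endswith(".jpg"): # FTP communication stores qualified images as JPG
        print(" ", name)
ftp.quit()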
UDP Server
The camera can communicate with the UDP tool.
You need to specify the parameters according to the setting of the UDP tool.
Local IP
IP address of the camera. The IP address of the client of the UDP tool should be the same as this address.
Local Port
The port number of the client of the UDP tool.
Target IP
The IP address of the server of the UDP tool.
Target Port
The port number of the server of the UDP tool.
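If the Target IP and Target Port point at a PC, that PC can listen for the camera's UDP packets with a short script such as the sketch below; the port is a placeholder and must match the Target Port configured on the camera.

import socket

LISTEN_IP = "0.0.0.0"     # listen on all local interfaces of the target PC
LISTEN_PORT = 9000        # placeholder; must match the Target Port set on the camera

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((LISTEN_IP, LISTEN_PORT))
print(f"Waiting for camera datagrams on UDP port {LISTEN_PORT} ...")

while True:
    data, addr = sock.recvfrom(4096)
    # Each datagram carries whatever result string the project is configured to output.
    print(f"From {addr}: {data.decode('ascii', errors='replace')}")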
TCP Client
The camera can be used as the TCP client and communicate with a specific TCP server. Specify the IP and port number of the TCP
server.
TCP Server
The camera can be used as the TCP server and communicate with a specific TCP client. Set the local port number and then enter the
IP address and port number on the TCP client.
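For example, when the camera works as a TCP client, the PC it connects to only needs an ordinary TCP server; a minimal sketch (the port is a placeholder to be entered on the camera together with the PC's IP address) could look like this.

import socket

LISTEN_PORT = 7000        # placeholder; enter this port and the PC's IP address on the camera

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", LISTEN_PORT))
server.listen(1)
print("Waiting for the camera to connect ...")

conn, addr = server.accept()
print("Camera connected from", addr)
with conn:
    while True:
        data = conn.recv(1024)
        if not data:
            break             # the camera closed the connection
        print("Received:", data.decode("ascii", errors="replace"))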
Serial Port
The camera can communicate with another device through the RS-232 serial ports.
i Make sure the camera is connected to the device with RS-232 serial port via a 17-pin cable. Refer to the quick
start guide of the camera for instructions on connecting serial ports.
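On the PC side of the RS-232 link, a quick test can be done with the third-party pyserial package; the COM port, baud rate, and the strings exchanged below are placeholders that must match the camera's serial settings and project configuration.

import serial  # third-party package: pip install pyserial

# Placeholder settings; match them to the camera's Serial Port configuration.
port = serial.Serial("COM3", baudrate=9600, bytesize=8, parity="N",
                     stopbits=1, timeout=2)

port.write(b"trigger\r\n")          # e.g. a communication trigger string, if one is configured
line = port.readline()              # read one line of whatever the camera sends back
print("Camera replied:", line.decode("ascii", errors="replace"))
port.close()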
MELSEC
The camera can read/write the soft elements of the PLC via the MELSEC protocol.
Parameter description:
Basic Information Settings
Server IP
IP address of the MELSEC PLC.
Server Port
Port No. of the MELSEC PLC.
Frame Type
The device can use ASCII and binary data in communication via MELSEC protocol, so its frame type includes BIN_1E, BIN_3E, BIN_4E,
ASCII_1E, ASCII_3E, and ASCII_4E.
i
● When the device accesses the PLC of MELSEC Q series, select 1E, 3E, or 4E as the frame type.
● When the device accesses the PLC of MELSEC FX3U series, select 1E as the frame type.
Network Number
The network number of the target station.
Node Number
The number of the target nodes.
Byte Order Enable
If it is enabled, the data will be stored in byte order.
Poll Interval
Polling frequency. The client will send requests to the server and the server will receive requests at the set time interval.
Timeout
The time that the device waits for the response returned from the PLC.
Control Settings / Status Settings / Result Area Settings / Instruction Area Settings
The parameters of the different modules are shown below.
Add Space
Type of the soft element. By default, it is D, which represents the data register.
Add Deviation
Offset value of the data.
Data Size
Max. number of data stored in the soft element.
Keyence KV
The camera can communicate with the PLC via the Keyence KV protocol.
Parameter description:
Basic Information Settings
Communication Mode
● Client: The camera can be used as the server to communicate with the Keyence KV client.
● Server: The camera can be used as the client to communicate with the Keyence KV server.
Client/Server IP
● For Client communication mode, enter the IP address of the Keyence KV client.
● For Server communication mode, enter the IP address of the Keyence KV server.
Poll Interval
Polling frequency. The client will send requests to the server and the server will receive requests at the set time interval.
Control Settings / Status Settings / Result Area Settings / Instruction Area Settings
The parameters of the different modules are shown below.
Data Type
Type of the data transmitted between the device and the Keyence KV client/server.
Soft Element Type
Type of the soft element. By default, it is D, which represents the data register.
Soft Element Address
The address of soft element in the corresponding modules (control, status, result, and instruction).
Soft Element Size
Max. number of data stored in the soft element.
Manage Cameras
Click Camera Management on the top right to open the Camera Management window.
Camera List
The Camera List supports viewing the camera status and the basic information about the cameras, logging in to/out of cameras, changing the IP address, and sticking a camera to the top. See Camera List and Log Into Camera for details.
Window Division Settings
The Camera Management supports setting the division of the monitoring window and selecting cameras to display images on
specified windows.
Click the icons above the window to select a division mode. No more than 9 cameras can be monitored at the same time.
After logging into a camera, drag the camera to a window to link the camera to the window. Click Stop Monitoring to unlink the
camera from the window.
If you switch the division mode, the monitoring will stop automatically and the linkage will expire.
Default Settings
In this area, you can configure default settings of Camera Monitoring, including displaying the camera monitoring page by default
after launching the Client, maximizing the monitoring page by default, reconnecting the camera after it goes offline, and setting the
time period (sec) after which the auto connecting will stop.
Monitoring Interface
Run Monitoring lets you view the status of a single camera or multiple cameras and perform some simple operations.
When there is no camera on the monitoring interface, you can click Camera Management to add and link cameras.
When there are linked cameras on the monitoring interface, you can view their real-time status and operate them.
The monitoring interface of the single-screen layout mode is shown below.
The functions and operations of different layout modes are basically the same.
i The project execution result (OK or NG) is displayed in the upper-left corner. Meanwhile, the linked camera information, including the project name, OK rate, total NGs, total detections, and time cost, is displayed. In 3x3 screen mode, only total detections and total NGs will be displayed.
On the Run Monitoring interface, you can perform the following operations:
● : Execute current project once.
● : Execute current project continuously, and click to stop the execution.
● /Switch Project: Open the device's projects management window to switch the project.
● /Edit: Switch to the main interface to edit the project parameters of current device.
● : Reset the displayed detection data, including the OK rate, total NGs, total detections, and time cost.
You can view the camera's basic information and edit User ID.
The information includes model, IP type, serial No., IP address, user ID, subnet mask, MAC address, gateway, firmware version, and
manufacturer. You can edit the user ID. Enter the User ID and click Save.
Check Time
Manually Correct
If you select the correction mode to Manually Correct, the page will be displayed as follows.
Device Time
The current time of the camera.
Corrected Time
Click to correct the camera time.
Synchronize
After checking this, the PC time will be set as camera time. Click OK to synchronize the time.
NTP
Enter the service address and NTP Port, and then set a correcting interval. Click OK. The camera's time will be synchronized with the
NTP server according to the interval you set.
i If the service address is 127.0.0.1, the NTP server is the current PC where the Software runs.
More Settings
In More Settings, you can import/export settings, upgrade the firmware, set the password, restore to factory defaults, and reboot
cameras.
Quick Settings
Quick Settings supports importing and exporting the parameters configured in I/O Settings, Communication, and time correction in tar.gz format.
● Click Export Settings to export the settings of the current camera to the PC.
● Click Import Settings to import the settings saved in the PC to the current camera to configure the parameters quickly.
Other Settings
In Other Settings, you can upgrade camera firmware, change password, restore to factory settings, and reboot your camera.
Firmware Upgrade
Supports upgrading the firmware of your camera.
Before You Start
Make sure you have got the firmware for this model of camera from technical support.
i If you upgrade with firmware of a different model, it will lead to upgrade failure.
Steps
1. Unzip the firmware package to get the firmware file in .dav format.
2. Click Firmware Upgrade, and a window as shown below will pop up.
i If the firmware is already in the process of being upgraded, it will automatically stop.
i During the upgrade, do not disconnect from the camera or cut off power for the camera. The device will reboot
after upgrade, and you can reconnect the camera.
Password Settings
Supports resetting the camera password.
Steps
i The default password is Abc1234, and it is highly recommended that you change your password when you first
log in.
1. Click Password Settings.
2. Enter your current password in the Old Password field.
i Click to reveal the password. When the typed text is momentarily displayed as characters instead of black dots, you can check it for mistakes.
3. Enter your new password in the New Password field.
i Your password must contain: 6 to 15 characters; at least 2 of the following: uppercase, lower case, numeric, or
special characters excluding (, ), =, and ^.
i You need to log in again using your new password after resetting your password.
i The I/O, IP Address, Time, NTP Correction, User Name, and Password of the camera cannot be restored.
Click Factory Reset, and then enter your password in the pop-up window, and finally click OK.
Reboot
Click Reboot and then OK to reboot your camera.
You will then be redirected to the login page, and you can reconnect the camera after it is enumerated.
Export License
The Software supports exporting the license of a camera to the PC.
Click Camera Settings → More Settings → Other Settings → Export License, and then select a saving path to download the camera
license to the PC.
IO Allocation
The Software supports setting IO types, trigger types, output types, and filter time.
IO
All IO lines are supported by the camera.
IO Type
Select Trigger, SolutionSwitch, or Output.
The IO settings vary with different device models.
● SM2-SE 06 Focal Length Cameras: These cameras have 4 I/O lines that serve as both inputs and outputs, which can be set to Trigger, SolutionSwitch, or Output. The IO types of Line0/1 or Line2/3 are associated, meaning their settings are synchronized. For example, if you set Line0 to SolutionSwitch or Output, Line1 will be SolutionSwitch or Output; if you set Line0 to Trigger, Line1 will be SolutionSwitch.
● SM2-SE 08/12/15mm Focal Length Cameras: These cameras have 4 I/O lines: 1 input, 1 output, and 2 bidirectional configurable inputs/outputs. Line0/1 can be set to both input and output, and their IO type can be set to Trigger, SolutionSwitch, or Output. Line2 can be set to input, and Line3 can be set to output.
● SM2-1408DL and SM2-2368DL Cameras: It has the following 8 IO lines: 2 inputs, 3 outputs, and 3 bidirectional configurable inputs/
outputs. Line 0/1 can be set to Trigger or SolutionSwitch; Line 2/3/4 can be set to Trigger, SolutionSwitch, or Output; Line 5/6/7 can be
set to Output.
● SM2-1216DL, SM2-2048DL, and SM2-2432DL Cameras: totally 6 IOs, including 3 inputs and 3 outputs. Line 0/1/2 can be set to
Trigger or SolutionSwitch, while Line 3/4/5 can be set to Output.
Trigger Type
When the IO type is Trigger, set this parameter.
Output Type
Polarity of output signals.
i SM2-SE 06 Focal Length Cameras only supports NPN, and SM2-2048DL-M/C08 only supports PNP.
Filter Time
When the IO type is Trigger or SolutionSwitch, set this parameter. If the trigger signal duration is shorter than the filter time, the trigger
signal will not be responded to.
If you create multiple projects and set the auto-switch mode of a project to TriggerIO, this project will be selected as the running
project when the signal meets the requirements of the switch trigger source and is received within the configured IO synchronization
time.
i ○ If the IO type is SolutionSwitch and the trigger type is set to LevelLow, the number is 0.
○ If the IO type is Trigger or Output, the number is *.
Example
If an SM2-DL series camera's project trigger source is *101* and the IO sync time is 3, we can conclude that the IO type of Line0 is Trigger, the IO type of Line4 is Output, and the IO types of Line1 to Line3 are all set to SolutionSwitch. To automatically switch to this project, the LevelHigh signals of Line1 to Line3 should be sent to the camera within 3 seconds.
You can view all devices' IO outputs and configure the relevant parameters to ensure that the devices output signals as required.
Output Condition
Subscribe to module statuses of different tools to determine the type of IO output.
Output Hold
The location of IO output signals and the electronic level.
Disable
Set the duration time and delay time to control IO output signals.
TriggerReset
When the device receives a trigger signal, the IO output signal is set to levellow. Then, based on the subscribed module results and
the set output type, the IO output signal is set to the corresponding electronic level. The level persists until the next trigger signal is
received.
ReverseReset
If the module status subscribed by the device output is opposite to the configured output type, the IO output signal is set to levellow until a module status matching the output type is received.
Duration Time
The duration of the output signal remaining at a specific electronic level.
Delay Time
The device outputs signals after the configured delay time ends.
Valid Electrical Level
Normally Close means valid levelhigh, while Normally Open means valid levellow.
Out Type
When the device meets the option you select, the IO signals will be output.
Cache Enable
If enabled, the subsequent output signal will be cached while the previous signal is still being output. Once the previous output ends,
the cached signal will be automatically output. No more than 3 signals can be cached.
This parameter varies with different series of devices.
i ● SM2-SE, SM2-1216DL, SM2-2048DL and SM2-2432DL series do not support this parameter.
● SM2-1408DL and SM2-2368DL series cache no more than 3 output signals.
i If you click Pick up, the software will search the images acquired in all periods.
6. Click Search.
7. Double-click an image to zoom in on it.
8. Optional: Check one or multiple images, and click Delete to delete the image(s).
9. Optional: Check one or multiple images, and click Export to export the image(s) to your PC.
i The pictures are imported into cameras via the image-mode preview window; see Live View Panel for more details.
As the steps to manage imported images are highly similar to those to manage acquired images, you can see Acquired Image
Management for more details.
The imported images can be in BMP, JPG, or PNG format.
Export Data
Save the following data of the running project to your local device: project name, total project running times, OK times, NG times,
module ID, module name, OK condition, and running results.
Select the File Saving Path, and then enable the Export Data to save the data to the PC in CSV format.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or or to draw on the base image; click to set the whole base image as the detection area.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shield Area, and then click to draw
polygons on the base image.
3. Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture
configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4. Set recognition-related parameters.
Color Eyedropper
Set the color value range to be recognized or not need to be recognized. Click and move the cursor on the base image to
select the area to be recognized. The pixels with the same color value will turn to green. You can click and select an area that
does not need to be recognized.
Recognition Range
Click / and move the cursor on the area defined by the color dropper to enlarge or narrow down the to-be-recognized
area.
5. In Judge Method, set Upper Limit and Lower Limit. If the detection result is within the configured range, the result is OK. Otherwise,
the result is NG.
i The Judge Method parameters vary according to the camera type. For some cameras, the Judge Method parameter is the similarity range, which is from 0% to 200%. If the value cannot satisfy your need, you can switch 200% to 900%. The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on 900%.
6. Optional: Set extra parameters in Extend Parameter if the result is not what you have expected.
L2L Angle
Find straight lines in two ROIs and calculate the angle between the two straight lines. You can configure range settings, recognition
settings, judge method, and extend parameter for the tool.
● Range Settings: You can set area-related parameters.
○ Detection Area: Click and click on a position on the two lines respectively.
i Click beside the Detection Area to display the tutorial video on the top left of the live view image.
○ Shielded Area: The shielded area will not be analyzed by the tool. Click Edit → and draw the shielded area on the base image
in the live view image.
○ Independent Fixture: When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture
configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
i For some cameras, the Similarity Range will be displayed here. When the similarity between the detected angle and that of the base image is within the range, the result will be OK; otherwise, the result will be NG. By default, the similarity range is from 0% to 200%. If the value cannot satisfy your need, you can switch 200% to 900%. The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on 900%.
● In Extend Parameter, you can set extra parameters for line1 and line 2 if the result is not what you have expected.
○ Straightness Accuracy: the minimum ratio of the number of points used to make up lines to the total number of points. When a
line's line rate is higher than the configured rate, it will be recognized as a line, otherwise it will not. The more points selected, the
more accurate the line will be.
○ Edge Polarity: It represents the transition of color at the edge.
›Black to White: The tool recognizes edges from area with higher grey scale to the area with lower grey scale.
›White to Black: The tool recognizes edges from the area with lower grey scale to the area with higher grey scale.
›All: The tool recognizes edges from area with higher grey scale to area with lower grey scale and from area with lower grey scale
to area with higher grey scale.
○ Edge Type: You can select "The Best", "The First", "The Last", and "Manual".
›The Best: The tool will find the most suitable points to make up lines.
›The First: The tool will find the points nearest to the start point to make up lines.
›The Last: The tool will find the point nearest to the end points to make up lines.
›Manual: Based on the green lines in the greyscale layout map, you can manually select points to make up lines.
i If the edge type is set to "Manual", the edge polarity is set to "All" automatically and it cannot be changed.
After setting the above parameters, you can go to Output → Tool Results to set which data to transmit to the communication tool
by the tool.
The data that can be transmitted are as follows: module status, point measure intersection point angle, point measure intersection point
angle X, point measure intersection point angle Y, point measure distance, point measure line 1 start point X, point measure line 1
start point Y, point measure line 1 end point X, point measure line 1 end point Y, point measure line 1 angle, point measure line 2
start point X, point measure line 2 start point Y, point measure line 2 end point X, point measure line 2 end point Y, point measure
line 2 angle, result similarity, detection area center X, detection area center Y, detection area width, detection area height, detection
area angle, result quantity, and base value.
Diameter Measurement
In the detection area range, measure the diameter of the circle.
You can configure range settings, recognition settings, judge method, and extend parameter for the tool.
● Range Settings: You can set area-related parameters.
○ Detection Area: Click and click the base image to draw a circle on the base image.
i Click beside the Detection Area to display the tutorial video on the top left of the live view image.
○ Shielded Area: The shielded area will not be analyzed by the tool. Click Edit → and draw the shielded area on the base image
in the live view image.
○ Independent Fixture: When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture
configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
● In Recognition Settings, you can set recognition-related parameters.
○ Sensitivity Adjustment: Adjust the sensitivity when measuring the circle. When adjusting the parameter, the result will be displayed on the map located on the top right, and the red area indicates the sensitivity range.
● In Judge Method, set the Upper Limit and the Lower Limit. If the detection result is within the range you set, the tool will output
OK, otherwise the tool will output NG.
i For some cameras, the Similarity Range will be displayed here. When the similarity between the detected result and that of the base image is within the range, the result will be OK; otherwise, the result will be NG. By default, the similarity range is from 0% to 200%. If this range cannot satisfy your need, you can switch the upper limit from 200% to 900%. The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on 900%.
● In Extend Parameter, you can set extra parameters for the circle if the result is not what you have expected.
○ Circle Rate: The minimum ratio of the number of points used to make up circles to the total number of points. When a circle's
circle rate is higher than the configured rate, it will be recognized as a circle, otherwise it will not. The more points selected, the
more accurate the circle will be.
○ Edge Polarity: It represents the direction of the grey scale transition (color change).
›Black to White: The tool recognizes edges from the area with lower grey scale to the area with higher grey scale.
›White to Black: The tool recognizes edges from the area with higher grey scale to the area with lower grey scale.
›All: The tool recognizes edges from area with higher grey scale to area with lower grey scale and from area with lower grey scale
to area with higher grey scale.
○ Edge Type: You can select "The Best", "The Biggest", "The Smallest", or "Manual".
›The Best: The tool will find the most suitable points to make up circles.
›The Biggest: The tool will find the points that are the most distant from the center to make up circles.
›The Smallest: The tool will find the points nearest to the center to make up circles.
›Manual: Based on the green lines in the greyscale layout map, you can manually select points to make up circles.
i If the edge type is set to "Manual", the edge polarity is set to "All" automatically and it cannot be changed.
After setting the above parameters, you can go to Output → Tool Results to set which data the tool transmits to the communication tool.
The data that can be transmitted are as follows: module status, detection area center X, detection area center Y, detection area width,
detection area height, detection area angle, circle diameter, circle center X, circle center Y, circle radius, result quantity, and base
value.
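For intuition, a diameter can be recovered from detected edge points with a standard least-squares circle fit. The Python sketch below is a generic illustration with invented sample points, not the tool's actual implementation.

import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit.

    points : (N, 2) array of (x, y) edge points on the circle
    Returns the center (cx, cy), the radius, and the diameter.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve a*x + b*y + c = -(x^2 + y^2) in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx**2 + cy**2 - c)
    return (cx, cy), radius, 2.0 * radius

# Example: points sampled from a circle of radius 40 centred at (100, 80).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([100 + 40 * np.cos(angles), 80 + 40 * np.sin(angles)])
center, radius, diameter = fit_circle(pts)
print(center, radius, diameter)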
Brightness Analysis
The Brightness Average Value tool can measure the brightness of an ROI and then calculate the mean value.
Before You Start
Make sure that you have configured the camera parameters and base image, and added the Brightness Average Value tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default. Click or or to draw on the base
image; the position and size of the area can be adjusted manually. Click to set the whole base image as the detection area.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shielded Area, and then click to draw
polygons on the base image.
3.Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture
configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4.In Judge Method, set Upper Limit and Lower Limit. If the detection result is within the configured range, the result is OK. Otherwise,
the result is NG.
The Judge Method parameters vary according to the camera type. For some cameras, the Judge Method parameter
i is the similarity range, which is from 0% to 200%. If the value cannot satisfy your need, you can switch 200% to 900%.
The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on
900%.
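As a rough picture of what the tool computes, the following Python sketch averages the grey levels inside a rectangular detection area (optionally excluding a shielded mask) and applies an upper/lower-limit judgement; the ROI coordinates and limits are example assumptions.

import numpy as np

def brightness_average(image, roi, shield_mask=None):
    """Mean grey level of a rectangular ROI, ignoring shielded pixels.

    image       : 2-D greyscale array
    roi         : (x, y, w, h) rectangle of the detection area
    shield_mask : optional boolean array of the same size as image, True = shielded
    """
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w].astype(float)
    if shield_mask is not None:
        patch = patch[~shield_mask[y:y + h, x:x + w]]
    return float(patch.mean())

def judge(value, lower, upper):
    """OK if the measured value lies within [lower, upper], else NG."""
    return "OK" if lower <= value <= upper else "NG"

# Example with a synthetic 8-bit image and illustrative limits.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
mean_val = brightness_average(img, roi=(100, 100, 200, 150))
print(mean_val, judge(mean_val, lower=80, upper=180))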
Contrast Measurement
The Contrast Measurement tool can measure the contrast of the detection area.
Before You Start
Make sure that you have configured the camera parameters and base image, and added the Contrast Measurement tool.
Contrast is a relative value, representing the ratio of the brightest part to the darkest part of the detection area.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or or to draw on the base image; click to set the whole base image as the detection area.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shielded Area, and then click to draw
polygons on the base image.
3.Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture
configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to
Configure Base Image for details.
4.In Judge Method, set Upper Limit and Lower Limit. If the detection result is within the configured range, the result is OK. Otherwise,
the result is NG.
The Judge Method parameters vary according to the camera type. For some cameras, the Judge Method parameter
i is the similarity range, which is from 0% to 200%. If the value cannot satisfy your need, you can switch 200% to 900%.
The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on
900%.
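The manual defines contrast as the ratio of the brightest to the darkest part of the detection area. One plausible reading of that definition is sketched below in Python; the exact formula used by the camera is not documented here, so treat this as an assumption.

import numpy as np

def contrast_ratio(image, roi):
    """Ratio of the brightest to the darkest grey level inside the ROI.

    A small epsilon avoids division by zero on fully black regions.
    """
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w].astype(float)
    return float(patch.max() / max(patch.min(), 1e-6))

img = np.random.randint(10, 240, (480, 640), dtype=np.uint8)
print(f"contrast = {contrast_ratio(img, (50, 50, 100, 100)):.2f}")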
Width Measurement
The Width Measurement tool can detect two edges in the detection area and measure the perpendicular distance between the two
edges.
The parameters of the tool are divided into four categories: range settings, recognition settings, judge method, and extend parameter.
● Range settings determine the detection area:
Detection Area
Click and click on the two endpoints on the base image to measure the distance between the two edges.
Shielded Area
Specify the shielded area within the detection area. The shielded area will not be analyzed. Click Edit and then click to draw a
polygon on the base image.
Independent Fixture
When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This
function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
i For some cameras, the Similarity Range will be displayed here. When the similarity between the detected result and that of the base image is within the range, the result will be OK; otherwise, the result will be NG. By default, the similarity range is from 0% to 200%. If this range cannot satisfy your need, you can switch the upper limit from 200% to 900%. The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on 900%.
● If the parameter settings above cannot achieve the expected outcome, you can adjust the Extend Parameter.
Polarity
Color transition direction. Edge polarity is related to the arrow's direction of the ROI.
Black To White
Only detect the edge formed by low grayscale to high grayscale.
White To Black
Only detect the edge formed by high grayscale to low grayscale.
All
Detect any type of edge.
Width Extraction
The Widest
Only detect the edge pair with the longest distance.
The Narrowest
Only detect the edge pair with the shortest distance.
Manual
Drag the green line in the grayscale graph in the upper-right corner to select the set of edge points and fit the edge pair.
i If you select Manual, the polarity will be set to All and cannot be changed.
After setting the above parameters, you can go to Output → Tool Results to set which data to transmit to the communication tool.
The output data items include: module status, detection area center X/Y, detection area width/height/angle, line start point X/Y, line
end point X/Y, result similarity, line 0 start point X/Y, line 0 end point X/Y, line 0 angle, line 1 start point X/Y, line 1 end point X/Y, line
1 angle, pixel edge spacing, and base value.
P2L Measurement
The P2L Measurement tool can detect edges and lines, and then measure the perpendicular distance between a point (on a line/
edge) and a line/edge.
Before You Start
Make sure that you have configured the camera parameters and base image, and added the P2L Measurement tool.
Steps
1. Click and click on the base image to determine the point, and then click twice to draw the line, so as to measure the distance
between the point and the line.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shielded Area, and then click to draw
polygons on the base image.
3.Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture
configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4.Set the pattern recognition conditions in Recognition Settings.
Line Sensitivity Adjustment
Adjust the sensitivity of line detection. The grayscale graph in the upper-right corner shows the current effect, and the red area is
the sensitivity range.
Point Sensitivity Adjustment
Adjust the sensitivity of point detection. The grayscale graph in the upper-right corner shows the current effect, and the red area is
the sensitivity range.
5. In Judge Method, set the Upper Limit and the Lower Limit. If the detection result is within the configured range, the result is OK.
Otherwise, the result is NG.
The Judge Method parameters vary according to the camera type. For some cameras, the Judge Method parameter
i is the similarity range, which is from 0% to 200%. If the value cannot satisfy your need, you can switch 200% to 900%.
The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on
900%.
6.Set extra parameters in Extend Parameter if the result is not what you have expected.
● Line Parameters:
Line Rate
You can set the minimum value of the ratio of the fitting points to the total number of points. When the straightness of the
detected line exceeds this value, it is judged as a straight line. Otherwise, it is not judged as a straight line.
Edge Polarity
Color transition direction. Edge polarity is related to the arrow's direction of the ROI.
Black To White
Only detect the edge formed by low grayscale to high grayscale.
White To Black
Only detect the edge formed by high grayscale to low grayscale.
All
Detect any type of edge.
Edge Type
The Best
Only detect edge points with the largest gradient threshold and fit them into a line.
The First
Only detect edge points that are the nearest to the detection start point and fit them into a line.
The Last
Only detect edge points that are the nearest to the detection end point and fit them into a line.
Manual
Drag the green line in the grayscale graph in the upper-right corner to select the set of edge points and fit them into a line.
● Point Parameters:
Edge Polarity
Refer to the descriptions in Line Parameters.
7. Click Test Running to test the detection according to settings.
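The perpendicular distance reported by the tool can be reproduced with basic geometry. The following Python sketch is a generic point-to-line calculation, not the camera's internal code.

import math

def point_to_line_distance(point, line_start, line_end):
    """Perpendicular distance from a point to the infinite line through two points."""
    px, py = point
    x1, y1 = line_start
    x2, y2 = line_end
    dx, dy = x2 - x1, y2 - y1
    # |cross product| / |line direction| gives the perpendicular distance.
    return abs(dx * (py - y1) - dy * (px - x1)) / math.hypot(dx, dy)

# Example: distance from (30, 40) to the line through (0, 0) and (100, 0).
print(point_to_line_distance((30, 40), (0, 0), (100, 0)))  # 40.0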
Greyscale Size
The Greyscale Size tool can measure the area of pixels with greyscale value that is within the range you set.
Before You Start
Make sure that you have configured the camera parameters and base image, and added the Greyscale Size tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or or to draw on the base image; the position and size of the area can be adjusted manually. Click to set the
whole base image as the detection area.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shielded Area, and then click to draw
polygons on the base image.
3.Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture
configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4.Set the pattern recognition conditions in Recognition Settings.
Greyscale Threshold
Set the valid greyscale value range. Qualified area will be marked in green.
Reverse Range
When enabled, the unqualified area will be marked in green. It is used to measure the area that is not within the valid greyscale
range.
5. In Judge Method, set the Upper Limit and the Lower Limit. If the detection result is within the configured range, the result is OK.
Otherwise, the result is NG.
The Judge Method parameters vary according to the camera type. For some cameras, the Judge Method parameter
i is the similarity range, which is from 0% to 200%. If the value cannot satisfy your need, you can switch 200% to 900%.
The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on
900%.
6. Optional: Set extra parameters in Extend Parameter if the result is not what you have expected.
Area Range
Set the recognition area size. Unit: pixel.
7. Click Test Running to test the detection according to settings.
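As a minimal sketch of the measurement (with assumed thresholds and judge limits), the Python code below counts the pixels whose grey level falls inside the configured range, optionally reversing the range, and checks the resulting area against the limits.

import numpy as np

def greyscale_size(image, low, high, reverse_range=False):
    """Number of pixels whose grey level lies in [low, high].

    reverse_range : if True, count the pixels OUTSIDE the range instead.
    """
    mask = (image >= low) & (image <= high)
    if reverse_range:
        mask = ~mask
    return int(mask.sum())

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
area = greyscale_size(img, low=100, high=200)
lower_limit, upper_limit = 5000, 200000      # illustrative judge limits
print(area, "OK" if lower_limit <= area <= upper_limit else "NG")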
Line Angle
The Line Angle tool can measure the angle between a line and the X-axis.
The parameters of the tool are divided into four categories: range settings, recognition settings, judge method, and extend parameter.
● Range settings determine the detection area:
Detection Area
Click and click on two points on a line on the base image to get the starting point coordinates and intersection angle.
i
○ Click to show animated instructions on drawing ROI.
○ Click on the top of the base image window to view notes about the coordinate system.
Shielded Area
Specify the shielded area within the detection area. The shielded area will not be analyzed. Click Edit and then click to draw a
polygon on the base image.
Independent Fixture
When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This function
is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
i For some cameras, the Similarity Range will be displayed here. When the similarity between the detected result and that of the base image is within the range, the result will be OK; otherwise, the result will be NG. By default, the similarity range is from 0% to 200%. If this range cannot satisfy your need, you can switch the upper limit from 200% to 900%. The maximum similarity range is 900%. If the actual value is higher than 900%, the camera will respond based on 900%.
● If the parameter settings above cannot achieve the expected outcome, you can adjust the Extend Parameter for the line.
Line Rate
You can set the minimum value of the ratio of the fitting points to the total number of points. When the straightness of the
detected line exceeds this value, it is judged as a straight line. Otherwise, it is not judged as a straight line.
Edge Polarity
Color transition direction. Edge polarity is related to the arrow's direction of the ROI.
Black To White
Only detect the edge formed by low grayscale to high grayscale.
White To Black
Only detect the edge formed by high grayscale to low grayscale.
All
Detect any type of edge.
Edge Type
The Best
Only detect edge points with the largest gradient threshold and fit them into a line.
The First
Only detect edge points that are the nearest to the detection start point and fit them into a line.
The Last
Only detect edge points that are the nearest to the detection end point and fit them into a line.
Manual
Drag the green line in the grayscale graph in the upper-right corner to select the set of edge points and fit them into a line.
If you select Manual, the polarity will be set to All and cannot be changed.
After setting the above parameters, you can go to Output → Tool Results to set which data to transmit to the
i communication tool.
The output data items include: module status, line start point X/Y, line end point X/Y, detection area center X/Y,
detection area width/height/angle, result similarity, and base value.
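The reported angle is the inclination of the fitted line with respect to the X-axis. The Python sketch below shows the basic calculation; normalising the result to (-90°, 90°] is an assumption for the example, not necessarily the camera's convention.

import math

def line_angle_deg(start, end):
    """Angle between the line through two points and the X-axis, in degrees.

    Normalised to (-90, 90] so that a line and its reverse give the same angle.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx))
    if angle <= -90:
        angle += 180
    elif angle > 90:
        angle -= 180
    return angle

print(line_angle_deg((0, 0), (100, 100)))   # 45.0
print(line_angle_deg((100, 100), (0, 0)))   # 45.0 after normalisation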
i After drawing the detection area, a histogram will show on the top right of the image displaying the greyscale
difference between adjacent pixels.
2. Click Edit to draw a shielded area in the detection area.
3. Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to
the parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the
fixture configuration of the base image.Make sure you have enabled Fixture when configuring the base image. Refer to Configure
Base Image for details.
4.Select a sensitivity for the tool.
5.Configure the Judge Method, including judge basis, range low, and range high.
6.Set the Extend Parameters.
Parameter Description
Brightness Mode The reference for identifying an object.
Find Measure Mode The method of measurement.
Width
The width of the object.
Distance
The distance between two objects.
7. Click Test Running to test the tool.
The existence tools are as follows: the circle-existence tool, line-existence tool, spot-existence tool, edge-existence tool, pattern-
existence tool, and contour-existence tool.
Circle Existence
The circle-existence tool can find multiple points in a detection area and fit them into circles. It can also detect existing circles in the
area.
The parameter settings for the circle-existence tool include range settings, recognition settings, judge method settings, and extend
parameter settings.
● Range Settings: You can set area-related parameters.
○ Detection Area: Click , hover the cursor on the circle, and then click the mouse to display the recognized circle and its center
coordinates.
i Click beside Detection Area, and the animated tutorial will pop up in the upper-left corner of the live view panel.
○ Shielded Area: The shielded area will not be analyzed by the tool. Click Edit → and draw the shielded area on the base image
in the live view image.
○ Independent Fixture: When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture
configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
● In Recognition Settings, you can set recognition-related parameters.
○ Sensitivity: the sensitivity of circle detection. When setting this parameter, the greyscale layout map in the upper-right corner of
the live view panel will display the effect, and the red area represents the sensitivity range.
● In Judge Method, you can set parameters related to result judgment by selecting "Exist OK" or "Not Exist OK". For Not Exist OK, if
there are no circles that meet the configured parameter requirements, the result will be "OK".
● In Extend Parameter, you can set extra parameters if the result is not what you have expected.
○ Circle Rate: The minimum ratio of the number of points used to make up circles to the total number of points. When a circle's
circle rate is higher than the configured rate, it will be recognized as a circle, otherwise it will not. The more points selected, the
more accurate the circle will be.
○ Edge Polarity: It represents the direction of the grey scale transition (color change).
›Black to White: The tool recognizes edges from the area with lower grey scale to the area with higher grey scale.
›White to Black: The tool recognizes edges from the area with higher grey scale to the area with lower grey scale.
›All: The tool recognizes edges from area with higher grey scale to area with lower grey scale and from area with lower grey scale
to area with higher grey scale.
○ Edge Type: You can select "The Best", "The Biggest", "The Smallest", or "Manual".
›The Best: The tool will find the most suitable points to make up circles.
›The Biggest: The tool will find the points that are the most distant from the center to make up circles.
›The Smallest: The tool will find the points nearest to the center to make up circles.
›Manual: Based on the green lines in the greyscale layout map, you can manually select points to make up circles.
i If the edge type is set to "Manual", the edge polarity is set to "All" automatically and it cannot be changed.
After setting the above parameters, you can go to Output → Tool Results to set which data the circle-existence tool transmits to the communication tool.
The data that can be transmitted are as follows: module status, circle center X, circle center Y, circle radius, detection area center X,
detection area center Y, detection area width, detection area height, and detection area angle.
Line Existence
The line-existence tool can find points with specific features to fit them into straight lines. The parameter settings for the line-
existence tool include range settings, recognition settings, judge method settings, and extend parameter settings.
● Range Settings: You can set area-related parameters.
○ Detection Area: Click , and then click two different positions on the base image to draw a detection area.
i
›On the top left of the live view window, click next to Base Image to view descriptions of the coordinate axes.
›Click beside Detection Area, and the animated tutorial will pop up in the upper-left corner of the preview window.
○ Shielded Area: The shielded area will not be analyzed by the tool. Click Edit → and draw the shielded area on the base image
in the live view image.
○ Independent Fixture: When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture
configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
● In Recognition Settings, you can set recognition-related parameters.
○ Line Sensitivity Adjustment: Here you can adjust the sensitivity of line detection. When adjusting this parameter, the greyscale
layout map in the upper-right corner of the preview window will display the effect of adjustment, and the red area represents the
sensitivity range.
● In Judge Method, you can set parameters related to result judgment by selecting "Exist OK" or "Not Exist OK". For Not Exist OK, if
there are no lines that meet the configured parameter requirements, the result will be "OK".
● In Extend Parameter, you can set extra parameters if the result is not what you have expected.
○ Line Rate: the minimum ratio of the number of points used to make up lines to the total number of points. When a line's line rate
is higher than the configured rate, it will be recognized as a line, otherwise it will not. The more points selected, the more accurate
the line will be.
○ Edge Polarity: It represents the direction of the grey scale transition, and defines the direction to look for the edges in a detection
area. Here the edge is the boundary between two areas with different greyscales.
›Black to White: The tool recognizes edges from the area with lower grey scale to the area with higher grey scale.
›White to Black: The tool recognizes edges from the area with higher grey scale to the area with lower grey scale.
›All: The tool recognizes edges from area with higher grey scale to area with lower grey scale and from area with lower grey scale
to area with higher grey scale.
○ Edge Type: You can select "The Best", "The First", "The Last", or "Manual".
›The Best: The tool will find the most suitable points to make up lines.
›The First: The tool will find the points nearest to the start point to make up lines.
›The Last: The tool will find the points nearest to the end point to make up lines.
›Manual: Based on the green lines in the greyscale layout map, you can manually select points to make up lines.
i If the edge type is set to "Manual", the edge polarity is set to "All" automatically and it cannot be changed.
After setting the above parameters, you can go to Output → Tool Results to set which data the line-existence tool transmits to the communication tool.
The data that can be transmitted are as follows: module status, line start point X, line start point Y, line end point X, line end point Y, line
angle, detection area center X, detection area center Y, detection area width, detection area height, detection area angle.
Spot Existence
The Spot Existence tool is used for analyzing whether there are areas with qualified greyscale in the detection area.
Before You Start
Make sure that you have configured the camera parameters and base image, and added the Spot Existence tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or or to draw on the base image; the position and size of the area can be adjusted manually. Click to set the
whole base image as the detection area.
2.Optional: If you need to shield areas in the detection area, click Edit on the right of Shield Area, and then click to draw
polygons on the base image.
3.Optional: Enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4. Set recognition-related parameters in Recognition Settings.
Greyscale Threshold
Set the greyscale range for detection area. The area with greyscale in the range will be recognized.
Reverse Range
The area with greyscale not in the range will be recognized.
5. Set the result judgment parameters in Judge Method.
Exist OK
The result is OK if the area that meets the configured requirement is detected.
Not Exist OK
The result is OK if the area that meets the configured requirement is not detected.
6. Optional: Set extra parameters in Extend Parameter if the result is not what you have expected.
Area Range
Set the area range of the recognized areas. The unit is pixel.
7. Click Test Running to test the detection according to settings.
Edge Existence
The tool will detect edges with qualified sensitivity in the detection area.
You can configure range settings, recognition settings, judge method, and extend parameter for the tool.
● Range Settings: You can set area-related parameters.
○ Detection Area: Click and click two positions on the detected edge.
i
›On the top left of the live view window, click next to Base Image to view descriptions of the coordinate axes.
›Click beside Detection Area, and the animated tutorial will pop up in the upper-left corner of the preview window.
○ Shielded Area: The shielded area will not be analyzed by the tool. Click Edit → and draw the shielded area on the base image
in the live view image.
○ Independent Fixture: When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture
configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
● In Recognition Settings, you can set recognition-related parameters.
○ Sensitivity Adjustment: Adjust the sensitivity of the tool. When adjusting, a greyscale map will be displayed on the top right of the
live view image showing the adjusting result, and the red area shows the sensitivity range.
● In Judge Method, you can set the parameters related to result judgment by selecting Exist OK or Not Exist OK.
● In Extend Parameter, you can set extra parameters if the result is not what you have expected.
○ Edge Polarity: It represents the direction of the greyscale transition and defines the direction to look for the edges in a detection
area. Here the edge is the boundary between two areas with different greyscales.
›Black to White: The tool recognizes edges from the area with lower greyscale to the area with higher greyscale.
›White to Black: The tool recognizes edges from the area with higher greyscale to the area with lower greyscale.
›All: The tool recognizes edges from the area with higher greyscale to the area with lower greyscale and from the area with lower
greyscale to the area with higher greyscale.
After setting the above parameters, you can go to Output → Tool Results to set which data the tool transmits to the communication tool.
The data that can be transmitted are as follows: module status, detection area center X, detection area center Y, detection area width,
detection area height, detection area angle, edge point X, edge point Y, edge point quantity, line start point X, line start point Y, line
end point X, line end point Y, line angle.
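Edge polarity can be pictured as the sign of the grey-level change along the scan direction. The simplified 1-D Python sketch below illustrates the idea; the sensitivity value is an invented threshold, not the tool's parameter scale.

import numpy as np

def find_edges(profile, polarity="All", sensitivity=20):
    """Indices of edges in a 1-D grey-level profile.

    polarity    : "Black to White" (dark -> bright), "White to Black" (bright -> dark) or "All"
    sensitivity : minimum grey-level jump between neighbouring samples to count as an edge
    """
    gradient = np.diff(profile.astype(float))
    if polarity == "Black to White":
        hits = gradient >= sensitivity          # rising grey level
    elif polarity == "White to Black":
        hits = gradient <= -sensitivity         # falling grey level
    else:
        hits = np.abs(gradient) >= sensitivity  # either direction
    return np.flatnonzero(hits)

profile = np.array([20, 22, 21, 180, 182, 181, 30, 28], dtype=np.uint8)
print(find_edges(profile, "Black to White"))   # [2]  (dark -> bright step)
print(find_edges(profile, "White to Black"))   # [5]  (bright -> dark step)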
Pattern Existence
With the pattern-existence tool, the Software can tell you whether a specific pattern exists in the detection area.
You can configure range settings, recognition settings, judge method, and extend parameter for the tool.
● Range Settings: You can set area-related parameters.
○ Detection Area: The area analyzed by the tool on the image. By default, the tool analyzes the whole image. Click to draw the
detection area on the base image on the right; you can change its position and size manually. Click to set the whole base image
as the detection area.
○ Shielded Area: The shielded area will not be analyzed by the tool. Click Edit → to draw the shielded area on the base image
on the right.
○ Independent Fixture: When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture
configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
● In Search Settings, you can set search-related parameters.
○ Template Area: The same as the template area in the step of configuring the base image.
○ Shielded Area: The same as the shielded area in the step of configuring the base image.
○ Template Sensitivity:
›Auto: Set the sensitivity by setting the degree. The higher the sensitivity, the better the fixture quality.
›Manual: Set the sensitivity by setting the Coarse Granularity and the Greyscale Threshold.
● In Judge Method, you can set the parameters related to result judgment by selecting Exist OK or Not Exist OK.
● In Extend Parameter, you can set extra parameters if the result is not what you have expected.
○ Min. Score: The minimum similarity between the template area and the analyzed area in the image. Only when the similarity is
higher than the Min. Score can the target be recognized. The Min. Score ranges from 0 to 1, and 1 indicates that the analyzed area
in the image is exactly the same as the template area.
○ Match Polarity: When the polarity of the analyzed area's edge is different from that of the template area, set the Match Polarity to
Ignored to make sure the target can be found. If ignoring the polarity is not necessary, you can set the Match Polarity to Considered
for a quicker search.
○ Angle Range: If the analyzed object is rotated and the rotation angle is smaller than the value you set, the object can be
recognized; otherwise it cannot be recognized.
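The Min. Score behaves like a threshold on a normalised match score. A rough analogue, using OpenCV's normalised correlation rather than the camera's proprietary matcher, is sketched below in Python; rotation handling (Angle Range) is omitted.

import cv2
import numpy as np

def pattern_exists(image, template, min_score=0.8):
    """Return (exists, score): True if the template matches anywhere with score >= min_score.

    min_score plays the role of the tool's Min. Score (0..1) in this simplified sketch.
    """
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, _ = cv2.minMaxLoc(result)
    return max_score >= min_score, float(max_score)

# Example with a synthetic image that contains the template.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (60, 60), (120, 120), 255, -1)
templ = img[50:130, 50:130].copy()
print(pattern_exists(img, templ, min_score=0.8))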
Contour Existence
You can use this tool to verify the presence of a contour through contour matching.
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the Contour Existence tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or to draw on the base image, the position and size of the area can be adjusted manually.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4. Select the scale mode, threshold mode, and chain mode.
i By default, Auto is selected as the scale mode. For the manual mode, configure the following parameters.
Velocity Scale
The higher this value, the higher the characteristic scale and the fewer the points picked from the contour, but the faster the
feature matching.
Characteristic Scale
The fineness of picking points from the contour.
i This value should be lower than the velocity scale. The lower the value, the finer the point-picking.
Threshold Mode
Used for setting the greyscale threshold of contour edges. The area with greyscale within the threshold will be detected.
i The higher this value, the fewer the qualified contour edges, and the points on edges may be filtered out.
Chain Mode
During contour detection, only when the chain length of a contour is higher than the minimum chain length can it be identified as a valid contour.
5. Set the result judgment parameters in Judge Method.
Exist OK
If the area that meets the configured requirement is detected, the result is OK.
Not Exist OK
If the area that meets the configured requirement is not detected, the result is OK.
6. Optional: Set extra parameters in Extend Parameter if the result is not what you have expected.
Minimum Score
The threshold of similarity between the feature template and the detected target. A higher score indicates a greater similarity. When
the similarity score reaches or exceeds the threshold, the target can be found.
Match Polarity
The color change from detected target to its background. When the target's polarity is different from that of the feature template,
select Ignored to make sure the target can be found. Otherwise, select Considered to shorten the finding time.
Angle Range
If the target's angle changes and the change is within this range, the target can still be recognized; otherwise, it cannot be recognized.
Algorithm Timeout
The maximum duration for running the algorithm.
● If the algorithm is still working when the working time reaches the value you set, the detection will stop and the Software will
output NG.
● If you set this parameter to 0, there will be no limitation for the algorithm's working duration.
7. Click Test Running to test the detection according to settings.
Counting tools are as follows:the spot-counting tool, edge-counting tool, pattern-counting tool, profile-counting tool, and color-
counting tool.
Spot Count
The spot-counting tool uses the Blob method to recognize and count multiple spots whose greyscales are within the configured
greyscale range.
i ● Blob analysis refers to the process of detecting, locating, and analyzing a target object by measuring its greyscale value. It can
output the information about some of the object's features, such as existence, number, location, shape, direction, and the
topological relationship among Blobs.
● Make sure that you have configured the camera parameters and base image, and added the Spot Count tool.
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or or to draw on the base image; the position and size of the area can be adjusted manually. Click to set the
whole base image as the detection area.
2.Optional: If you need to shield areas in the detection area, click Edit on the right of Shield Area, and then click to draw
polygons on the base image.
3.Optional: Enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
Greyscale Threshold
Here you can set the greyscale range for the detection area, so that the areas whose greyscale is within the greyscale range will be
recognized.
Reverse Range
Once enabled, the greyscale value outside the configured greyscale range will be the valid range, so that the camera will recognize
the areas whose greyscales are outside the configured greyscale range.
Identify Num
Here you can set the number of areas (whose greyscale is within the configured greyscale range) to be recognized.
i If the number of qualified areas exceeds the configured number, the camera will select the areas to output based on
their sizes.
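For intuition, a simplified version of the Blob method can be sketched with a greyscale threshold followed by connected-component labelling. The Python code below uses OpenCV and assumed thresholds; the actual tool may differ in detail.

import cv2
import numpy as np

def spot_count(image, grey_low, grey_high, reverse_range=False,
               min_area=20, max_area=100000, identify_num=10):
    """Count blobs whose grey level and area fall inside the configured ranges."""
    mask = ((image >= grey_low) & (image <= grey_high)).astype(np.uint8) * 255
    if reverse_range:
        mask = 255 - mask                                # recognise the area outside the range
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]                  # label 0 is the background
    areas = areas[(areas >= min_area) & (areas <= max_area)]
    areas = sorted(areas, reverse=True)[:identify_num]   # keep the largest blobs if too many qualify
    return len(areas), areas

img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (50, 50), 15, 200, -1)
cv2.circle(img, (140, 120), 10, 180, -1)
print(spot_count(img, grey_low=150, grey_high=255))   # -> (2, [areas of the two spots])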
Edge Count
The edge-counting tool can recognize and count edges based on the configured edge-counting sensitivity.
The parameter settings for the edge-counting tool include range settings, recognition settings, judge method settings, and extend
parameter settings.
● In Range Settings, you can set area-related parameters.
Detection Area
Click , and then click two different positions on the base image to draw a detection area.
i
›On the top left of the live view window, click next to Base Image to view descriptions of the coordinate axes.
›Click beside Detection Area, and the animated tutorial will pop up in the upper-left corner of the preview window.
Shielded Area
Here you can shield some areas for them not to be analyzed. Click Edit, and then click to draw polygons on the base image.
Independent Fixture
When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This
function is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
Pattern Count
The pattern-counting tool can recognize and count patterns via the template-matching method.
The parameter settings for the pattern-counting tool include range settings, recognition settings, judge method settings, and extend
parameter settings.
Range Settings
You can set area-related parameters.
Detection Area
Here you can set the area that the tool analyzes. It is set to analyze the whole base image by default. Click to draw boxes on the base
image; the position and size of the boxes can be adjusted manually. Click to set the whole base image as the detection area.
Shielded Area
Here you can shield some areas for them not to be analyzed. Click Edit, and then click to draw polygons on the base image.
Independent Fixture
When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This function
is enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
Search Settings
You can set search-related parameters.
Template Area
Click Edit, and then click or to draw boxes or polygons on the base image; the position and size of the graphics can be
adjusted manually.
Contour Count
You can use this tool to determine the presence of contours through contour matching and to count the number of contours.
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the Contour Count tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click or to draw on the base image, the position and size of the area can be adjusted manually.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4. Select the scale mode, threshold mode, and chain mode.
i By default, Auto is selected as the scale mode. For the manual mode, configure the following parameters.
Velocity Scale
The higher this value, the higher the characteristic scale and the fewer the points picked from the contour, but the faster the
feature matching.
Characteristic Scale
The fineness of picking points from the contour.
i You can set only integers that are not larger than the velocity scale as the characteristic scale value. When the value is set to 1,
it is the finest scale. Generally, adjusting this value will result in a significant change in the number of contour points. The lower
the value, the finer the point-picking.
Threshold Mode
Used for setting the greyscale threshold of contour edges. The area with greyscales within the threshold will be detected.
i The higher this value, the fewer the qualified contour edges, and the points on edges may be filtered out.
Chain Mode
It is used to set the minimum chain length for the detection area. Only when the chain length exceeds the minimum chain length
will the contour be retained.
5. Set the Identify Num.
i If the number of qualified contours exceeds the value you set, only the best contours, up to that value, will be output.
6. Set the quantity range.
i ● If the number of identified contours is within the quantity range, the detection result will be OK; otherwise, the result will be NG.
● The maximum quantity range is subject to the Identify Num.
7. Optional: Set extra parameters in Extend Parameter if the result is not what you have expected.
Minimum Score
The threshold of similarity between the feature template and the detected target. A higher score indicates a greater similarity.
When the similarity score reaches or exceeds the threshold, the target can be found.
Match Polarity
The color change from detected target to its background. When the target's polarity is different from that of the feature template,
select Ignored to make sure the target can be found. Otherwise, select Considered to shorten the finding time.
Angle Range
If the detected target has angle variations (such as rotation), you can set the relative angle range. The angle can be adjusted as
needed. The default range is -180° to 180° .
Overlap Rate
The maximum allowed overlap proportion of two overlapped targets when detecting multiple targets. The higher the value, the
higher the allowed overlap rate.
Algorithm Timeout
The maximum duration for running the algorithm.
● If the actual algorithm processing time exceeds this value, the algorithm will stop detecting and output NG.
● If you set this parameter to 0, there will be no limitation for the algorithm's working duration.
8. Click Test Running to test the detection according to settings.
Color Count
This tool is used for identifying colors and counting qualified colors.
Before You Start
● Make sure the greyscale image mode of the color camera is disabled.
● Make sure that you have configured the camera parameters and the base image, and added the Color Count tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default. Click Edit.
Click or to draw on the base image, the position and size of the area can be adjusted manually.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4. Add colors to count and configure related parameters.
1) Click in the Color List to add a color to count.
i You can configure these parameters to fine-tune the selected colors using the color eyedropper tool.
4) Optional: To select the complement of the currently set color range as the picking area, enable Color Inversion.
5) Optional: To configure both the hole filling area and the color patch area, enable Using Global Area Parameters.
i If you disable this parameter, you should configure the hole filling area and the color patch area for each color.
i Holes refer to the eight-connected domains of non-target pixels that are completely surrounded by target pixels.
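For intuition, color picking of this kind is often implemented as a per-channel range check in HSV space. The Python sketch below, using OpenCV with assumed bounds, is only an analogy to the Color List and Color Inversion settings described above.

import cv2
import numpy as np

def color_mask(image_bgr, lower_hsv, upper_hsv, color_inversion=False):
    """0/255 mask of pixels inside the HSV range, optionally inverted."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv, dtype=np.uint8),
                       np.array(upper_hsv, dtype=np.uint8))
    if color_inversion:
        mask = cv2.bitwise_not(mask)     # pick the complement of the configured range
    return mask

# Example: count pixels of a roughly "red" patch in a synthetic image.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[20:60, 20:60] = (0, 0, 255)                       # red square in BGR
mask = color_mask(img, lower_hsv=(0, 100, 100), upper_hsv=(10, 255, 255))
print(int(np.count_nonzero(mask)))                    # 1600 pixels recognised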
The recognition tools include the Color Contrast tool, OCR tool, Color Recognition tool, Classification Registration tool, Code
Recognition tool, and Object Detection Recognition tool.
Color Contrast
In the detection ROI range, identify color information according to color template.
i This function is available only when the Greyscale Image Output function of the color device is disabled. Only color cameras
support this tool. If you enable the Greyscale Image Output function for a color camera, the color contrast tool can still be
configured, but the output result will be NG.
You can configure range settings, recognition settings, judge method, and extend parameter for the tool.
● Range Settings: You can set area-related parameters.
Detection Area
The area analyzed by the tool on the image. By default, the tool analyzes the whole image.
Click or to draw the detection area on the base image on the right; you can change its position and size manually. Click to
set the whole base image as the detection area.
Shielded Area
The shielded area will not be analyzed by the tool. Click Edit → and draw the shielded area on the base image in the live view
image.
Independent Fixture
When enabled, target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This function is
enabled by default and automatically subscribes to the fixture configuration of the base image.
i Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
OCR
The OCR tool can recognize the text in the detection area.
Before You Start
The camera parameters and base image are configured. The OCR tool should be added.
Steps
i The list name only allows uppercase and lowercase letters, digits, and underscores.
Character Content
When you set Judge Basis to Reference Character, you need to set the content of the string. If the specified content is recognized,
the process result is OK. Otherwise, the result is NG.
7.Set more parameters.
● Select a model type according to the camera model.
i The list name only allows uppercase and lowercase letters, digits, and underscores.
Color Recognition
The Color Recognition tool recognizes the object color based on the trained color templates. When the colors of different objects are
obviously different, the tool can implement the accurate classification of objects and output the result.
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the Color Recognition tool.
Steps
1. Set the Detection Area according to your actual needs. It is set to analyze the whole base image by default.
Click or to draw a rectangle or circle detection area on the base image, or click to set the whole base image as the
detection area.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shielded Area and draw areas.
3. Optional: You can enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture
configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4.Click on the right of Template List to open the Template Training window.
1)Click Add Image to import pictures from your PC.
● The pictures to be imported should be in png, jpg, or bmp format. Up to 10 pictures can be imported.
i ● You can also click Add Current Image to add the current image in the camera. You can click to delete the image or click /
to switch the image.
2) Add tags to a picture; the detailed operations are shown in the GIF below.
i The template file to be imported should be trained by the program, which is supported by the Software, and the file
name can only contain characters, digits, and underlines.
i The configured label name should be according to the added tags during template training.
Classification Registration
This tool is used for categorizing images after learning imported OK samples. You should train a model by following the steps below
before the categorization.
Before You Start
● The camera parameters and base image are configured.
● Fixture should be enabled and the template area should be drawn.
● The classification registration tool should be added.
Steps
1. Draw a Detection Area. By default, the whole image is the detection area. You can click to draw a rectangle detection area.
2. Optional: Enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
3. Configure parameters in Parameter Settings.
First K Classifications
The K refers to the classifications output by the tool. For example, if the user registers 5 classifications, and sets this parameter as 3,
the tool will output the first 3 classifications in the end.
Minimum Similarity
When classifying images by this tool, only when images of which the Min. similarity is equal to or above the value you set, can they
be displayed in the live view window.
4. Train the classification registration model.
– Click to import a model from the PC.
1. Import the training images (up to 50).
● : import the image acquired in real time.
● : import images stored in the camera. You can set the searching conditions to search images in the camera. The Software
supports importing multiple images at a time.
i See Acquired Image Management for details about searching images stored in cameras.
● : import images from the PC. The image format should be .jpg, .png, or .bmp.
i Click on the right label, and view the thumbnail of the label sample.
4. After the image labeling, view the labeling records on the lower-right.
● : delete the selected labeling records.
● : clear all labeling records.
7. After the classification registration, on the Registration Information area, view the total classifications, total registered targets, and
total registered images.
5. In the Result Judgment area, enter the classification label as the judgment basis.
Min. Score
Set the Min. Score. The detection result will be OK if the detected image's minimum score reaches or exceeds the configured min
score, or it will be NG.
Type Judge
Set the Classification Label. The detection result will be OK if the detected image's label is the same as the configured one, or it will
be NG.
i The classification label should be set according to the training label and real needs.
6. On the More Parameters area, select a different fundamental model file.
–Select the default system model.
–Click Import to import the model from the PC.
i ● The name of the model file should only contain digits, uppercase and lowercase letters, and underscores.
● For details about training models by deep learning modules, refer to the user manual of Vision Train.
7. Click Test Running to test the detection according to settings.
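Conceptually, First K Classifications and Minimum Similarity act as a top-K filter on per-class similarity scores. A minimal Python sketch follows; the labels and score values are invented for the example.

def top_k_classifications(scores, k=3, min_similarity=0.6):
    """Return up to k (label, similarity) pairs whose similarity meets the threshold.

    scores : dict mapping classification label -> similarity in [0, 1]
    """
    qualified = [(label, s) for label, s in scores.items() if s >= min_similarity]
    qualified.sort(key=lambda item: item[1], reverse=True)
    return qualified[:k]

scores = {"bottle": 0.93, "cap": 0.71, "label": 0.58, "box": 0.65, "can": 0.40}
print(top_k_classifications(scores, k=3, min_similarity=0.6))
# [('bottle', 0.93), ('cap', 0.71), ('box', 0.65)]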
Code Recognition
The Code Recognition tool can identify 1D code and 2D code in a detection area.
Before You Start
Make sure that you have configured the camera parameters and base image, and added the Code Recognition tool.
Steps
1. Set the Detection Area as needed. It is set to analyze the whole base image by default.
Click to draw an area on the base image or click to set the whole base image as the detection area.
4.Set the criterion for identifying codes and parameters related to result judgment in Judge Method.
● If you select Code Existence as the judge basis, you can select Existence Is OK or Not Existence Is OK as the judge method.
● If you select Min. Score as the judge basis, you need to set the minimum score for the codes to be identified. The detection result
will be OK if a code's minimum score reaches or exceeds the configured min score, or it will be NG.
6. Optional: Set extra parameters if the result is not what you have expected, including the filtering rule, 1D code, 2D code, and
timeout settings in Extend Parameters. Related parameters are described below by category:
Filter Rule
Code Length
The code length that can be analyzed. The tool only identifies the codes whose lengths are within the configured range.
Start With
When enabled, you need to enter characters in the box. The tool only outputs code information that starts with the entered characters, or the code information will be filtered out.
End With
When enabled, you need to enter characters in the box. The tool only outputs code information that ends with the entered characters, or the code information will be filtered out.
Include
When enabled, you need to enter characters in the box. The tool only outputs code information that contains the entered characters, or the code information will be filtered out.
Exclude
When enabled, you need to enter characters in the box. The tool only outputs code information that does not contain the entered characters, or the code information will be filtered out.
Digit Only
When enabled, the tool identifies the codes that consist of digits only.
Letter Only
When enabled, the tool identifies the codes that consist of letters only.
1D Code
Polarity
The type of 1D code that can be identified. You can select Black Code On White or White Code On Black.
2D Code
Polarity
The type of 2D code that can be identified. You can select Black Code On White, White Code On Black, or Arbitrarily. Arbitrarily indicates that both types of 2D code can be identified.
QR Distortion
If the QR code is on a bottle or crumpled paper, you need to select Distortion; otherwise, you can select Non Distortion.
Timeout Settings
Algorithm Timeout (ms)
The maximum time consumed by the algorithm. If the actual time consumed exceeds the configured value, the tool will stop identifying codes and output NG. You can set the value of this parameter to 0 to disable this function.
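These filter rules are plain string checks on the decoded code content. The Python sketch below illustrates the logic; the parameter names are paraphrased for the example and are not the Software's API.

def passes_filter(code, length_range=(1, 64), start_with="", end_with="",
                  include="", exclude="", digit_only=False, letter_only=False):
    """Return True if a decoded code string satisfies the configured filter rules."""
    if not (length_range[0] <= len(code) <= length_range[1]):
        return False
    if start_with and not code.startswith(start_with):
        return False
    if end_with and not code.endswith(end_with):
        return False
    if include and include not in code:
        return False
    if exclude and exclude in code:
        return False
    if digit_only and not code.isdigit():
        return False
    if letter_only and not code.isalpha():
        return False
    return True

print(passes_filter("SN20240131", start_with="SN"))   # True
print(passes_filter("SN20240131", exclude="2024"))    # False, contains excluded characters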
2. Optional: Enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
3.In Model Training, register and train the model.
– Click to import on the right of Register Train Set to import a model to the device.
–Click Register Manually to register models manually.
1. Import up to 10 images for training.
● : import real-time captured images.
● : import images saved in the device. You can search for wanted images by setting the search conditions.
● : import images from the PC. The image format should be JPG, BMP, or PNG.
2. In the Label List, click to add tags.
● : rename the added classification.
● : delete the added classification.
3. Select a to-be-labeled image and draw an ROI in the live view window.
i Hover the cursor on a tag and click to view the thumbnail of a tag.
i If you enable this, the imported images will be saved in the PC together with the model file after training. Otherwise,
only the model file will be saved.
6. Click Train.
i Ensure at least 1 valid sample per category for successful model training.
7. Check the total number of labels, registered targets, registered images in the Registered Information area.
4. Set the Judge Basis.
Judge by Quantity
Configure Quantity Range. If the number of recognized targets is within the configured range, the result will be OK, or the result will be NG.
Min. Score
If the detection score is higher than the entered score, the result will be OK, or it will be NG.
Type Judge
If the detected type is the same as the type you set, the result will be OK, or it will be NG.
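For reference, the three judge bases amount to simple comparisons on the detection output. The Python sketch below only illustrates the idea; the names and the per-target handling of Min. Score are assumptions for the example, not the Software's API.

# Illustrative sketch of the three judge bases; names and details are
# assumptions for the example, not the Software's API.
def judge(targets, basis, quantity_range=(1, 10), min_score=0.6, expected_type=""):
    """targets: list of dicts such as {"type": "cap", "score": 0.87}."""
    if basis == "Judge by Quantity":
        ok = quantity_range[0] <= len(targets) <= quantity_range[1]
    elif basis == "Min. Score":
        # Assumption for the sketch: every detected target must reach the score.
        ok = bool(targets) and all(t["score"] >= min_score for t in targets)
    elif basis == "Type Judge":
        ok = bool(targets) and all(t["type"] == expected_type for t in targets)
    else:
        raise ValueError("unknown judge basis")
    return "OK" if ok else "NG"

detections = [{"type": "cap", "score": 0.91}, {"type": "cap", "score": 0.75}]
print(judge(detections, "Judge by Quantity", quantity_range=(2, 5)))  # OK
print(judge(detections, "Min. Score", min_score=0.8))                 # NG (0.75 < 0.8)
print(judge(detections, "Type Judge", expected_type="cap"))           # OK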
5. Via Import Fundamental Detect Model File and Import Fundamental Train Model File, enable different fundamental models. By default, the Software uses the model provided by the system. You can click Import to import a fundamental model.
i The name of the imported model file can only contain lowercase and uppercase letters, digits, and underscores.
6. Optional: Set extra parameters in Extend Parameter if the result is not what you expected.
Maximum Number to Find
The maximum number of targets that can be found and output.
Minimum Score
The threshold of similarity between the feature template and the detected target. A higher score indicates a greater similarity. When
the similarity score reaches or exceeds the threshold, the target can be found.
Overlap Rate
The maximum allowed overlap proportion of two overlapped targets when detecting multiple targets. The higher the value, the
higher the allowed overlap rate.
Sort Type
The sequence of outputting targets.
Angle Enable
If the detected target has angle variations (such as rotation), you can set the relative angle range. The angle can be adjusted as
needed. The default range is -180° to 180° .
Width/Height/Area Enable
The width/height/area range of the detected targets, in pixels. If you enable this parameter, only objects within the configured range of width/height/area can be detected. By default, the width/height range is set from 1 to the maximum horizontal/vertical resolution of the device, but it can be customized.
Outside Filter Enable
If a small part of the target extends beyond the detection area, you can configure this parameter to decide whether the target is
recognized. When it is disabled, the target can be recognized. When it is enabled, the target cannot be recognized.
Optimal Model Size Enable
Change this value if the value generated automatically is not precise.
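As a rough picture of how the angle and width/height/area limits act together, the sketch below filters a list of candidate matches in Python. It is an illustration only; every name and default range in it is invented for the example.

# Illustration of width/height/area and angle filtering of match candidates.
# All names and default ranges are invented for the example.
def within(value, limit_range, enabled):
    return (not enabled) or (limit_range[0] <= value <= limit_range[1])

def filter_candidates(candidates,
                      angle_range=(-180.0, 180.0), angle_enable=False,
                      width_range=(1, 4096), width_enable=False,
                      height_range=(1, 3072), height_enable=False,
                      area_range=(1, 4096 * 3072), area_enable=False):
    kept = []
    for c in candidates:  # c: {"angle": deg, "width": px, "height": px}
        area = c["width"] * c["height"]
        if (within(c["angle"], angle_range, angle_enable)
                and within(c["width"], width_range, width_enable)
                and within(c["height"], height_range, height_enable)
                and within(area, area_range, area_enable)):
            kept.append(c)
    return kept

candidates = [{"angle": 5.0, "width": 120, "height": 80},
              {"angle": 95.0, "width": 120, "height": 80}]
# Only targets rotated within ±30° pass the angle filter.
print(filter_candidates(candidates, angle_range=(-30, 30), angle_enable=True))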
7. Click Test Running to test the detection according to settings.
Logic tools include the If Module tool, Condition Judge tool, Logic Judge tool, Combination Judge tool, Character Comparison tool, and Calculator tool. This module mainly implements the related functions via logical calculation.
If Module
In certain conditions, the Software can run the branch tools in the If module.
Before You Start
The camera parameters and base image are configured. The If module is added.
Steps
1. Click on the right of If Module.
2. Optional: Customize a name for this tool in the Custom Name field. By default, the tool is named If Module.
3. Set the Condition.
– Module Status: Click to select a module or tool, and then select OK or NG at the Result field.
– Communication String: Click to select a module or tool, and then enter the result.
i You can select Camera, Base Image, and tools which are not added to the branch.
● You can add all the other tools supported by the camera, except If Module, as branch tools.
i ● The configuration of branch tools is the same as that of tools outside the If module.
● Select a branch tool and then click to remove it from the If module.
Condition Judge
The Condition Judge tool is used for judging whether the status of the selected module is qualified and generating a tool result.
Before You Start
The camera parameters and base image are configured. The Condition Judge tool is added.
Steps
1. Click to select a condition that can be output.
i ● You can select camera image, base image and the tools which have been added.
● You can select multiple conditions.
2. Set the Valid Range for each condition. If the output results are in the valid range, the tool outputs OK, or the tool outputs NG.
3. Select the Operation Type.
– All Match: If the output results of all the conditions are in the valid range, the tool outputs OK, or the tool outputs NG.
– Any Match: If the output result of any condition is in the valid range, the tool outputs OK, or the tool outputs NG.
4. Click Test Running to test the tool.
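In other words, All Match behaves like a logical AND over the conditions, and Any Match like a logical OR. The minimal Python sketch below is purely illustrative; the function name and data layout are assumptions for the example.

# Illustrative sketch of the Condition Judge operation types.
def condition_judge(values, valid_ranges, operation="All Match"):
    """values/valid_ranges: one measured value and one (low, high) range per condition."""
    in_range = [lo <= v <= hi for v, (lo, hi) in zip(values, valid_ranges)]
    if operation == "All Match":       # logical AND over all conditions
        ok = all(in_range)
    else:                              # "Any Match": logical OR
        ok = any(in_range)
    return "OK" if ok else "NG"

print(condition_judge([3.2, 15], [(3.0, 3.5), (10, 12)], "All Match"))  # NG
print(condition_judge([3.2, 15], [(3.0, 3.5), (10, 12)], "Any Match"))  # OK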
Logic Judge
The Logic Judge tool can make comprehensive judgments based on multiple visual tools and give the final detection result (OK or NG).
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the Logic Judge tool.
Steps
1. Click under the Operation Data and select a module to subscribe to its status. You can subscribe to the statuses of multiple
modules.
i ● You can subscribe to statuses of the camera image module, reference image module, and other added visual tools.
● You can click to delete the selected data.
Combination Judge
Combination Judge outputs specified results by judging the results output by different tools in a combination.
Before You Start
The camera parameters and base image are configured. The Combination Judge tool is added.
Steps
1. Click Edit to open the Edit window.
2. Click on the right of Conditions to add a condition.
i You can select the data output by camera images, base images, and tools which have been added.
3. Click on the top right of the window to add a valid range and output result.
4. Set the valid ranges.
i If there are multiple duplicated valid ranges, the Software will only execute the output result of the valid range that was set earliest.
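This precedence can be pictured as a first-match lookup over an ordered rule list. The Python sketch below is an illustration only; the rule format and names are assumptions made for the example.

# Illustration of Combination Judge: the first rule (in the order the valid
# ranges were added) whose range contains the condition value decides the output.
def combination_judge(value, rules):
    """rules: ordered list of ((low, high), output) pairs, earliest first."""
    for (low, high), output in rules:
        if low <= value <= high:
            return output          # earliest matching valid range wins
    return "NG"                    # assumption for the sketch: no match falls back to NG

rules = [((0, 50), "GRADE_A"),     # added first
         ((40, 100), "GRADE_B")]   # overlaps 40-50, but was added later
print(combination_judge(45, rules))   # GRADE_A, because its range was set first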
String Compare
The Character Comparison tool can check whether the string of the subscribed module meets the configured matching condition and output the comparison result according to the judge method.
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the String Compare tool.
Steps
1. Select the Subscription Condition in Data Source to define the data source to be compared.
2. In the text box of Sub String Index String, enter the No. of the string(s) you subscribe to.
3. Set the matching conditions.
– When selecting Subscribe as the contrast method, you should set the data source to be compared in Subscription Condition.
– When selecting Custom as the contrast method, you should set the relative parameters as shown below:
● Length Enable: When it is enabled, you can set the Length Range to define the string length condition that the subscribed data source should meet.
● All Digits: When it is enabled, the data source should be the combination of digits.
● All Lowercase Letters: When it is enabled, the data source should be the combination of lowercase letters.
● All Uppercase Letters: When it is enabled, the data source should be the combination of uppercase letters.
● Special Character Enable: When it is enabled, you can enter the special characters in Special Character. The data source should contain one or more of the configured special characters.
i ● When multiple parameters are enabled, the data source should meet all the enabled conditions. For example, when All Digits, All Uppercase Letters, and All Lowercase Letters are enabled, the data source should be the combination of digits, uppercase letters, and lowercase letters.
– When selecting Date_Time as the contrast method, you should set matching conditions based on the current system time of the camera. The relative parameters are shown below:
● Truncate String: When it is enabled, you can set the Truncation Range to define the positions in the string of the characters to be compared.
● Date Sort Type: Set the date and time format, such as Year-Month-Day-Hour-Minute-Second, Year-Month-Day-Hour-Minute, or Year-Month-Day-Hour.
● Date Order: Set the date and time sorting type, such as normal order and reversed order.
● Special Character Filter: Set the characters that will not be compared.
● Time Offset: Set the offset of the data source from the system time. When subscribed data - system time = time offset, the data source meets the requirement (see the sketch after this list).
– When selecting Fixed String as the contrast method, the Software will compare the strings you subscribed to with the fixed strings. If they are the same, the Software outputs OK, otherwise NG.
– When selecting Regular Expression as the contrast method, you can customize, import, and export the regular expression rules. If the data you subscribed to meets the rules you set, the Software outputs OK, otherwise NG.
Click Edit → Add to open the Regular Expression Filter Rules window and set the parameters for the rules.
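As a concrete example of the Time Offset rule in the Date_Time method, the check reduces to subscribed data - system time = time offset. The Python sketch below illustrates that comparison for a printed date string; the date format and all names are assumptions made for the example.

# Illustration of the Date_Time / Time Offset check: the subscribed string is
# parsed as a date, and it matches when (subscribed time - system time) equals
# the configured offset. The format string and names are assumptions.
from datetime import datetime, timedelta

def time_offset_match(subscribed: str, system_time: datetime,
                      offset: timedelta, fmt: str = "%Y-%m-%d %H:%M") -> str:
    parsed = datetime.strptime(subscribed, fmt)
    return "OK" if parsed - system_time == offset else "NG"

now = datetime(2024, 5, 1, 8, 0)
# A printed best-before date exactly 30 days ahead of the current system time:
print(time_offset_match("2024-05-31 08:00", now, timedelta(days=30)))  # OK
print(time_offset_match("2024-05-30 08:00", now, timedelta(days=30)))  # NG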
Calculator
The Calculator supports customizing variable names and functions.
Before You Start
The camera parameters and base image are configured. The Calculator is added.
Steps
1. Click to add a variable.
4. Select the data type for the variable behind the function.
i If you select float, you need to set the decimal digits which range from 1 to 6.
Defect detection tools include the Exception Detection tool, which can detect exceptions after training.
Exception Detection
The Exception Detection tool tests whether there are defects in images after learning from the imported OK models. It is mainly used for scenarios where there are no NG images or the number of NG images is small.
Before You Start
The camera parameters and base image are configured. Fixture should be enabled and the template area should be drawn. The
exception detection tool should be added.
Steps
1. Draw a Detection Area. By default, the whole image is the detection area. You can click to draw a rectangle detection area.
2. Optional: Enable or disable Independent Fixture as needed. When enabled, target offset can be adjusted according to the
parameter settings of the chosen fixture configuration.
i ● This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
● Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
3. In the Model area, select a registration method.
– Click Import to import the models to the device.
– Click Manually Register to manually register the image.
1. No more than 20 models can be imported.
● : import real-time captured images.
● : import images stored in the camera. You can set the searching conditions to search images in the camera. The Software
supports importing multiple images at a time.
i See Acquired Image Management for details about searching images stored in cameras.
● : import images from the PC. The image format should be .jpg or .bmp.
2. Select OK or NG to classify the imported images.
3. Click OK to finish importing images.
4. Optional: View or delete the imported images in the list below.
● Click an image to preview it.
● Click on the left to clear all the imported images.
● Put the mouse cursor on the upper-right corner of the image, and click to delete this image.
● Select All, OK, or NG in the drop-down list to filter images.
5. In the classification list, click Switch to switch the classification type of the current image.
6. Go back to the parameter configuration page and view the registered information.
i ● If you enable this, the imported images and models will be kept after training.
● If not, only the models will be kept after training.
5. Select Size Mode. Auto is recommended.
i If you select Manual Select, you will be required to set the Size. Size indicates the size of the smallest unit to be
detected. The smaller the size, the higher the sensitivity; the bigger the size, the lower the sensitivity.
6. Click the image of the imported OK or NG models to view details of the images.
7. Optional: Adjust Down Sample Rate to change the image size. By default, it is 100.
8. Click Train to start learning from the imported models. The learning results will be displayed after training is finished.
i
● Make sure at least 1 valid OK model is imported. Otherwise, the training will fail.
● You can click in the upper-right corner to train again if needed.
9. Select OK or NG as the basis for judging running results.
10. Optional: Set Score Threshold. When the detected score is within this range, the detection result will be OK, or it will be NG.
11. Click Test Running to test the configured parameters based on the real-time acquired images.
Location tools include the Match Calibration tool and Match Location tool. This module implements the detection via locating.
Match Calibration
This tool is used for setting the transformation relation between the camera coordinate system and the world coordinate system. Using multi-point calibration, a calibration file can be generated. At least 4 points are required for the calibration.
Before You Start
The camera parameters and base image are configured. The Match Calibration tool is added.
Steps
1. Select a method for acquiring calibration points in the Get Calibration Point field.
2. Set the Translation Number, i.e., the number of translations for acquiring calibration points. Only translation in the X/Y direction is supported. The minimum value is 4. Generally, it is set to 9.
3. If the block and the image do not share the same axis, set the Rotation Number.
4. If you select Trigger Acquisition in the Get Calibration Point field, you should set the template area and configure the following parameters for a better recognition quality.
Camera Mode: 3 options are supported.
● Upper Camera Position: The camera position does not change, and it is above the detected object.
● Lower Camera Position: The camera position does not change, and it is below the detected object.
● Dynamic Camera: The camera moves with the mechanical arm.
DOF: 3 options are supported.
● Scale, Rotation, Aspect Ratio, Tilt, Translation, and Transmission.
● Scale, Rotation, Aspect Ratio, Tilt, and Translation.
● Scale, Rotation, and Translation.
Weighting Function: We recommend using the default values.
● If you select Huber or Tukey as the weighting function, you need to set the weighting coefficient.
● If you select Ransac as the weighting function, you need to set the distance threshold and sampling rate. The distance threshold refers to the distance threshold for discarding mistaken points. The lower the distance threshold, the stricter the selection of points.
Parameter Description
Reference Point X/Y The physical coordinates representing the origin. You can set it as (0,0).
Offset Point X/Y The offset of each movement to X or Y direction. It can be positive or negative.
Movement Priority The direction of movement in priority.
Communation Number The movement times before the mechanical arm changes moving direction.
Reference Angle The original angle before rotation.
Angle Offset The angle of each rotation.
Calibration Origin Generally, it should be set as 4.
7. Get calibration points. The number of calibration points is subject to the times of rotations and offsets.
● If you select Manual Input at the Get Calibration Point field, click Edit below to edit the calibration points or import the data from the PC.
● If you select Trigger Acquisition at the Get Calibration Point field, click Test Running or Execute to get the calibration points.
○ If you click Test Running, the camera will acquire images continuously and start calibration; if you click Execute, the camera will acquire one image and start calibration.
i ○ After calibration, you can click Edit to view the data, edit the calibration points, or import calibration points from the PC.
○ You can clear the current calibration points and reset them, or click Export to save the calibration points to the PC.
8. Click Test Running or Execute to generate the calibration file.
9. Click Export on the right of Calibration Parameters to save the generated calibration file to the PC.
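Conceptually, the calibration solves for a transform that maps image (pixel) coordinates to world coordinates from the collected point pairs, which is why at least 4 pairs are required and more pairs give a more robust fit. The NumPy sketch below fits a simple affine transform by least squares as an illustration only; it is not the Software's algorithm, and the affine model (rather than the full DOF options above) is an assumption made for the example.

# Illustration: least-squares fit of an affine image->world transform from
# calibration point pairs. A simplified stand-in for the calibration step,
# not the Software's implementation; an affine model is assumed here.
import numpy as np

def fit_affine(image_pts, world_pts):
    """image_pts, world_pts: (N, 2) arrays of matched points, N >= 4."""
    img = np.asarray(image_pts, dtype=float)
    wld = np.asarray(world_pts, dtype=float)
    ones = np.ones((img.shape[0], 1))
    A = np.hstack([img, ones])                 # [x, y, 1] per point
    # Solve A @ M = wld for the 3x2 matrix M in a least-squares sense.
    M, *_ = np.linalg.lstsq(A, wld, rcond=None)
    return M                                   # world = [x, y, 1] @ M

image_pts = [(100, 100), (400, 100), (400, 300), (100, 300)]
world_pts = [(0.0, 0.0), (30.0, 0.0), (30.0, 20.0), (0.0, 20.0)]
M = fit_affine(image_pts, world_pts)
print(np.array([250, 200, 1]) @ M)   # roughly the world point (15.0, 10.0)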
Match Location
The Match Location tool helps get the exact position of the object in the image coordinate system, and the position in the image coordinate system corresponds to the position in the physical coordinate system.
Before You Start
The camera parameters and base image are configured. The Match Locate tool is added.
Steps
1. Click next to Run Point X, Run Point Y, and Run Point Angle and select a parameter to subscribe to for each of them respectively.
2. If you need to transform the image coordinate of the object into the physical coordinate, switch Calibration Transformation Enable to on and set the related parameters.
1) Click Import on the right of Calibration List to import the calibration file to the Software.
i You can generate the calibration file with the Match Calibration tool. See Match Calibration.
2) If you need to calculate the offset between the physical points and the reference points, switch Contraposition Enable to on and set the related parameters.
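Applying the imported calibration then amounts to mapping the subscribed run point from image coordinates into physical coordinates and, with Contraposition enabled, taking its offset from a reference point. The short Python sketch below continues the calibration sketch above; all names and the affine model are assumptions for the example.

# Continuation of the calibration sketch: map a located image point to the
# physical coordinate, then compute its offset from a reference point for a
# contraposition-style comparison. Names are assumptions for the example.
import numpy as np

def image_to_world(run_point_xy, M):
    x, y = run_point_xy
    return np.array([x, y, 1.0]) @ M           # M from the fit_affine() sketch

def contraposition_offset(run_point_xy, reference_world_xy, M):
    world = image_to_world(run_point_xy, M)
    return world - np.asarray(reference_world_xy, dtype=float)

# With M from the previous example, an object located at pixel (250, 200)
# lies 5 world units right of and 2 above a reference point at (10, 8):
# print(contraposition_offset((250, 200), (10.0, 8.0), M))  # [5. 2.]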
Fixture
The fixture tool is used to perform position fixture based on the reference created by subscribing to the output result information (including
the x-coordinate, y-coordinate, and angle) of each module.
Steps
1. Click next to Basic Point X, Basic Point Y, and Basic Point Angle and select a module output result to subscribe to for each of them
respectively.
2. Click Create to output result information based on the information subscribed above.
The output result information can be subscribed to by other modules.
3. Click Test Running to test the tool.
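The fixture operation boils down to measuring how the subscribed point has moved and rotated relative to the reference pose recorded with Create, so that downstream regions can follow that offset. The Python sketch below only illustrates this idea; the function names and the way the ROI is shifted are assumptions made for the example.

# Illustration of the position-fixture idea: compute the offset of the current
# subscribed pose (x, y, angle) from the reference pose, then shift a
# downstream ROI by that offset. All names are invented for the example.
import math

def pose_offset(reference, current):
    """reference/current: (x, y, angle_deg) from the subscribed module."""
    dx = current[0] - reference[0]
    dy = current[1] - reference[1]
    dtheta = current[2] - reference[2]
    return dx, dy, dtheta

def shift_roi(roi_center, reference, current):
    """Move an ROI center with the target; rotation of the ROI box is omitted here."""
    dx, dy, dtheta = pose_offset(reference, current)
    # Rotate the ROI center about the reference point, then translate.
    rad = math.radians(dtheta)
    rx, ry = roi_center[0] - reference[0], roi_center[1] - reference[1]
    x = reference[0] + rx * math.cos(rad) - ry * math.sin(rad) + dx
    y = reference[1] + rx * math.sin(rad) + ry * math.cos(rad) + dy
    return x, y

print(shift_roi((120, 80), reference=(100, 100, 0.0), current=(110, 95, 10.0)))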
Deep learning tools include the DL Classification tool and DL Object Detection tool. They can implement the visual detection via deep
learning algorithms.
DL Object Detection
The DL Object Detection tool performs image segmentation based on the geometric and statistical features of targets, combining target segmentation and recognition. Its accuracy and real-time performance are important capabilities of the whole system.
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the DL Object Detection tool.
Steps
1. Set the Detection Area according to your actual needs. It is set to analyze the whole base image by default.
You can click to draw a rectangle detection area.
2. Optional: If you need to shield areas in the detection area, click Edit on the right of Shielded Area and draw the areas.
3. Optional: You can enable or disable Independent Fixture as needed. When enabled, the target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
4. Click Import in Model List to import the trained model file.
● The name of the model files to be imported can only contain letters, digits, and underscores.
i ● The model is trained by the deep learning training tool, the target platform is SM-Datum, and the training type is target detection.
● You can refer to the user manual of the relevant tools for detailed training operations.
5. Set the Judge Basis and relative parameters to define the result judgment rule.
Judge by Quantity
You need to set the quantity range of detected targets. The result will be OK if the number of detected targets is within the
configured quantity range, or it will be NG.
Min. Score
You need to set the minimum score for the detection result. The result will be OK if the detection result score reaches or exceeds the
configured min score, or it will be NG.
Judge Type
You need to set the type. The result will be OK if the detected target type is the same as the configured type, or it will be NG.
i The configured type name should match the types added during model training.
6. Optional: Set extra parameters if the result is not what you expected. The parameters are shown below.
● Max. Number to Find: The maximum quantity of objects to be searched.
● Min. Score: The minimum similarity between the model and the target, i.e., the similarity threshold. The higher the value, the higher the confidence. Only objects whose similarity reaches or exceeds the configured threshold can be searched.
● Max. Overlap Rate: The maximum allowed overlap proportion between two detected targets.
● Sort Type: The order in which results are displayed.
○ X Coordinate Ascending: Sort the displayed results according to the X-coordinate value in ascending order.
○ Y Coordinate Ascending: Sort the displayed results according to the Y-coordinate value in ascending order.
○ Confidence Descending: Sort the displayed results according to the target score in descending order.
● Angle Enable: Defines the relative angle range tolerance for target objects. To search for an object with rotation change, set it accordingly; the angle range is from -180° to 180°.
● Width Enable: If it is enabled, target objects whose width is within the configured range can be detected.
● Height Enable: If it is enabled, target objects whose height is within the configured range can be detected.
● Outside Filter Enable: If it is enabled, the outside filter score can be set. When a part of the target is not in the detection area and the proportion of the target that lies inside the detection area is less than the filter score, the target cannot be searched.
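The Max. Overlap Rate acts much like the overlap threshold in a standard non-maximum-suppression step: among overlapping detections, lower-scoring ones are dropped when their overlap with an already kept detection exceeds the configured rate. The Python sketch below shows that idea with axis-aligned boxes; the box format, IoU measure, and names are assumptions for the example, not the tool's algorithm.

# Illustration of overlap-rate filtering (greedy non-maximum suppression) and
# of confidence-descending sorting. Boxes are (x1, y1, x2, y2, score); this is
# an illustration with assumed conventions, not the tool's implementation.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def filter_by_overlap(boxes, max_overlap=0.5, max_number=10):
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):  # Confidence Descending
        if all(iou(box, k) <= max_overlap for k in kept):
            kept.append(box)
        if len(kept) == max_number:      # Max. Number to Find
            break
    return kept

boxes = [(0, 0, 100, 100, 0.9), (10, 10, 110, 110, 0.8), (200, 200, 300, 300, 0.7)]
print(filter_by_overlap(boxes, max_overlap=0.3))
# keeps the 0.9 box and the non-overlapping 0.7 box; the 0.8 box overlaps too much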
DL Classification
The DL Classification tool can distinguish different types of targets according to the different features reflected in the image. It can
classify the objects in the image or area of an image via deep learning algorithms, and has a wide range of applications in object
recognition and sorting.
Before You Start
Make sure that you have configured the camera parameters and the base image, and added the DL Classification tool.
Steps
1. Optional: You can enable or disable Independent Fixture as needed. When enabled, the target offset can be adjusted according to the parameter settings of the chosen fixture configuration. This function is enabled by default and automatically subscribes to the fixture configuration of the base image.
Make sure you have enabled Fixture when configuring the base image. Refer to Configure Base Image for details.
2. Click Import in Model List to import the model file.
● The name of the model files to be imported can only contain letters, digits, and underscores.
i ● The model is trained by the deep learning training tool, the target platform is SM-Datum, and the training type is image classification.
● You can refer to the user manual of the relevant tools for detailed training operations.
3. Set the Judge Basis and relative parameters to define the result judgment rule.
Min. Score
You need to set the minimum score for the detection result. The result will be OK if the detection result score reaches or exceeds
the configured min score, or it will be NG.
Judge Type
You need to set the type name. The result will be OK if the detected target type is the same as the configured type name, or it will be NG.
i The configured type name should match the tags added during model training.
ROI Types
The ROI types include template area, detection area, and shielded area drawn inside the template area and detection area.
For different ROI types, the supported number of ROIs and shapes vary.
ROI Type Shape Numbers Example
i ● When drawing multiple ROIs, digit icons indicating the number of ROIs will be displayed in the lower-right corner of the drawing area.
● Some tools do not support drawing an ROI. The supported shapes are shown above.
i The output results vary with the camera series and firmware version.
Question:
Why can't the Software enumerate my camera?
Possible Cause:
1. The camera is not powered on.
2. The network connection is abnormal.
Solution:
1. Check the PWR light at the top of the camera. If the camera is powered on, the light will remain green.
2. Check the LNK light at the top of the camera. If the network connection is normal, the light is green and keeps flashing. You should also make sure that your PC's network port is in the same network segment as the camera.
Question:
Why is the image completely black or too dark when previewing?
Possible Cause:
1. The light is not strong enough.
2. The values of exposure, gain, and other settings are too low.
Solution:
1. Enhance the brightness or use a stronger light.
2. Increase the values of exposure and gain.
Question:
Why is the image stuck / in low frame / separated in the live view panel?
Possible Cause:
The network transmission speed is below 100Mbps.
Solution:
Make sure that the network transmission speed is 100Mbps or above.
Question:
Why does the image fail to be displayed in the preview window?
Possible Cause:
The trigger mode is enabled, but no trigger signal is sent.
Solution:
Send a trigger signal to the camera or disable the trigger mode.
No.8 Xiyuan 9th Road, West Lake District Hangzhou Zhejiang 310030 China
Tel: 86-571-86888309
www.visiondatum.com
For Research Use Only ©2023 Hangzhou Vision Datum Technology Co., Ltd.
All rights reserved. All trademarks are the property of Hangzhou Vision Datum Technology Co., Ltd.