Speos 2023 R2 User's Guide
Contents
1: Welcome!.......................................................................................................................................14
2: Ansys Product Improvement Program..............................................................................................15
3: Speos Software Overview................................................................................................................19
3.1. Graphical User Interface..............................................................................................................................19
3.2. Launching Speos Using Command Line......................................................................................................21
3.3. Setting Speos Preferences...........................................................................................................................22
3.4. Geometry Modeler........................................................................................................................................25
3.4.1. Geometry Modeler Overview........................................................................................................25
3.4.2. Activating the Parasolid or ACIS Modeler.....................................................................................26
3.4.3. Parasolid Must-Know....................................................................................................................26
3.5. Using the Feature Contextual Menu............................................................................................................31
3.6. Useful Commands and Design Tools...........................................................................................................32
3.7. Extensions and Units....................................................................................................................................38
3.8. Beta Features................................................................................................................................................40
3.9. Speos Files Analysis......................................................................................................................................41
3.9.1. Speos Files Analysis Overview......................................................................................................41
3.9.2. Using the Speos File Analysis........................................................................................................42
3.10. Presets.........................................................................................................................................................44
3.10.1. Presets Overview.........................................................................................................................44
3.10.2. Customizing the Preset Repository............................................................................................45
3.10.3. Creating a Preset.........................................................................................................................46
3.10.4. Exporting a Speos Object to a Preset.........................................................................................47
3.10.5. Setting a Preset as Default..........................................................................................................47
3.10.6. Creating a Speos Object from a Default Preset..........................................................................48
3.10.7. Applying a Preset to an Existing Speos Object...........................................................................48
3.10.8. Accessing the Quick Preset Menu...............................................................................................49
4: Imports..........................................................................................................................................50
4.1. Important Information on Import Format..................................................................................................50
4.2. STL Files Import............................................................................................................................................50
4.3. Geometry Update Tool.................................................................................................................................51
4.3.1. Geometry Update Overview..........................................................................................................51
4.3.2. Importing an External CAD Part....................................................................................................52
4.3.3. Updating an External CAD Part.....................................................................................................52
4.4. Lightweight/Heavyweight Import...............................................................................................................53
4.4.1. Lightweight/Heavyweight Import Overview................................................................................54
4.4.2. Deactivating the Lightweight Import...........................................................................................55
4.4.2.1. Configuring the SpaceClaim Reader.............................................................................55
4.4.2.2. Configuring the Workbench Reader..............................................................................56
4.4.3. Switching a Body from Lightweight to Heavyweight..................................................................57
4.5. Incorrect Imports - Solutions.......................................................................................................................58
5: Materials........................................................................................................................................60
5.1. Optical Properties........................................................................................................................................60
5.1.1. Optical Properties Overview.........................................................................................................60
5.1.2. Non-Homogeneous Material.........................................................................................................61
5.1.2.1. Understanding Non-Homogeneous Material................................................................61
5.1.2.2. Graded Material File.......................................................................................................64
5.1.2.3. List Of Methods...............................................................................................................66
5.1.3. Surface State Plugin......................................................................................................................69
5.1.3.1. Surface State Plugin Examples......................................................................................69
5.1.3.2. Creating and Testing a Surface State Plugin in Windows............................................69
5.1.3.3. Creating and Testing a Surface State Plugin in Linux...................................................70
5.1.4. Optical Properties Creation..........................................................................................................71
5.1.4.1. Creating Optical Properties...........................................................................................71
5.1.4.2. Creating Face Optical Properties...................................................................................73
5.1.5. Optical Properties Management...................................................................................................75
5.1.5.1. Creating a Material Library.............................................................................................75
5.1.5.2. Opening a Material Library............................................................................................76
5.1.5.3. Applying a Material from a Material Library..................................................................77
5.1.5.4. Applying a Material from the Tree.................................................................................78
5.1.5.5. Replacing a Material on Geometries.............................................................................79
5.1.5.6. Converting Face Optical Properties..............................................................................79
5.1.6. Locate Material Tool......................................................................................................................80
5.1.6.1. Understanding the Locate Material Tool......................................................................80
5.1.6.2. Using the Locate Material Tool......................................................................................82
5.1.6.3. Applying a Visualization Color to a Material.................................................................83
5.1.6.4. Defining the Visualization Options................................................................................84
5.2. Texture Mapping...........................................................................................................................................85
5.2.1. Texture Mapping Overview...........................................................................................................85
5.2.2. Understanding UV Mapping..........................................................................................................87
5.2.3. Understanding Texture Mapping..................................................................................................88
5.2.4. Texture Mapping Process Overview.............................................................................................89
5.2.5. Texture Mapping Preview.............................................................................................................91
5.2.6. Creating a Texture Mapping..........................................................................................................94
5.2.6.1. Creating the UV Mapping...............................................................................................95
5.2.6.2. Applying Textures...........................................................................................................99
5.2.7. Activating the Texture Mapping Preview...................................................................................102
5.2.8. Texture Normalization................................................................................................................104
5.2.8.1. Understanding Texture Normalization.......................................................................104
5.2.8.2. Setting the Texture Normalization..............................................................................105
5.3. Polarization.................................................................................................................................................105
5.3.1. Understanding Polarization........................................................................................................105
5.3.2. Creating a Polarizer.....................................................................................................................106
6: Local Meshing...............................................................................................................................108
6.1. Understanding Meshing Properties...........................................................................................................108
12.12.2.2. Modifying a Post Processed Optical Part Design Geometry with Post Processing............................601
13: Head Up Display..........................................................................................................................603
13.1. Head Up Display Overview.......................................................................................................................603
13.2. Design........................................................................................................................................................604
13.2.1. HUD System Overview...............................................................................................................604
13.2.2. Understanding the HUD Optical Design Parameters...............................................................605
13.2.2.1. General........................................................................................................................605
13.2.2.2. Eyebox.........................................................................................................................605
13.2.2.3. Target Image...............................................................................................................606
13.2.2.4. Projector.....................................................................................................................607
13.2.2.5. Manufacturing............................................................................................................608
13.2.2.6. Advanced Parameters................................................................................................609
13.2.3. Defining a HUD System with HUD Optical Design....................................................................612
13.2.4. CNC Export (Surface Export File)..............................................................................................616
13.3. Analysis.....................................................................................................................................................616
13.3.1. HUD Optical Analysis.................................................................................................................616
13.3.1.1. Setting the HUD Optical Analysis...............................................................................616
13.3.1.2. Exporting a HUD Optical Analysis Simulation...........................................................630
13.3.1.3. Speos Plugin...............................................................................................................630
13.3.1.4. Speos Plugin Examples..............................................................................................667
13.3.2. Results........................................................................................................................................673
13.3.2.1. Eyebox.........................................................................................................................673
13.3.2.2. Target Image...............................................................................................................674
13.3.2.3. Optical Axis.................................................................................................................674
13.3.2.4. Best Focus Virtual Image............................................................................................675
13.3.2.5. Tangential Virtual Image............................................................................................677
13.3.2.6. Sagittal Virtual Image.................................................................................................677
13.3.2.7. Best Focus Spot..........................................................................................................678
13.3.2.8. Tangential Spot..........................................................................................................678
13.3.2.9. Sagittal Spot...............................................................................................................679
13.3.2.10. Astigmatism..............................................................................................................679
13.3.2.11. Static Distortion........................................................................................................679
13.3.2.12. Dynamic Distortion..................................................................................................680
13.3.2.13. Optical Volume.........................................................................................................680
13.3.2.14. Pixel Image...............................................................................................................680
13.3.2.15. Ghost Image Optical Axis.........................................................................................681
13.3.2.16. Ghost Image..............................................................................................................681
13.3.2.17. PGU...........................................................................................................................681
13.3.2.18. Warping.....................................................................................................................682
13.3.2.19. Visualizing a Speos360 Result .................................................................................682
13.3.3. HOA Tests APIs...........................................................................................................................683
13.3.3.1. Test APIs......................................................................................................................683
13.3.3.2. Image Warping APIs....................................................................................................708
18: Troubleshooting..........................................................................................................................819
18.1. Known Issues............................................................................................................................................819
18.1.1. Materials....................................................................................................................................819
18.1.2. Sources......................................................................................................................................819
18.1.3. Sensors.......................................................................................................................................819
18.1.4. Components..............................................................................................................................820
18.1.5. Simulation.................................................................................................................................820
18.1.6. Optical Part Design....................................................................................................................821
18.1.7. Head-Up Display........................................................................................................................822
18.1.8. Results........................................................................................................................................822
18.1.9. Automation................................................................................................................................823
18.1.10. Miscellaneous..........................................................................................................................823
18.2. Error Messages..........................................................................................................................................825
18.2.1. Not enough Speos HPC Licenses..............................................................................................825
18.2.2. Proportional to Body size STEP and SAG parameters are not respected...............................826
18.2.3. Surface Extrapolated.................................................................................................................826
18.2.4. Invalid Support: Offset Support Is Not Possible......................................................................827
19.1.1. Copyright and Trademark Information...........................................................828
1: Welcome!
This document provides you with conceptual information and detailed procedures to get the best out of Speos.
Speos lets you design and optimize lighting and optical systems. Validate the ergonomics of your product and take a virtual picture
of it to review designs collaboratively.
Refer to the Release Note to see what's new in the latest version.
Main Features:
• Optical Properties Optical properties define how light rays interact with geometries.
• Components Components can be used for data exchange between suppliers and customers. They are compatible with
multi-CAD platforms where Ansys software is integrated.
• Sources Sources are light sources propagating rays in an optical system.
• Sensors Sensors integrate rays coming from the source to analyze the optical result in the optical system.
• Simulations Simulations give life to the optical system to generate the results, by propagating rays between sources
and sensors.
• Optical Part Design Optical Part Design provides geometrical modeling capabilities dedicated to optical and lighting
systems.
• Head Up Display Head Up Display is a system that allows you to present data on a transparent display, usually a
windshield, without having to look away from the initial viewpoint.
• Optimization Optimization helps find the best solution for your optical system according to an expected result and
parameters to be varied.
• Automation Automation allows you to control and automate actions in Speos with routines created thanks to the
provided APIs.
2: Ansys Product Improvement Program
This product is covered by the Ansys Product Improvement Program, which enables Ansys, Inc., to collect and analyze
anonymous usage data reported by our software without affecting your work or product performance. Analyzing product
usage data helps us to understand customer usage trends and patterns, interests, and quality or performance issues. The
data enable us to develop or enhance product features that better address your needs.
How to Participate
The program is voluntary. To participate, select Yes when the Product Improvement Program dialog appears. Only then
will collection of data for this product begin.
Data We Collect
The data we collect under the Ansys Product Improvement Program are limited. The types and amounts of collected data
vary from product to product. Typically, the data fall into the categories listed here:
Hardware: Information about the hardware on which the product is running, such as the:
• brand and type of CPU
• number of processors available
• amount of memory available
• brand and type of graphics card
System: Configuration information about the system the product is running on, such as the:
• operating system and version
• country code
• time zone
• language used
• values of environment variables used by the product
Session: Characteristics of the session, such as the:
• interactive or batch setting
• time duration
• total CPU time used
• product license and license settings being used
• product version and build identifiers
• command line options used
• number of processors used
• amount of memory used
• errors and warnings issued
Session Actions: Counts of certain user actions during a session, such as the number of:
• project saves
• restarts
• meshing, solving, postprocessing, etc., actions
• times the Help system is used
• times wizards are used
• toolbar selections
Model: Statistics of the model used in the simulation, such as the:
• number and types of entities used, such as nodes, elements, cells, surfaces, primitives, etc.
• number of material types, loading types, boundary conditions, species, etc.
• number and types of coordinate systems used
• system of units used
• dimensionality (1-D, 2-D, 3-D)
Analysis: Characteristics of the analysis, such as the:
• physics types used
• linear and nonlinear behaviors
• time and frequency domains (static, steady-state, transient, modal, harmonic, etc.)
• analysis options used
Solution: Characteristics of the solution performed, including:
• the choice of solvers and solver options
• the solution controls used, such as convergence criteria, precision settings, and tuning options
• solver statistics such as the number of equations, number of load steps, number of design points, etc.
Specialty: Special options or features used, such as:
• user-provided plug-ins and routines
• coupling of analyses with other Ansys products
3: Speos Software Overview
This section presents an overview of the Speos software (interface, settings, preferences, navigation and tools).
General Interface
Interface Highlights
• From the Simulation panel, you can visualize the features' state and control their visibility in the 3D view.
A feature is:
º bolded and underlined when it is being edited. The feature is then considered active.
º in error when an exclamation point appears after the feature's name.
Note: A feature appears in error if it is not defined or has not been correctly defined.
Tip: You can drag and drop Speos objects into and out of the created folders and reorganize them inside
a folder (except for Materials).
Tip: All panels of the interface can be moved. Simply drag a panel away from a dockable location to drop
it where you want to place it.
To restore the original layout at any time, go to File > Speos Options and in Appearance, click Reset
Docking Layout.
Note: Changes to the position of the Speos tree panels are not kept when you reopen Speos after opening a
SpaceClaim session without Speos.
Related tasks
Using the Feature Contextual Menu on page 31
This page lists all the operations that can be performed from the features' contextual menu.
Related information
Useful Commands and Design Tools on page 32
This page describes Speos selection behavior and provides some useful commands, shortcuts and design tools that
are frequently used during feature definition.
Command Lines
Note: All SpaceClaim arguments are supported. For the full arguments list, refer to the following page in
the SpaceClaim documentation.
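For illustration, a minimal sketch of such a command line (the installation path, version folder and manifest file name below are assumptions that depend on your installation; Speos is launched by starting SpaceClaim with the Speos add-in manifest):
"C:\Program Files\ANSYS Inc\v232\scdm\SpaceClaim.exe" /AddInManifestFile="C:\Program Files\ANSYS Inc\v232\Optical Products\SPEOS\Bin\SpeosSC.Manifest.xml"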
• Define how many decimals are used to define the length of rays.
• Define how many decimals are used to define angular values.
• Check Automatic launch at end of simulation if you want the simulations' results to be automatically opened
with their associated viewer at the end of the simulation.
• Deselect Draw results in 3D if you do not want the results to be displayed in the 3D view at the end of the
simulation.
• Check Sound when long process finishes to be warned when a feature has finished its computation.
Note: This option only concerns certain processes: Direct and Inverse simulations, 3D Textures,
Optical Surface, Optical Lens and Light Guide.
• Define the Number of threads to use for direct and inverse simulations.
• Check VR Sensor Memory Management to activate the memory management.
Note: Disabling this option can greatly improve simulation performance, but in that case, make sure
that you have enough memory on your computer to generate Speos360 files.
You must have more than 6 GB available for an immersive sensor and more than X GB available for an
observer sensor (X corresponding to the number of positions defined in the sensor).
• Check Automatic "Save All" before running a simulation if you want to trigger a backup of the project before
the simulation is launched.
Note: This option does not apply to interactive simulations because they are automatically updated.
6. In Colorimetry, select the default Colorimetric Standard to be used for all simulations.
• CIE 1931: 2 degrees CIE Standard Colorimetric Observer Data. Only one fovea, which covers about a 2-degree
angle of vision, was used during the experiments leading to the 1931 standard observer.
• CIE 1964: 10 degrees CIE Standard Colorimetric Observer Data. The 1964 standard observer was based on
color-matching experiments using a 10-degree area on the retina.
Note: For CIE 1964, the luminous level is not correct, whatever the unit. Only use CIE 1964 for
color display and color analysis with colorimetric and spectral maps.
7. In Feature Edition, select the intensity result viewing direction to use for the sensors:
• Select From source looking at sensor to position the observer point from where light is emitted.
• Select From sensor looking at source to position the observer in the opposite of light direction.
8. In File Management, check Automatically copy selected files under document's folder to embed any input
file selected outside the SPEOS Input Files directory in the current project.
This option ensures the project's portability as it copies any imported file back into the SPEOS Input Files
directory.
If you do not check the option to automatically copy the files, you can still click Copy under document to manually
perform the copy to the Speos Input Files directory of the current project.
Note: Both options do not apply to *.SPEOSLightBox files, 3D Textures, and CAD parts imported using
the Geometry Update tool.
9. In Data import/export, deactivate Import/Export geometries without interoperator healing if you do not
want to apply, during import or export, the healing operations that you can find in the Repair tab.
10. In Preset, if you want to customize the preset repository, check Use custom folder path and browse to one.
For more information, refer to Customizing the Preset Repository on page 45.
11. In Modeler Options, select the modeler used to generate the geometries between ACIS and Parasolid.
For more information, refer to Geometry Modeler on page 25.
12. In Modeler Options, deactivate Lightweight Import if you want to import CAD files (IGES, STEP, CATIA files) as
heavyweight and/or use the block recording tool.
For more information, refer to Deactivating the Lightweight Import on page 55.
13. In the GPU tab, you can:
• define the GPUs to use to run the simulations with the GPU Compute option.
Note: If you select multiple GPUs, simulations will consume the sum of the equivalent cores per GPU.
If you select no GPU, Speos will automatically select the most powerful available.
• Allow XMP generation from Simulation Preview to export the current live preview result as XMP or as a
picture when running the Live Preview.
Note: When Allow XMP generation from Simulation Preview is activated, the Live Preview consumes
the sum of equivalent core per GPU and uses all selected GPUs (rather than just the most powerful of
the list).
14. From the Warnings tab, check or clear the check boxes to activate or deactivate specific warnings.
Speos preferences are set.
Related tasks
Using the Feature Contextual Menu on page 31
This page lists all the operations that can be performed from the features' contextual menu.
Related information
Useful Commands and Design Tools on page 32
This page describes Speos selection behavior and provides some useful commands, shortcuts and design tools that
are frequently used during feature definition.
Note: The ACIS modeler will be removed in version 2024 R1. The conversion from a *.scdoc to a *.scdocx
will still work.
Note: Projects saved with the Parasolid modeler BETA in version 2021 R2 might not be supported in version
2022 R1 and future releases.
Note: As SpaceClaim (opened via SpaceClaim and not Speos) does not warn you about the current modeler
used, make sure that the environment variable is set according to the modeler you want to use.
Note: As the extension changes from ACIS to Parasolid, the next save of the file will prompt you to save
as.
• A *.scdocx file generated with Parasolid can be opened and converted to ACIS.
• Converting a *.scdoc file created in ACIS to Parasolid can take time. Once converted, the file opens normally,
without extra delay.
Note: If you import a *.scdoc file in Parasolid which references external *.scdoc file(s), after opening the
root *.scdoc in Parasolid make sure to internalize all the referenced documents into the root assembly and
then save the root assembly as *.scdocx file. Then, you can use the saved single *.scdocx file (which will be
a translated and combined version of all the input scdocs).
4. If you selected Parasolid, you can activate the Lightweight import to import and load a lighter level of detail
of data than a full load, reducing the conversion time for CAD files (CATIA, IGES, STEP files) into Parasolid.
For more information, refer to Lightweight Import.
5. Click OK to validate.
6. Restart Speos to apply the modeler and options.
Speos now uses Parasolid or ACIS as modeler, and projects are saved as *.scdocx for Parasolid and as *.scdoc for
ACIS. "Parasolid" or "ACIS" is now indicated in the Speos title.
ACIS/Parasolid Conversion
For more information on ACIS/Parasolid Conversion rules, refer to Geometry Modeler Overview.
Conversion Time
Converting a *.scdoc file created in ACIS to Parasolid can take time. Once converted, the file opens normally,
without extra delay.
Workbench Project
As a project cannot mix ACIS data and Parasolid data, a Workbench project created in ACIS should be recreated after
the *.scdoc file conversion to Parasolid.
1. Convert the *.scdoc file from ACIS to Parasolid.
2. Save the converted file.
The file is saved in the *.scdocx Parasolid file format.
3. From the saved *.scdocx file, recreate the Workbench project.
Note: If you do not see the *.scdocx file in the Windows Explorer, type *.* in the File name field to
display all files.
Important: Make sure to import your external CAD parts into a Parasolid project directly. Avoid importing
them into an ACIS project and then converting the project from ACIS to Parasolid, as this may lead to geometry issues.
Note: Bodies from CATIA files imported into Speos and saved as *.dsco in version 2021 R2 cannot be meshed
during a Speos simulation. (*.dsco files are not supported from version 2022 R1.)
Tip: Instead of importing a CATIA geometry made of disjoint bodies, split the disjoint bodies in different
geometries in CATIA, and import each geometry individually.
Meshing in Parasolid
In case of a thin body, make sure to apply a fixed Meshing sag mode and a Meshing sag value smaller than the
thickness of the body. Otherwise you may generate incorrect results.
For more information on Meshing, refer to Meshing Properties.
If you set a length unit smaller than Millimeters via the SpaceClaim Options, only the SpaceClaim environment will be
set to this scale, not the Speos one.
• The Small and Large Length scales are not supported by Speos.
Only use the Standard Length scale.
Note: Any source, sensor or simulation can be copied. When you create a copy of a feature, the copy
inherits its definition.
To delete a feature, make sure this feature has no dependencies with other features (a source selected in
a simulation for example), otherwise the feature will not be deleted from the Tree.
Note: Advanced simulation settings are only available from the simulation feature contextual menu.
Related information
Graphical User Interface on page 19
This page gives a general overview of Speos interface and helps to better understand the working environment.
Useful Commands and Design Tools on page 32
This page describes Speos selection behavior and provides some useful commands, shortcuts and design tools that
are frequently used during feature definition.
Computing Simulations on page 341
This page describes the different ways to run a simulation in Speos.
3D view Icons
Sometimes the icons in the 3D view may disappear due to an accidental manipulation.
To display them again:
1. Select File > Speos Options.
2. Select the Popular tab.
3. In the Control Options section, set Tool guide position to another value than Not shown.
Each selection must be validated for the objects to be correctly imported in the Speos objects' list.
• To check the status of primary and secondary selection, mouse-over the bottom tool bar as follows:
As with the Primary Selection, you can use the Secondary Selection with the Speos commands to fill selections, for
example with the Validation command: hold the ALT key and click Validate.
Disable Preselection
The Disable preselection command removes the preselection (temporary highlight) of 3D view elements in order
to improve interface responsiveness.
To disable the preselection, right-click anywhere in the 3D view, and check Disable preselection.
Revert Selection
In case you lose your selection or want to go back to a previous selection, use the Revert Selection command
(bottom right of SpaceClaim interface) to restore the state of your previous or lost selection.
Grouping
Groups or Named Selections allow you to better organize your project and save time during element selection.
They also allow you to save memory when working with data separation by layer as groups represent one layer in
the simulation result.
The grouping function is available for any Speos object (sources, sensors, geometries, OPD face groups etc.).
Note: A group should contain a unique set of items. We recommend grouping the same type of objects to
ease group management.
Important: You cannot select a component to embed its geometries into a Named Selection. You must
select the geometries directly to add them into the Named Selection.
A named selection of the folder content is created in the Groups panel with the name of the folder.
Note: If you modify the content of the folder, the named selection group created from the folder is not
automatically updated. You have to recreate the named selection group.
After Grouping
Once created, a named selection can be:
• Selected for a Speos feature (a simulation for example) by either using the 3D view selection tools or the
contextual menu of the definition panel.
Note: Hidden geometries belonging to a group are not highlighted in the tree when selecting the group.
Design Tools
When designing the optical system, you might need to create points or axis systems to place the geometries or
features in the scene.
Points
You can use the point creation tool (available from the Design tab) to place points in the scene.
To place the point on the right plane/object, you can sketch on a plane or directly in 3D mode:
• To sketch on a plane, place your cursor on a line, edge or surface to automatically change the plane axis.
• Press D to switch to 3D mode and compute the point on a geometry.
Axis Systems
When working with Speos features, it might be useful to create and compute axis systems on specific points of
interest.
Axis systems allow you to better visualize the position and orientation of the parts of a system. They can also be used
during feature definition to select directions and origins.
Use the origin creation tool (available from the Design tab) to compute a point and its associated axis system.
Move
With the Move option (available from the Design tab), you can move and rotate sensors on any of their axes.
You can also move geometries, even when a source is defined on them.
If you want more information, see Move option from SpaceClaim documentation.
Related tasks
Using the Feature Contextual Menu on page 31
This page lists all the operations that can be performed from the features' contextual menu.
Related information
Graphical User Interface on page 19
This page gives a general overview of Speos interface and helps to better understand the working environment.
Extensions
A system includes different kinds of specific files (spectrum, ray file, material, etc.).
The primary purpose of "beta" labeling is to allow early access to specific features that are close to being finalized.
Important: When the beta option was activated in a previous session, if you open a project in a session with
the beta option deactivated, you must reactivate the beta option to access beta parameters and beta features.
The Speos Files Analysis tool opens and lists all the input references of the project.
3. According to your needs on a file, you can right-click a file reference line and:
Important: The Speos Files Analysis tool only takes action on the features used in the project. It does
not replace/erase/delete files on the FileSystem.
• Refresh
Refresh allows you to update the selected file reference line if a modification has been made to it (file
replacement in the definition, filename modification, etc.).
• Copy selected path(s)
Copy selected path(s) allows you to multi-select and copy several lines of the list and paste them into a text
editor or any other tool of your choosing.
This is particularly useful if you want to export the list of all project dependencies, for example to create a report,
perform impact management, run a script processing, etc.
• Replace file path with
Replace file path with replaces the file path (meaning the string of characters "C:\...\file.ext") in the
reference of the object. The Speos object then points to another file.
No file is replaced in the FileSystem.
• Replace folder path with
Replace folder path with replaces the folder path (meaning the string of characters "C:\...\foldername")
in the reference of the object.
Note: When performing this action, make sure that the file referenced in the object (or a file with the
same name and extension) is present in the folder that will replace the previous one. Otherwise an
error will be raised.
• Clear
Clear erases the field filled by the input reference in the Speos feature. The Speos feature no longer has an
input reference for this field.
No file is erased on the FileSystem.
Note: If the Copy input files option is activated in the Speos options, the file of the new file path will
be copied in the Speos input files folder.
4. Click Close when you are done with the Speos Files Analysis tool.
3.10. Presets
Presets allow you to create predefined sets of parameters and apply them to new or existing Speos objects.
Preset Management
The Preset panel provides you with a list of all created Presets contained in the default or custom Preset repository.
From this panel you can manage your Presets.
Presets can be organized in sub-folders in the Preset repository. The sub-folder hierarchy appears as a prefix in the
Name column of the panel.
The default Presets repository is C:\ProgramData\ANSYS\v2XX\Optical Products
Note: The Presets repository (default or custom) must only contain preset files.
Default Preset
Presets can be defined as default which means that the Preset defined as default will apply its values to every new
object of the underlying type.
Only one Preset can be set as default for a given object type. Thus, when setting a Preset as default, the previous
Default Preset is unset.
The newly created object:
• has the same values as defined in the Default Preset.
• is named after the Preset file name instead of the standard object type name, and the object name is incremented
with an index suffix.
Example: if the Preset file is named "My_Custom_Source.preset", the new object will be named
"My_Custom_Source.1".
Note: When a Preset that is set as default is renamed, the internal link between the object type and the
associated Preset file path is updated as well.
Note: Previously created presets are not moved automatically. You have to transfer them manually
to the custom presets repository.
All new presets are now created in the custom presets repository.
To create a preset:
3. In the Presets panel, right-click anywhere and click Create Preset from 'Select Speos Object'.
Tip: You can drag and drop the Speos object directly in the Presets panel to create the preset.
The newly created preset inherits the values of the selected Speos object.
Now you can set this preset as default or apply it to a Speos object in the Simulation tree.
Note: Only one Preset can be set as default for a given object type. Thus, when setting a Preset as default,
the previous Default Preset is unset.
Tip: You can change the default preset directly from the Quick Preset menu.
The Preset is set as default and appears in bold with the Default mention in the Preset panel.
Now you can create a Speos object from the default preset.
Tip: You can change the default preset directly from the Quick Preset menu.
The Speos object is created in the Simulation tree and inherits the values of the default preset.
3. In the Presets panel, right-click the preset you want to apply on the Speos object and click Apply Preset onto
'Select Speos Object'.
Tip: You can drag and drop the preset directly on the Speos object to apply the preset.
4: Imports
Speos allows you to import and update all kinds of geometries, from CAD geometries or projects to mesh (*.stl) files.
CATIA V6 Import
When importing a project from CATIA V6, only surfaces and bodies are imported. Speos does not support points,
lines and curves import.
CAUTION: CATIA V6 projects can be imported, but they are not compatible with the Geometry Update - Update
External CAD Part option.
Bypass: Export the CATIA V6 file as a CATIA V5 file. Then, you can update the external CAD part using the
Update External CAD Part option.
• Connected faceted body is the default and recommended option to use. It allows you to import the mesh as a
closed faceted body.
Note: With this option, you cannot visualize the Speos meshing as the meshing used for simulation is
directly inherited from the geometry itself.
• Solid/surface body can be used if the first import option failed to work. This option allows you to import the mesh
as a surface body. The surface body then must be converted into a closed faceted surface by using the Convert
option.
Related information
Geometry Update Tool on page 51
This page introduces the Geometry Update tool which allows you to update geometries while maintaining their
relationship to Speos features.
Related information
STL Files Import on page 50
This page describes the different options available to import mesh (*.stl) files in Speos and the specificities of each
of them. Once imported, mesh (*.stl) files can be selected for Speos simulation as any other CAD geometry.
This tool offers design and testing flexibility, as it better accommodates the standard design process.
General Workflow
1. The Design team creates geometries in a CAD software.
2. The Speos team uses the Geometry Update tool to import the geometries from the CAD software to Speos.
Files that can be imported: CATIA V5 files (*.CATPart, *.CATProduct), CREO Parametric files (*.prt, *.xpr, *.asm,
*.xas), NX files (*.prt), SolidWorks files (*.sldprt, *.sldasm).
3. The Speos team defines the optical properties thanks to the Speos Light Simulation features.
4. The Design team modifies the geometry in the CAD software.
5. The Speos team uses Geometry Update tool to update the project to work on the latest data.
When geometries are updated, the link is maintained between Speos features and the newly imported geometries.
Materials are still applied on the geometries, sources are adjusted to the faces of the new geometries etc.
• Import External CAD Part allows you to import a new external part. The part is by default imported in the
active part of the document.
• Select all Imported Parts allows you to select specific parts you want to replace with another model.
• Update External CAD Part allows you to select the newer version of a part previously imported in Speos.
Note: You cannot use the Geometry Update tool if you have imported your external CAD part using the Open
file command.
Tip: The import/replace option is also accessible by right-clicking the part you want to update in the
structure tree.
2. In the Options tab, in the Update Options section, you can define the behavior of the updates:
• Update from last known file location directly takes the last known file path used for the import/update.
• Automatically skip parts without known file paths skips the update of the parts whose file is not found.
• Skip Unmodified Files skips the files for which the reference file's modified date has not changed since the last
import.
Important: Skip Unmodified Files is activated by default. Unmodified parts are skipped only if the CAD
project and Structure trees are identical.
3. In the 3D view or in the Structure tree, select the parts you want to update or click to Select all Imported
Parts.
Important: From version 2023 R2, the Modeler Option Use heavyweight mesh with simulations has been
removed from the interface as now it is always activated and so hidden.
Lightweight Characteristics
• The Lightweight import applies to CATIA files, IGES files, and STEP files.
• The object names (default or custom) from CATIA are lost after importing the file into Speos.
• The geometrical set names from CATIA are lost after importing the file into Speos.
• The Lightweight import requires the Parasolid modeler, and so uses the *.scdocx extension.
• The Lightweight import uses the SpaceClaim Reader.
The SpaceClaim Reader allows you to import CATIA files in lightweight or heavyweight using the SpaceClaim
importer.
For more information on the SpaceClaim Reader, refer to the section Workbench Options of the File Import and
Export Options page of the SpaceClaim documentation.
• Bodies imported in lightweight are not editable.
To edit a lightweight body, you have to switch it from lightweight to heavyweight using Toggle to heavyweight,
which will load all the data of the body.
• When data from bodies are switched from lightweight to heavyweight, the initialization of simulations can take
more time than usual.
• You cannot use Save As on lightweight bodies as they are not editable. Only heavyweight bodies can be saved as.
Warning: Do not confuse the Lightweight import which applies to bodies with the Lightweight function
from SpaceClaim which applies to the root component.
Select the import configuration to use: SpaceClaim Reader, Workbench Reader, Workbench Associative Interface.
The option Always use SpaceClaim's reader when possible is activated by default.
4. From Start, open Ansys 20XX RX > CAD Configuration Manager 20XX RX
5. In the CAD Selection tab, check Catia V5 and select Reader (CAD installation not required).
6. Click Next.
7. In the CAD Configuration tab, click Configure Selected CAD Interfaces.
8. Click Exit.
CATIA files are ready to be imported in heavyweight.
Note: When data from bodies are switched from lightweight to heavyweight, the initialization of
simulations can take more time than usual.
The body is loaded in heavyweight and now you can edit it.
5: Materials
Materials are the entry point of optical properties and texture creation/application. The following section describes both
processes.
• Volume Optical Properties (VOP) define the behavior of light rays when they are propagated inside a body. You can
set VOP from the interface or build more complex materials with the User Material Editor.
• Surface Optical Properties (SOP) define the behavior of light rays when they hit the surface of a body. You can
set SOP from the interface or build more complex materials with Surface Optical Property Editors like the Simple
Scattering surface editor.
Priority rule
Optical Properties are applied on geometries through a multilayer system. The last layer always prevails during
simulation.
A face optical property always overwrites the surface optical property of an object.
Related tasks
Creating Optical Properties on page 71
Creating Volume Optical Properties (VOP) and Surface Optical Properties (SOP) on solids allows you to determine
the light rays' behavior when they hit or are propagated in the geometry.
Creating Face Optical Properties on page 73
Creating Face Optical Properties (FOP) allows you to isolate certain faces of a geometry to assign specific optical
properties to these faces.
Graded Material
The Graded Material file describes the spectral variations of refractive index and/or absorption as a function of the
position in space.
Note: The data for the refractive index variation and the absorption variation generally come from Fluent.
As each wavelength propagates differently in a medium according to the refractive index and/or the absorption,
you need for each wavelength:
• a 3D table representing the refractive index variation in space (V list)
• a 3D table representing the absorption variation in space (W list)
Each n in a table represents a 3D zone with its own refractive index. So, a wavelength propagates according to the
refractive index of the 3D zone n. The same goes for absorption.
Figure 2. Direct simulation showing the propagation result at different locations in the graded material
GradedIndexMaterial
Speos provides you with a Python script library, speos_GradedIndexMaterial.py, which includes the
generic GradedIndexMaterial class to:
• access the content of an existing material (openFile, parseFile)
• create the data model and save a new file (createDataModel, saveFile)
The GradientIndexMaterial class is an example that fills the data model with a gradient refractive index variation.
Download the script file speos_GradedIndexMaterial.zip
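A minimal usage sketch (the file name is illustrative; openFile is the accessor listed above, and the script library is assumed to be importable from your working directory):

import speos_GradedIndexMaterial

# open an existing graded material file and fill the data model
material = speos_GradedIndexMaterial.GradedIndexMaterial()
material.openFile("my_material.gradedmaterial")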
GradientRefractiveIndexTable
Speos provides you with a Python script library, speos_GradientRefractiveIndexTable.py, which includes the
GradientRefractiveIndexTable class to:
• generate the table with the gradient refractive index variation (GetGradientRefractiveIndexTable)
• get the refractive indexes with their position (GetRefractiveIndex)
Download the script file speos_GradientRefractiveIndexTable.zip
General Workflow
To create a graded material file:
1. You need the refractive index variation and/or the absorption variation, which comes directly from Fluent. To be
processed, the variation must be set as a list.
2. Once you have your list of refractive index and/or absorption, create the python script to generate the graded
material thanks to the GradedIndexMaterial class and the GradientIndexMaterial class if you want a gradient
material.
Note: When defining the sampling of refractive index or absorption, the value must be strictly greater than
1.
Gradient material is a graded material whose variation is constant. Thus, the script uses the GradientIndexMaterial
class and the GradientRefractiveIndexTable class to generate the graded material file.
Download the example file GradientRefractiveIndexMaterial_example.zip
Workflow Example
1. Define the dimensions of the material and the sampling.
# Dimensions
xSize = 1 # size along X direction in mm
ySize = 1 # size along Y direction in mm
zSize = 10 # size along Z direction in mm
xSampling = int(xSize / 0.1)
ySampling = int(ySize / 0.1)
zSampling = int(zSize / 0.1)
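The spectrum definition (step 2, not reproduced in this excerpt) can be sketched as a simple list of wavelengths; the values below are hypothetical and stand in for the data feeding the spectrumTable used in step 5:

# Spectrum (hypothetical sampling -- adapt to your spectral range)
wavelengths = [400.0, 550.0, 700.0]  # wavelengths in nm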
3. For a gradient material, to create the refractive index table, use the dedicated GradientRefractiveIndexTable
class from speos_GradientRefractiveIndexTable.py.
• As you can see in the GetGradientRefractiveIndexTable function, the table must be sized:
dimensions = IllumineCore.Extent_uint_3()
dimensions.Set(0, nb_x) # sampling value along X direction
dimensions.Set(1, nb_y) # sampling value along Y direction
dimensions.Set(2, nb_z) # sampling value along Z direction
4. Since the refractive index and absorption do not vary with the wavelengths, duplicate the different tables:
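A minimal sketch of this duplication, assuming refractiveIndexTable and absorptionTable are the tables built in the previous steps (the spectral table names match those used in step 5):

spectralRefractiveIndexTable = []
spectralAbsorptionTable = []
for wavelength in wavelengths:
    # reuse the same tables for every wavelength, as the values do not vary spectrally
    spectralRefractiveIndexTable.append(refractiveIndexTable)
    spectralAbsorptionTable.append(absorptionTable)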
5. Use GradedIndexMaterial class to create the data model and save the new file:
materialTest = speos_GradedIndexMaterial.GradedIndexMaterial()
materialTest.createDataModel(refractiveIndexDimensions, absorptionDimensions,
spectrumTable, spectralRefractiveIndexTable, spectralAbsorptionTable)
materialTest.saveFile(filepath)
Basic Methods

OpenFile
Description: Open the file and fill the data model.
Syntax: GradedIndexMaterialFile.OpenFile(Optis::IO::Path)
• Optis::IO::Path: path and filename; should end with .gradedmaterial
Refractive Index

GetRefIndSizeX
Description: Get the dimension in mm along the X direction for refractive index data.
Syntax: double GradedMaterialFile.GetRefIndSizeX()
Absorption
Name: GetAbsorptionSizeX
Description: Gets the dimension in mm along the X direction for absorption data.
Syntax: double GradedMaterialFile.GetAbsorptionSizeX()

Name: GetAbsorptionSizeY
Description: Gets the dimension in mm along the Y direction for absorption data.
Syntax: double GradedMaterialFile.GetAbsorptionSizeY()

Name: GetAbsorptionSizeZ
Description: Gets the dimension in mm along the Z direction for absorption data.
Syntax: double GradedMaterialFile.GetAbsorptionSizeZ()

Name: GetAbsorptionSamplingX
Description: Gets the sampling value along the X direction for absorption data.
Syntax: uint GradedMaterialFile.GetAbsorptionSamplingX()

Name: GetAbsorptionSamplingY
Description: Gets the sampling value along the Y direction for absorption data.
Syntax: uint GradedMaterialFile.GetAbsorptionSamplingY()

Name: GetAbsorptionSamplingZ
Description: Gets the sampling value along the Z direction for absorption data.
Syntax: uint GradedMaterialFile.GetAbsorptionSamplingZ()

Name: GetAbsorptionNbSamples
Description: Returns the number of samples in the file for absorption data.
Syntax: uint GradedMaterialFile.GetAbsorptionNbSamples()

Name: GetAbsorptionTable
Description: Gets the absorption table of a specific sample.
Syntax: Optis::Table<double, 3> = GetAbsorptionTable(uiSample)
• Optis::Table<double, 3>: absorption table
• uiSample: index of the sample
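As a hypothetical read-back sketch (an assumption: the Python bindings expose a GradedIndexMaterialFile object with the method names listed above):
gradedMaterialFile = speos_GradedIndexMaterial.GradedIndexMaterialFile()  # assumed binding
gradedMaterialFile.OpenFile("myMaterial.gradedmaterial")
print(gradedMaterialFile.GetRefIndSizeX())            # dimension in mm along X
print(gradedMaterialFile.GetAbsorptionSamplingZ())    # sampling value along Z
for i in range(gradedMaterialFile.GetAbsorptionNbSamples()):
    table = gradedMaterialFile.GetAbsorptionTable(i)  # one 3D absorption table per sample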
Important: The version v2 is in BETA mode for the current release (filenames with v2 suffix).
The provided sample codes have been tested and validated on:
• Windows 10 with Visual Studio 2019
• CentOS 7 with GCC 8.2
Note: The sample code aims to be compiler-agnostic, so it should also work with other compilers.
C++ Example
The first example, in the example-plugin folder, is developed in C++ and represents a surface with Lambertian
reflection and transmission.
In the example-plugin.cpp file, you can find useful comments to help you create your plugin and to understand
how Speos simulations use it.
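If you prototype the physics before porting it to the plugin, the Lambertian lobe itself is standard optics; a minimal, self-contained sketch (not the plugin API) of cosine-weighted direction sampling:
import math
import random

def sample_lambertian_direction():
    # cosine-weighted sampling of the hemisphere around the local normal (0, 0, 1):
    # directions are drawn with probability proportional to cos(theta), which is
    # the Lambertian reflection/transmission lobe
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

print(sample_lambertian_direction())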
Python Example
The second example, in the python-plugin folder, is developed in C/C++/Python. It is a proof of concept that explains
how to implement a bridge between C and Python, so that the surface state plugin can be developed in Python.
Note: Python multithreading is strongly impacted by the Global Interpreter Lock, which can drastically reduce
the scalability of Speos simulations. However, the great advantage of this Python-based plugin is that you do
not need to rebuild the plugin when prototyping.
Test Application
Speos provides, in the test folder, an application that mimics the way Speos loads the plugin and delivers statistics
about the plugin.
Note: When creating the plugin, make sure to use the speos-plugin.h. This is the header corresponding
to the plugin interface. It defines the structures and functions that must be exported by the surface state
shared library, and that are necessary for the plugin and Speos to communicate.
A *.sop file has been compiled; it is a zip file containing a *.dll file. You can use this *.sop file as input in a
simulation to be run on Windows only.
Once you have created and tested your plugin, you can use it to create Optical Properties, Face Optical Properties, or
Thermic Sources.
3. In the 3D view, click , select the geometries on which to apply optical properties and click to validate.
Tip: Right-click a Material and click Select associated geometry to highlight in the 3D view the geometries
on which the material is applied.
Note: For more information about texture application, see Texture Mapping.
Note: Setting the Volume properties to None means that the body is considered as a surface and not as
a volume.
• Select Mirror for a perfect specular surface and adjust the Reflectance if needed.
• Select Optical polished for a transparent or perfectly polished material (glass, plastic).
• Select Library and click Browse to select and load a SOP file.
If you want to modify the SOP file, click Open file to open the Surface Optical Property Editor.
Tip: To define a surface which is polarized, select a .polarizer file instead of a .coated file as coated
surfaces are isotropic.
• Select Plug-in, and click Browse to select a custom-made *.sop plug-in as File, as well as the Parameters file for
the plug-in.
The optical properties are now created on the solid(s). You can edit these properties at any time.
Related concepts
Optical Properties Overview on page 60
Optical Properties define how light rays interact with geometries in the CAD. Objects have volume and surface optical
properties.
Related tasks
Creating Face Optical Properties on page 73
Creating Face Optical Properties (FOP) allows you to isolate certain faces of a geometry to assign specific optical
properties to these faces.
Note: For more information about texture application, see Texture Mapping.
5. In the 3D view, click , select the faces on which to apply optical properties and click to validate.
Tip: Right-click a Material and click Select associated geometry to highlight in the 3D view the geometries
on which the material is applied.
Note: The selection of faces from an imported *.obj file or from a faceted geometry is not compatible with
the Face Optical Properties.
• Select Mirror for a perfect specular surface and adjust the Reflectance if needed.
• Select Optical polished for a transparent or perfectly polished material (glass, plastic).
• Select Library and click Browse to select and load a SOP file.
If you want to modify the SOP file, click Open file to open the Surface Optical Property Editor.
Tip: To define a surface which is polarized, select a .polarizer file instead of a .coated file as coated
surfaces are isotropic.
• Select Plug-in, and click Browse to select a custom-made *.sop plug-in as File, as well as the Parameters file for
the plug-in.
The optical properties are now created on the face(s). You can edit these properties at any time.
Related concepts
Optical Properties Overview on page 60
Optical Properties define how light rays interact with geometries in the CAD. Objects have volume and surface optical
properties.
Related tasks
Creating Optical Properties on page 71
Creating Volume Optical Properties (VOP) and Surface Optical Properties (SOP) on solids allows you to determine
the light rays' behavior when they hit or are propagated in the geometry.
3. In the sml file field, click and save the material library *.sml file in a dedicated directory.
4. If you want to save all related inputs in a dedicated folder at the location of the library or at a custom location:
a) Check Copy input files
Note: Disabling Copy input files keeps the relative path to input files as described in their respective
material definitions.
5. If you want to keep the structure of the Material node of the Speos tree, check Keep material folder structure.
Warning: Keep material folder structure is an option available from version 2023 R1. A *.sml file
created in 2023 R1 or later cannot be opened in a release prior to 2023 R1. However, *.sml files created
before 2023 R1 can still be opened in 2023 R1 or subsequent versions.
6. Click OK.
The Material Library is saved and can now be opened in the interface via the Libraries tab.
1. In the Light Simulation tab, click Material Libraries to show (or hide) the Libraries panel.
2. In the Speos interface, from the Libraries tab, click Open a library file .
Note: You can open several libraries; they appear in the drop-down list, and you can switch from one library
to another.
Tip: Right-click a Material and click Select associated geometry to highlight in the 3D view the geometries
on which the material is applied.
2. In the Libraries panel, from the opened material library, right-click a corresponding material (VOP/SOP or FOP)
and click Apply material to geometry.
3. If a material is already applied on the element(s) on which you want to apply the selected material, a prompt
message asks you if you want to replace it. Click OK to replace the material.
A new material is created under the Materials list in the Simulation tree and associated to the bodies or faces selected.
Note: You can also drag and drop a material in the tree. This material is not associated to a geometry and
remains unused until you apply it.
Tip: Right-click a Material and click Select associated geometry to highlight in the 3D view the geometries
on which the material is applied.
2. In the Speos Tree, use [CTRL + left-click] to select the material to apply, then right-click the material.
[CTRL + left-click] on the material avoids losing the selection of the element(s) on which to apply the material.
3. Click Apply material to geometry.
4. If a material is already applied on the element(s) on which you want to apply the selected material, a prompt
message asks you if you want to replace it. Click OK to replace the material.
To replace a material:
Tip: Right-click a Material and click Select associated geometry to highlight in the 3D view the geometries
on which the material is applied.
1. Select a material from the Library panel or the Material list in the Simulation tree.
2. Drag the material X and drop it on a material Y that you want to replace.
The geometries that were using the material Y now use the material X. The material X retrieves the geometries in its
definition, and the material Y becomes unused.
To convert a FOP:
Tip: Right-click a Material and click Select associated geometry to highlight in the 3D view the geometries
on which the material is applied.
1. Select a SOP/VOP material from the Library panel or the Material list in the Simulation tree.
2. Drag the SOP/VOP material on the FOP material to convert.
The Create new FOP window appears.
3. Click OK to create the new FOP.
The new FOP is created based on the same material properties as the SOP/VOP material used, and is assigned to
the faces on which the converted FOP was applied. The converted FOP becomes unused.
Note: The Locate Material tool is in BETA mode for the current release.
Important: The visualization of the material tool and the selection of geometries are independent and
should not be mixed up. When you highlight materials, geometries are not selected. Specific options are
dedicated to the selection of geometries.
Material Highlight
Materials applied on bodies appear highlighted in the 3D view according to the color
visualization selected.
Materials applied on faces appear highlighted with black edges in the 3D view according
to the color visualization selected.
Faceted geometries with no material appear with red circles highlighted on their vertices.
Geometry Selection
Besides highlighting the materials, the Locate Material tool allows you to quickly select the geometries on which
materials are applied, or select the geometries on which no materials are applied.
Note: The Locate Material tool is in BETA mode for the current release.
1. In the Light Simulation tab, click to open the Locate Material tool.
2. In the 3D view, click Search for applied material to activate the Locate Material tool.
According to the Visualization options defined, either all materials applied on geometries are highlighted in the
scene and/or geometries with no material are highlighted, or nothing is highlighted.
For more information on the color code, refer to Understanding the Locate Material Tool.
Tip: If you want to highlight the materials applied on a geometry without using/opening the Locate
Material tool, right-click a geometry and click Locate Material.
• Select one or several materials in the Speos tree and click to select their associated geometries.
The associated geometries are selected and highlighted in the 3D view and in the Structure tree.
Tip: If you want to select geometries without using/opening the Locate Material tool, right-click a
material (or a selection of materials) and click Select associated geometries.
Tip: Create a Named Selection to group these geometries and apply adequate materials.
• Select a geometry, then use [CTRL + left-click] on a material in the tree, then click Apply material to geometry
in the 3D view.
If a material is already applied on the element(s) on which you want to apply the selected material, a prompt
message asks you if you want to replace it. Click OK to replace the material.
Note: The Locate Material tool is in BETA mode for the current release.
Tip: You can directly right-click a material and click Set visualization color to open the Color tool, or
you can right-click a geometry when the Locate Material tool is activated and click Material visualization
color.
Note: The Locate Material tool is in BETA mode for the current release.
4. To apply the modifications, in the 3D view, click Search for applied material .
The modifications defined in the Visualization options are visible in the 3D view.
Note: A typical workflow of a texture mapping creation is described in the following mapping process
overview.
UV Mapping Process
The Mapping is projected on the geometry using the UV mapping process. UV mapping is the process of wrapping
a 2D image on a 3D mesh. U and V are used to denote the axes of the plane because X,Y and Z are already used for
the coordinates of the 3D object. "U" denotes the horizontal axis of the mapping projection and "V" denotes the
vertical axis of the mapping projection.
The mapping can be planar, spherical, cylindrical or cubic.
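For the planar case, the principle can be sketched in a few lines (a hypothetical helper, not Speos code): the (u, v) coordinates of a 3D point are its components along the two mapping axes.
def planar_uv(point, origin, u_axis, v_axis):
    # project a 3D point onto the mapping plane spanned by u_axis and v_axis
    # (assumed unit-length and orthogonal); returns (u, v) in mm
    d = [p - o for p, o in zip(point, origin)]
    u = sum(di * ai for di, ai in zip(d, u_axis))
    v = sum(di * ai for di, ai in zip(d, v_axis))
    return (u, v)

print(planar_uv((3.0, 2.0, 5.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # (3.0, 2.0)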
Related concepts
Texture Mapping Process Overview on page 89
This page illustrates a standard texture mapping process.
Understanding UV Mapping on page 87
This page describes the UV mapping principle along with the different types of mapping available in Speos.
In Speos, a UV mapping feature corresponds to one mapping type and is linked to a unique set of geometries that
you define. A geometry cannot be included in different UV mapping features. It is necessarily stored in one feature.
If you want to apply two textures on a same surface and want them to be projected or rendered differently, you can
create new UV maps under the same UV mapping feature.
For example, if you want to create a texture mapping on a surface with a texture image with a planar mapping type
and a normal map with a specific projection, you need to create a UV mapping feature containing your surface and
two UV maps under it containing your two mappings.
Related concepts
Understanding Texture Mapping on page 88
This page describes how the texture mapping multi layer system works and how the rays interact with it.
Texture Mapping Overview on page 85
Texture mapping is a process that allows you to simulate material texture to improve realism. A texture mapping
can be applied on a surface, a face, an outer surface of bodies or on geometry groups.
Texture Mapping Process Overview on page 89
This page illustrates a standard texture mapping process.
• If α = 255, the pixel of the texture file is opaque and the ray interacts only with the Surface State of the layer Ln.
• If α = 0, the pixel is transparent and the ray does not consider optical properties from the Layer Ln, it goes to the
layer Ln-1.
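A sketch of this traversal rule, assuming the binary alpha values described above (names are illustrative):
def layer_hit_by_ray(layers, alphas):
    # layers and alphas are ordered bottom-up (L1..Ln); walk from the top layer
    # down: an opaque texel (alpha == 255) stops the ray at that layer, a
    # transparent texel (alpha == 0) lets it fall through to the layer below
    for layer, alpha in zip(reversed(layers), reversed(alphas)):
        if alpha == 255:
            return layer
    return None  # every texel transparent: the base surface is reached

print(layer_hit_by_ray(["L1", "L2", "L3"], [255, 0, 0]))  # L3 and L2 transparent -> "L1"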
Related concepts
Understanding UV Mapping on page 87
This page describes the UV mapping principle along with the different types of mapping available in Speos.
Texture Mapping Process Overview on page 89
This page illustrates a standard texture mapping process.
Related information
Creating a Texture Mapping on page 94
This section shows how to create a texture mapping over one or several geometries.
UV Mapping Creation
UV mapping consists in assigning a mapping type to the geometries you want to apply a texture to.
The "feature" level allows you to select the geometries on which to apply
the mapping.
One UV mapping feature can only contain one unique set of geometries. A
geometry cannot be selected into two different UV Mapping features.
The UV map level allows you to define the mapping type/technique used
to apply the texture (planar, spherical, etc.).
You can create as many UV maps as you need under a UV Mapping feature.
Each UV map has an index that depends on its position under the feature.
For example if you need to superimpose an image texture and a normal
map on a same surface but you want to apply them differently, you can
create two UV maps under the same UV Mapping feature.
Material Creation
Materials allow you to define textures.
The material level allows you to select the geometries on which to apply the texture.
Then in each material created, you can activate the Use texture option.
Texture creation
Activating the texture creates another Surface layer on which you can define an image texture, normal map and/or
apply specific optical properties.
In the surface layer, you need to define the texture properties and a UV map index. This index allows you to indicate
which UV map layer should be used to apply the texture.
Algorithm Check
Once the UV mapping and materials are created, the algorithm compares the geometries contained in the material
with the ones selected in the UV mapping features.
When a geometry is detected in both elements, the UV map index defined in the surface layer is used to select the
correct UV map to apply the texture.
Related concepts
Understanding UV Mapping on page 87
This page describes the UV mapping principle along with the different types of mapping available in Speos.
Understanding Texture Mapping on page 88
This page describes how the texture mapping multi layer system works and how the rays interact with it.
Related information
Creating a Texture Mapping on page 94
This section shows how to create a texture mapping over one or several geometries.
Description
Texture mapping preview allows you to:
• immediately access the texture preview on geometries while editing the UV mapping properties. This way you can
have a dynamic overview of the mapping and understand if a textured material is well defined.
• see the texture size on geometries while setting the texture parameters.
• see the alignments between textures on multiple objects when setting the UV mapping.
• see the UV orientation when setting anisotropic properties.
Texture Mapping Preview:
• can be activated permanently.
• is automatically displayed when you are in the UV map definition or in the Surface Layer definition, until you exit
the definition.
The preview of the textures corresponds to the simulation results and the Live Preview.
If no texture or material is defined, a default rendering (checker texture) is displayed for the texture or the normal
map to help you set the UV map:
• The arrow indicates the vertical axis to help you orientate anisotropic materials.
• The checker texture measures 100x100 mm, and each colored square measures 10x10 mm. This can help you
assess the distortions due to the projection on the geometries.
Example
The following example is a combination of two textured materials:
• Textured Material 1 is applied on the solid and composed of one Surface Layer
• Textured Material 2 is applied on a face and composed of two Surface Layers
Note: The Texture image of both Surface Layers 1 is a real image that looks like the default rendering when
no texture is applied.
To create a UV Mapping:
1. From the Light Simulation tab, click UV Mapping .
2. In the 3D view, click , select the geometries on which to apply the current mapping and click to validate.
While editing the UV Mapping, the texture set or the checker texture is displayed in the 3D view on the associated
geometries and is updated upon modifications.
Tip: If you need a certain mapping type (a planar mapping for example) for several geometries, select
them all and create only one UV mapping that will be used for each geometry.
• Planar
• For Origin, click and select the middle point of the image texture you want to apply.
• For Projection direction, click and select a line defining the direction in which the image texture should be projected on the plane.
• For Top direction, click and select a line defining the orientation of the image texture on the plane.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to
the axis in the 3D view. Please refer to the axis in the 3D view.
• Or click and select a coordinate system to autofill the Axis System.
• Cubic
• For Origin, click and select the middle point of the image texture you want to apply.
• For Projection direction, click and select a line defining the direction in which the image should be projected on the plane.
• For Top direction, click and select a line defining the orientation of the image texture on the plane.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to
the axis in the 3D view. Please refer to the axis in the 3D view.
• Or click and select a coordinate system to autofill the Axis System.
• Spherical
• For Projection direction, click and select a line defining the direction in which the texture should be projected.
• For Top direction, click and select a line defining the orientation of the image texture on the plane.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to
the axis in the 3D view. Please refer to the axis in the 3D view.
• Or click and select a coordinate system to autofill the Axis System.
Tip: If the geometry was built with the SpaceClaim modeler, you can obtain the perimeter by clicking
• Cylindrical
• For Projection direction, click and select a line defining the direction in which the texture should be projected.
• For Top direction, click and select a line defining the orientation of the image texture on the plane.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to
the axis in the 3D view. Please refer to the axis in the 3D view.
• Or click and select a coordinate system to autofill the Axis System.
Tip: If the geometry was built with the SpaceClaim modeler, you can obtain the perimeter by clicking
CAUTION: When a cylindrical mapping is applied on a cylindrical geometry, the preview of the texture
mapping may be inconsistent. To avoid such an issue:
a. Open the Properties of the body.
b. Set Tessellation Quality Level to Custom.
c. Set Max edge length to a value consistent with the body size.
4. In U and V sections, respectively denoting the horizontal and vertical axes of the mapping projection:
a) Define the horizontal and vertical positioning of the texture using the U/V Offset.
b) If you want to define a specific scale on U and/or V, adjust the Scale factor.
c) Activate/deactivate the repeatability of the texture on U and/or V axes.
Note: This option is activated by default. Repeating the texture on the geometry ensures that the
entire surface is covered by the texture.
a) Right-click the UV mapping feature and click Insert a new UV map below.
b) Repeat the steps 3 and 4 for the newly created UV map.
Once all your UV mappings are created for your geometries, move on to the texture application .
Related tasks
Applying Textures on page 99
This section gathers the three types of texture application that can be performed in Speos.
2. In the 3D view, click , select the geometries on which to apply the texture and click to validate.
3. In General, set Use Texture to True.
A surface layer is created under the material in the simulation panel and allows you to define the surface properties
of the geometry.
4. Define the volume properties of the geometry.
5. In the surface layer, from the Texture image Type drop-down list, select From file to activate the image texture.
• In File, double-click in the field to browse and load a .jpeg or .png file.
• Define the Image width of the image in mm.
The Image size is calculated proportionally on U and V.
• Define the UV map index you want to use to apply this texture.
The UV map index determines which UV mapping should be used to apply the texture. This index refers to the
UV map or "layer" that should be selected within a UV mapping feature.
Related concepts
Understanding UV Mapping on page 87
This page describes the UV mapping principle along with the different types of mapping available in Speos.
Texture Mapping Process Overview on page 89
This page illustrates a standard texture mapping process.
2. In the 3D view, click , select the geometries on which to apply the texture and click to validate.
3. In General, set Use texture to True.
A surface layer is created under the material in the simulation panel.
• From texture image to create a normal map from a previously defined texture image and define the Roughness
ratio of the normal.
• From image if you do not have a normal map and want to generate a normal map from a .jpeg or .png file.
• From normal map to generate a normal map from a .bmp file.
Related concepts
Understanding UV Mapping on page 87
This page describes the UV mapping principle along with the different types of mapping available in Speos.
2. In the 3D view, click , select the geometries on which to apply the texture and click to validate.
3. In General, set Use texture to True.
A surface layer is created under the material in the simulation panel.
4. In the surface layer, define the surface properties to be applied on the geometry:
a) From the Type drop-down list, select Library.
b) In File, double-click in the field to browse and load an .anisotropicbsdf material.
Related concepts
Understanding UV Mapping on page 87
This page describes the UV mapping principle along with the different types of mapping available in Speos.
Texture Mapping Process Overview on page 89
This page illustrates a standard texture mapping process.
Note: Do not confuse the permanent preview with the preview displayed when you are in the UV map
definition or in the Surface Layer definition, which is displayed only until you exit the definition.
The Surface Layer must be selected, otherwise you cannot activate the texture mapping preview.
2. Right-click the Surface Layer and select the texture to preview:
Note: The preview never deactivates itself; set it to none if you no longer want the preview to be
displayed.
• Texture
When Texture is activated, the Surface Layer is flagged with a black dot in the tree.
• Normal map
When Normal map is activated, the Surface Layer is flagged with a purple dot in the tree.
Note: If a textured material has several surface layers, you can only preview one surface layer at a time.
Note: If no texture or material is defined, a default rendering is displayed for the texture or the normal
map to help you set the UV map.
Note: If you have not deactivated a preview and you saved the project, the preview is displayed at the
next opening of the project.
The previews of the surface layer textures are now permanently activated until you deactivate them.
None
With None, the simulation result uses both the image texture and the texture mapping
optical properties.
The simulation result also takes into account the grey scale color lightness of the normal
map.
Related tasks
Setting the Texture Normalization on page 105
Texture application can have an impact on the simulations results. To control what is taken into account for simulation,
a texture normalization mode must be selected.
Note: You can only select one texture normalization mode for all texture mappings created in the entire
assembly.
5. Click Close.
The texture normalization is selected and will be taken into account for simulation to determine the rendering of
the texture.
Related concepts
Understanding Texture Normalization on page 104
This page helps to understand the texture normalization and how texture mapping modifies the interaction between
light and faces for all kind of interactions.
Related information
Simulations Overview on page 332
Simulations allow you to give life to the optical system in order to generate results.
5.3. Polarization
Overview
Light is an electromagnetic wave, like radio, radar, X-rays or gamma rays; the difference is a question of
wavelength. A wave is something vibrating: in the case of a piano or a guitar, it is a string. Polarization is the
orientation of the electromagnetic field of a propagating light wave (an electric and a magnetic field which are
vibrating together).
The software only considers the electric field, as the magnetic field can be deduced from it in materials used for
light propagation.
A polarization state is the geometrical trajectory followed by the electric field vector while propagating.
As the polarization state is elliptical, the polarization is defined by the azimuth angle of the ellipse, the ellipticity
and its rotation sense.
Note: Birefringent materials, polarizer surfaces and optical polished surfaces (Fresnel) use the polarization.
Lambertian reflection is depolarizing: it converts polarized light into unpolarized light by randomly changing
its polarization while processing the reflection. However, for Gaussian scattering, the model has been
designed not to depolarize light.
Application
Polarization is the main physical property LCDs work with. LCDs are used together with polarizers.
According to the applied voltage, they rotate the polarization axis of the light by 90°, or not. So when this light
tries to cross the polarizer, it is stopped (black state) when the polarization axis makes a 90° angle with the easy
axis of the polarizer, or transmitted (white state) if this angle is 0°.
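The black and white states described above follow Malus's law for an ideal polarizer; a quick numeric check (standard optics, not a Speos API):
import math

def malus_transmission(angle_deg):
    # Malus's law: fraction of linearly polarized light transmitted by an ideal
    # polarizer whose easy axis makes angle_deg with the light's polarization axis
    return math.cos(math.radians(angle_deg)) ** 2

print(malus_transmission(0.0))   # 1.0 -> white state (transmitted)
print(malus_transmission(90.0))  # 0.0 -> black state (stopped)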
We saw that even the simplest surface quality (optical polished) has an effect on polarization. This is the surface
quality used whenever one deals with a light guide, in automotive (dashboards) or in telephony (to light the keypad,
for example).
Since such devices are using multiple reflections inside their light guides, it is important to have an accurate model
to describe the light behavior on this surface.
It is possible to build a light guide with a birefringent material to achieve a special polarization function: for
example, a backlight for an LCD without any polarizer between the backlight and the LCD, reducing the losses due
to the polarizer.
To create a polarizer:
1. From the Design tab, create the surface and the origin of the polarizer.
2. From the Light Simulation tab, create a material:
a) click Material .
b) In the 3D view, click , select the geometries on which to apply optical properties and click to validate.
The selection appears in Geometries as linked objects.
c) In General, set the Type to Volume & Surface properties.
d) Set Use Texture to True.
A surface layer is created under the material in the simulation panel and allows you to define the surface
properties of the geometry.
Note: You can create a *.polarizer file with the Polarizer Surface Editor.
5. Create a UV mapping:
a) click UV Mapping .
b) In the 3D view, click , select the geometry on which to apply the current mapping and click to validate.
c) In the first UV map, select the mapping Type you need.
d) Define the Origin, Projection Direction and Top Direction according to the mapping type selected.
6: Local Meshing
Local meshing properties allow you to lighten memory resources by creating specific areas of focus on body parts.
Note: For same values of meshing, meshing results can be different between the CAD platforms in which
Speos is integrated.
Note: In Parasolid mode, in case of a thin body, make sure to apply a fixed meshing sag mode and a meshing
sag value smaller than the thickness of the body. Otherwise you may generate incorrect results.
Creating a meshing on an object, a face or a surface allows you to mobilize and concentrate computing power on
one or certain areas of a geometry to obtain a better level of detail in your results. In a CAD software, meshing helps
you to subdivide your model into simpler blocks. By breaking an object down into smaller and simpler pieces such
as triangular shapes, you can concentrate more computing power on them, and therefore improve the quality of
your results. During a simulation, it will no longer be one single object that interprets the incoming rays but a
multitude of small objects.
Warning: If you created a file in version 2021 R1, then migrated to 2021 R2 and changed the values for Sag
/ Step type (when it became Proportional to Body size), these values may not be correct in 2022 R2 when
the document is migrated back to Proportional to Face size. You cannot know whether the values were changed
over the versions.
• Fixed means that the tolerance will remain unchanged no matter the size or shape of the object. The mesh of
triangles will be forced on the object. The sag and maximum step size is, therefore, equal to the tolerance you
entered in the settings.
Note: From 2022 R2, the new default value is Proportional to Face size. Selecting between Proportional
to Face size and Proportional to Body size may slightly affect the result according to the elements meshed.
Note: When setting the meshing to Proportional to Face size, the results may return more faces than
Proportional to Body size. These additional faces should be really small and they should not influence the
ray propagation.
Note: When running a simulation for the first time, Speos caches meshing information if the Meshing mode
is Fixed or Proportional to Body size. This way, when you run a subsequent simulation and you have not
modified the Meshing mode, the initialization time may be a bit faster than the first simulation run.
Sag Tolerance
The sag tolerance defines the maximum distance between the geometry and the meshing.
By setting the sag tolerance, the distance between the meshing and the surface changes. A small sag tolerance
creates triangles that are smaller in size and generated closer to the surface. This will increase the number of triangles
and potentially computation time. A large sag tolerance will generate looser triangles that are placed farther from
the surface. A looser meshing can be used on objects that do not require a great level of detail.
Note: If the Meshing sag value is too large compared to the body size, Speos recalculates it with a Meshing
sag value equal to the body size divided by 128, to better correspond to the body size.
Note: In Parasolid modeler, for a Heavyweight body, the Meshing step value precision decreases when
applying a value below 0.01mm.
A small maximum step size generates triangles with smaller edge lengths. This usually increases the accuracy of the
results.
A greater maximum step size generates triangles with bigger edge lengths.
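To get a feel for how sag and step interact, the sag of a mesh edge over a curved surface follows the standard sagitta formula; a small sketch under that purely geometric assumption:
import math

def sag_for_step(radius_mm, step_mm):
    # sagitta of a circular arc: the maximum distance between the chord
    # (the mesh triangle edge of length step_mm) and the true surface
    half_chord = step_mm / 2.0
    return radius_mm - math.sqrt(radius_mm ** 2 - half_chord ** 2)

# a 10 mm radius surface meshed with 1 mm steps deviates by about 0.0125 mm,
# so a fixed sag tolerance of 0.01 mm would force even smaller triangles
print(round(sag_for_step(10.0, 1.0), 4))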
Angle Tolerance
The angle tolerance defines the maximum angle tolerated between the normals of the tangents formed at each end
of the segments.
Related tasks
Creating a Local Meshing on page 110
Creating a local meshing allows you to identify areas requiring a high level of detail and optimize simulation time
by creating a fine meshing on specific areas only.
Note: For same values of meshing, meshing results can be different between the CAD platforms in which
Speos is integrated.
2. In the 3D view, click , select the geometries on which to apply optical properties and click to validate.
The selection appears in the Geometries list as linked objects.
3. From the Sag Mode drop-down list:
• Select Proportional to Face size to create a mesh of triangles that are proportional to the size of each face of
the object. The sag and step value therefore depend on the size of each face.
• Select Proportional to Body size to create a mesh of triangles that are proportional to the size of the object.
The sag and step value therefore depend on the size of the body.
• Select Fixed to create a mesh of triangles fixed in size regardless of the size of the body or faces.
4. In Sag Value, define the maximum distance between a segment and the object to mesh.
5. From the Step Mode drop-down list:
• Select Proportional to Body size to create a mesh of triangles that are proportional to the size of the object.
• Select Fixed to create a mesh of triangles fixed in size regardless of the size of the body.
The local meshing is created and applied to the selected geometries. The local meshing prevails over the simulation
meshing properties for the selected geometries.
Related concepts
Understanding Meshing Properties on page 108
This page describes the different parameters to set when creating a Meshing and helps you understand how meshing
properties impact performance and result quality.
7: Sources
Sources allow you to virtually create and generate all kinds of light sources. The sources can be used to directly simulate
and validate any lighting system or to simulate unwanted induced effects of light (glare for example).
The light sources are a key component of optical simulation. The Speos Sources feature allows you to model these
light sources and their interaction with an optical system.
A wide variety of sources is available to cover different needs and configurations. Speos allows you to model natural
light or artificial light: LEDs, streetlights, indoor lighting, backlighting, displays, etc.
The Sources can be used to test, adjust or validate lighting systems themselves or can be used as a tool to analyze
the induced effects of light.
In Speos
You can manage all the characteristics of the light source: its power, spectrum, emission pattern, etc. To create a
light source, you have to define several parameters (the parameters vary based on the type of source):
• The power of the source or flux can be defined from the interface or inherited from a file (e.g from a ray file).
• The spectrum of the light source can either be defined from the interface or downloaded from the Ansys Optical
Library or created with the Spectrum Editor.
• The direction of the emission or intensity distribution can be defined from the interface thanks to standard
emission patterns or inherited from an intensity distribution file that contains information about the intensity
profile of the source, that is to say how the source redistributes light in space. These files can be downloaded from
the Ansys Optical Library.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Creation on page 116
You can create all kinds of light sources thanks to different sources types.
The Flux
The flux corresponds to the total energy emitted by a light source.
In photometry, as the light is emitted within the visible spectrum (between 390 and 700 nanometers), the flux is
referred to as a luminous flux (expressed in lumens).
In radiometry, the energy emitted by the light source is referred to as the radiant flux (expressed in watts).
The Intensity
The intensity is the power emitted by a light source in a particular direction per unit solid angle.
The unit of measure is the candela: 1 cd = 1 lm/sr.
The radiant intensity is expressed in watts per steradian (W.sr-1).
The luminous intensity is expressed in lumens per steradian (lm.sr-1).
Solid Angle = Field of view from a specific point covered by an observer or an object.
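For example, a quick unit check using 1 cd = 1 lm/sr:
import math

# a 1000 lm source radiating uniformly over the full sphere (4*pi steradians)
luminous_flux_lm = 1000.0
intensity_cd = luminous_flux_lm / (4.0 * math.pi)
print(round(intensity_cd, 1))  # ~79.6 cd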
• A lambertian emission ensures that the source has a uniform distribution. The source theoretically distributes the
same amount of light in every direction. The source has therefore the same luminance whatever the observation
angle is.
• With a Cos distribution, the intensity follows the cosine law. The higher the order, the narrower the intensity
diagram will appear. You can modify the order of the law to make the rays converge or diverge.
• A gaussian distribution follows a gaussian function and can be symmetric or asymmetric.
• Intensity files are data measured files that provide an accurate intensity profile.
Spectrum
Luminance / Illuminance
Luminance or radiance is the amount of light coming from a surface that the human eye perceives.
Illuminance or irradiance is the amount of visible energy falling upon an object's surface.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Sources Creation on page 116
You can create all kinds of light sources thanks to different sources types.
Note: The purpose of the interactive source is not to model the emission of a real light source (like a LED
or a filament), but to generate specific light rays that can help dimension an optical system but never validate
it. The Interactive Sources are generally created to be used in an Interactive Simulation.
2. From the Type drop-down list, select the start geometry's type.
4. From the Type drop-down list, select the end geometry's type.
6. Edit the Wavelength value according to the light you want to simulate.
The interactive source is created and appears in the Simulation tree and in the 3D view.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Note: Standard ray file format (.ray), standard IES TM25 ray file format (.tm25ray), as well as LightTools®
and TracePro® ray file formats (.ray) are compatible with the Ray File Source and can be used to describe
the emission of a light source.
The file is imported and the flux is inherited from the file.
3. If you do not want to inherit the flux values from the ray file:
Note: Flux expressed in Watt (w) is the radiant energy or radiant power of a light-emitting source.
Flux expressed in Lumen (lm) is the luminous flux or luminous power of a light-emitting source.
c) In Value, specify the luminous flux (lumens) or radiant power (watts) of the source.
Note: If only one flux type (radiometric or photometric) is available in the ray file, you cannot select
another flux type.
If you define an old ray file that does not contain values in lumen, you cannot change the flux unit.
To convert the file to a more recent file format, use the Ray File Editor to get values in lumen.
When loading a ray file, it may not be optimized. In this case, click Optimize ray file for Speos
simulation. If the ray file does not contain any spectrum information, the option will not display until
you define the spectrum of the source.
Ray files generated during a Direct Simulation are automatically optimized.
4. In the 3D view, set the Axis System of the source by clicking to sequentially select one point for the origin
and two lines for X and Y axes or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Please refer to the axis in the 3D view.
If you need to adjust the ray's propagation direction, set Reverse direction to True.
5. If the ray file does not contain any spectrum information, define the Spectrum of the source:
• Select Blackbody to set the temperature of the source in kelvins.
• Select Library and from the drop-down list, click Browse to load a .spectrum file.
If you want to see the file properties or edit the file, from the drop-down list, click Open file to open the
Spectrum Editor.
6. If you want to associate geometries to the ray file source, in the 3D view click the face(s) to be considered as the
exit geometry of the source.
7. In Optional or advanced settings, adjust the Number of Rays and Length of rays to display in the 3D view.
The Ray File Source is created and visible both in the Simulation panel and in the 3D view.
Tip: Sometimes, to save simulation time, it is useful to split a simulation in two parts. The first simulation
can be dedicated to simulate the light propagation in parts with a definitive design (for instance the filament,
the bulb and the socket of a lamp). The second simulation can be dedicated to simulate the light propagation
in parts currently in the design process (for instance a reflector). You can create a Ray File source with a ray
file generated by the first simulation. Then, you can use the ray file source to replace the first part of the
optical system in the second simulation. At each simulation done to optimize the second part of the optical
system, the simulation time dedicated to the ray propagation in the first part is saved. Generally, with this
tip, you can save between 20% and 80% of the simulation time.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Note: The Light Field feature is in BETA mode for the current release.
Optical systems can be composed of sub-optical systems. When you focus on the main optical system, recalculating
the propagation inside those sub-optical systems every time can be time-consuming. This time can be optimized:
the goal is to speed up the simulation by pre-computing the propagation of the sub-optical systems.
To proceed to the pre-calculation of those sub-optical systems, the Light Field feature generates a *.olf (Optical
Light Field) file format thanks to a Light Field sensor, that is then used as a Light Field source in the main optical
system. Thus, the simulation does not have to compute the propagation of the sub-optical system, reducing the
simulation time.
Example: for an LED including chips with a lens on top, the original radiance simulation of the LED (reference
time = T) is compared to the radiance simulation of the Light Field representing the LED (simulation time = 0.43 * T).
General Workflow
1. Create a Local Meshing of the surface to be integrated in the Optical Light Field file.
-Or-
At Step 3, use the Meshing Options of the Direct Simulation to generate the meshing.
Note: As no optical properties are defined on a Light Field meshing, the Light Field is fully absorbing.
2. Create a Light Field Sensor to define how the *.olf file will be generated.
3. Create a Direct Simulation, and if no Local Meshing is applied on the Light Field surface, define the Meshing
Options.
4. Run the Direct Simulation to generate the *.olf file.
5. Create a Light Field Source that uses the generated *.olf file as input.
6. Create and run an Interactive, Direct or Inverse Simulation of the main optical system, using the Light Field Source
as input.
Note: The Light Field feature is in BETA mode for the current release.
• None: does not display information. Only rays are visible.
• Meshing: displays the meshing according to the Meshing settings defined.
• Bounding Box: displays the bounding box of the source without the meshing.
Note: The Light Field feature is in BETA mode for the current release.
• Click to select an origin point.
• Click to select a line defining the horizontal direction.
• Click to select a line defining the vertical direction.
• Or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Please refer to the axis in the 3D view.
3. From the Light Field file drop-down list, click Browse to load an optical light field file *.olf.
4. If the selected optical light field file contains radiometric or photometric data, in Wavelength, select a
spectrum file.
• adjust the Number of rays and Length of rays to display in the 3D view.
• define the Display meshing mode to display in the 3D view.
The Light Field Source is created and appears in the Simulation panel and in the 3D view.
Create and run an Interactive, Direct or Inverse Simulation containing the Light Field Source to benefit from the
Light Field.
Intensity Distribution
The Intensity Distribution describes the emission pattern of a light source. You can choose among different distribution
profiles:
• A Lambertian emission ensures that the source has a uniform distribution. The source theoretically distributes
the same amount of light in every direction and has, therefore, the same luminance whatever the observation
angle is.
• With a Cos distribution, the intensity follows the cosine law. The higher the intensity, the narrower the intensity
diagram will appear. You can modify the order of the law to make the rays converge or diverge.
• A gaussian distribution follows a gaussian function and can be symmetric or asymmetric.
• Intensity files are data measured files that provide an accurate intensity profile. The supported formats are:
º iesna (.ies);
º eulumdat (.ldt);
º XMP maps with conoscopic intensity (.xmp).
Lambertian Distribution
A lambertian source evenly distributes light in every direction of the half space. The deflection angle (θ) corresponds
to the total angle of emission of the light source.
I = A * cos(θ)
A: intensity on the propagation axis; θ: deflection angle
Radiation laws and relative intensity diagram, characteristic of a lambertian source emitting on a half sphere.
A source with a lambertian distribution has the same luminance whatever the observation angle is, as illustrated
below:
Set-up of an emissive source with three radiance sensors; radiance map of the lambertian source set-up above.
The luminance is constant no matter the angle of observation.
I = A * cos^n(θ)
A: intensity on the propagation axis; θ: deflection angle; n: order of the cos law
Radiation laws of the cos function; radiation diagram at 2nd, 3rd, 4th and 5th order compared to a lambertian
distribution. The higher the order, the narrower the intensity diagram will appear.
A source with cos distribution has a luminance varying according to the observation angle, as illustrated below:
Gaussian
The intensity distribution of a source can follow a gaussian distribution.
The Total Angle defines the angle of emission of the light source.
Gaussian distribution laws and relative radiation distribution of a gaussian compared to a lambertian distribution.
FWHM
The Full Width At Half Maximum (FWHM Angle) is used to describe the width of a curve at half its maximum amplitude.
It means that the source reaches half its power potential between the normal of the emitting surface (0°) and the
FWHM.
It allows you to alter the emission profile of the light source.
As illustrated below, a small FWHM value tends to restrain and concentrate the light beam. A large FWHM value
results in a broader, more widespread light emission. If the source is symmetric, then the FWHM Angle is the same
on both axes.
If the source is asymmetric, the FWHM Angle can be edited on X and Y.
Important: For the Normal to UV map to work you need to create a Texture Mapping on the emitting face.
Refer to the Surface Source procedure for more information on how to create a Texture Mapping.
Exit Geometries
The exit geometries represent the geometries present during source measurement (the bulb of a light bulb or the
case of a LED) that could potentially influence the optical behavior or intensity distribution of the source.
Selecting exit geometries allows you to define a new emissive geometry to avoid recalculation of the geometry's
effect on the source.
For example, if you selected an iesna file (.ies) corresponding to a light bulb, the bulb geometry is taken into account
in the data of the iesna file. To avoid the recalculation of the light bulb's effect, the bulb must be selected as the exit
geometry.
Intensity distribution without specific exit geometry. Intensity distribution with lens defined as exit geometry.
Related tasks
Creating a Surface Source on page 128
The Surface Source models the light emission of a source taking into account its physical properties such as the flux,
the spectrum, the emittance and the intensity.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
2. In the 3D view, click , select the exitance/emissive face and click to validate.
3. According to the flux you want to define, from the Type drop-down list:
• Leave the Variable exitance set as False to have a constant ray energy over the surface source and select the
emissive face(s) in the 3D view.
The face(s) appear in the List of Selected Objects. A preview of the ray's distribution and color appears in the
3D view.
If you need to adjust the ray's propagation direction, check Reverse normal.
Note: The selection of faces from an imported *.obj file is not compatible with the Surface Source.
• Set the Variable exitance as True to have a variable ray energy over the surface source depending on the xmp
energy distribution.
• Click in the File field and click Browse to load an xmp file.
If you want to see the file properties or edit the xmp map, click the file's drop-down list and click Open file.
• If you need to adjust the ray's propagation direction, set Reverse Direction to True.
• Set the coordinate system of the XMP map by clicking one point for the origin point and two lines for X and
Y axes or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the
axis in the 3D view. Please refer to the axis in the 3D view.
Note: If you selected a spectral map, you cannot set another spectrum.
If you selected a non-spectral map, you need to specify a spectrum type.
5. In Intensity, set the intensity distribution of the light source. From the Type drop-down list:
• Select Lambertian for a uniform distribution and set the total angle of the surface source's emission.
Note: By default, Total angle is set to 180° so that the source emits on a hemisphere. Output light is
set to 0 cd for deflection angles (θ) bigger than half the total angle.
• Select Cos for a distribution that follows a cosine law at nth order and set the total angle of the surface source's
emission.
• In N, set the order of the cosine law.
• In Total Angle set the angle of emission of the source.
• Select Symmetric Gaussian:
• Set the total angle of emission of the source.
• Set the FWHM angle.
FWHM Angle has the same value for X and Y and is computed on both axes.
• If you want to define different FWHM values on X and Y, select Asymmetric Gaussian:
• Set the total angle of emission of the source.
• Set the FWHM Angle for X and Y.
• In the 3D view, click two lines to define X direction and Y direction.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the
axis in the 3D view. Please refer to the axis in the 3D view.
6. If you selected Library, set the orientation of the source intensity distribution:
• select Axis system and click two lines to define X and Y direction.
• select Normal to surface to define the intensity distribution as normal to the selected surface.
• select Normal to UV map to define the intensity distribution as normal to the selected emissive surface and
its orientation on the emissive surface.
Normal to UV map is particularly useful in case of an asymmetrical intensity distribution as it allows you to
define accurately its orientation on the surface.
Note: When Normal to UV map is used, the intensity distribution preview is defined as Normal to
Surface due to performance issues.
Important: For the Normal to UV map to work you need to create a Texture Mapping on the emitting
face:
a. Create a UV map to apply on the emitting face of the surface source.
b. Create and apply a FOP or a SOP material with Use Texture set to True on the emitting face.
c. As Use Texture is activated, define at least one Surface Layer with a Texture Image on the emitting
face.
Any image is appropriate. The image is required only to consider the UV map on the surface.
d. In the Simulation options (Interactive, Direct, Inverse), in the Geometry tab, check Texture.
7. If you selected Library, you can select exit geometries by clicking them in the 3D view.
• Select Library and from the drop-down list click Browse to load a .spectrum file.
If you want to see the file properties or edit the file, click Open file to open the Spectrum Editor .
Note: If you select a XMP map with spectral conoscopic intensity, the spectral information of the map
is displayed in the Spectrum group box.
10. If you are using a variable exitance, click Compute to apply the XMP file to the surface source.
The Surface Source is created and should appear in the 3D view and in the Simulation panel.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Understanding the Parameters of a Surface Source on page 122
This page describes the parameters to set when creating a Surface Source.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Contrast Ratio
The Contrast Ratio is a characteristic of any display. It corresponds to the ratio of the luminance of the brightest
pixel (white color) to that of the darkest pixel (black color).
The higher the contrast ratio, the better the colors will appear.
An Infinite Contrast Ratio considers the brightest pixel at 255 255 255 and the darkest pixel at 0 0 0.
Standard contrast ratios range from 500:1 to 1000:1 for an LCD.
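As a quick numeric illustration (a minimal Python sketch, not part of Speos), the darkest-pixel luminance follows directly from the white-pixel luminance and the contrast ratio:

# Minimal sketch: black-level luminance implied by a contrast ratio.
# The function name and values are illustrative, not Speos parameters.
def black_luminance(white_cd_m2: float, contrast_ratio: float) -> float:
    return white_cd_m2 / contrast_ratio

# A 500 cd/m2 display with a 1000:1 contrast ratio:
print(black_luminance(500.0, 1000.0))  # 0.5 cd/m2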
Intensity Distributions
The Intensity Distribution describes the emission pattern of a light source. You can choose among different distribution
profiles.
The following image shows the intensity diagram for a Lambertian law (blue curve), a cosⁿ(θ) law (purple curve) and
a Gaussian law (yellow curve).
Lambertian
The simplest model of light distribution is the Lambertian model. This model ensures that the source has a uniform
distribution, with equal probability of emission in every direction.
The Total angle of emission of the source is by default set to 180° so that the source emits on a hemisphere.
The intensity formula for the Lambertian model is I = cos(θ).
Cos
A Cos distribution follows a cosine law (Lambert's cosine law): I = cosⁿ(θ). With a Cos distribution, you can modify
N (the order of the cosine law) to alter the intensity so that the rays converge or diverge.
Gaussian
A Gaussian distribution follows a Gaussian function and can be symmetric or asymmetric.
The intensity formula for Gaussian is I = exp(-(θ/a)²), where a is calculated so that the FWHM (Full Width at Half
Maximum) angle of the Gaussian matches the one you define.
The Full Width At Half Maximum (FWHM Angle) is used to describe the width of a curve at half its maximum amplitude.
It means that the source reaches half its power potential between 0° and the FWHM you define.
It allows you to alter the emission profile of the light source.
A small FWHM value tends to restrain and concentrate the light beam. A large FWHM value results in a broader, more
widespread light emission.
If the source is symmetric, then the FWHM Angle is the same on both axes.
If the source is asymmetric, the FWHM Angle can be edited on X and Y and an axis can be defined. Defining the axis
is optional for an asymmetric gaussian when FWHM values on X and Y are identical.
The axis can be global or local:
• Global axis: The orientation of the intensity diagram is related to the axis system.
• Local axis: The orientation of the intensity diagram is related to the normal at the surface.
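For reference, the three laws above can be evaluated with a short Python sketch (illustrative only; Speos evaluates these internally). The conversion from the user-supplied FWHM angle to the Gaussian parameter a follows from solving exp(-((FWHM/2)/a)²) = 1/2:

import math

# Illustrative evaluation of the intensity laws described above.
# Angles are in degrees; the returned values are relative intensities.
def lambertian(theta_deg):
    return math.cos(math.radians(theta_deg))

def cos_n(theta_deg, n):
    return math.cos(math.radians(theta_deg)) ** n

def gaussian(theta_deg, fwhm_deg):
    # a is chosen so that the intensity drops to 1/2 at theta = FWHM/2.
    a = math.radians(fwhm_deg) / 2.0 / math.sqrt(math.log(2.0))
    return math.exp(-(math.radians(theta_deg) / a) ** 2)

# At theta = FWHM/2, the Gaussian reaches half its maximum:
print(round(gaussian(15.0, 30.0), 3))  # 0.5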
Related tasks
Creating a Display Source on page 135
The Display Source allows you to model the light emission of a display (LCD, control panel etc.) taking into account
its physical properties such as the flux, the spectrum, the emittance and the intensity.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Important: This feature is only available under a Speos Premium or Enterprise license.
Note: If you do not see the image correctly, adjust the axis system. The image must be projected on Z
(normal to the display). This ensures that what you see in the 3D view corresponds to the XMP result.
a) If you want light from all space, set Mirror extent to True to link the start and end values.
b) Edit the X and Y coordinates of the start and end points of the display either by entering the values or by using
the manipulators in the 3D view.
4. In Flux, specify the luminance of the brightest pixel (white pixel). The luminance is calculated according to this
reference pixel.
5. If you want to edit the Contrast Ratio of the display, set Infinite contrast ratio to False.
Tip: The contrast ratio is a property of display systems. It is the ratio between the brightest whites and
the darkest blacks.
Standard values range from 500:1 to 1000:1 for an LCD.
6. In Intensity, set the intensity distribution of the display source. From the Type drop-down list:
• Select Lambertian for a uniform distribution and in Max Angle set the angle of emission of the display source.
Note: By default, Max Angle is set to 180° so that the source emits on a hemisphere.
• Select Cos for a distribution that follows a cosine law of order n and set the total angle of the display source's
emission.
In N, set the order of the cosine law.
• Select Symmetric Gaussian and set the FWHM Angle.
FWHM Angle has the same value for X and Y and is computed on both axes.
• If you want to define different FWHM values on X and Y, select Asymmetric Gaussian:
• Set the FWHM angle for X and Y.
• In the 3D view, click two lines to define X direction and Y direction.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the
axis in the 3D view. Refer to the axis in the 3D view.
7. In Color Space, from the Type drop-down list, select which color space model to use according to your
needs and your screen's capabilities.
• Select sRGB to use the standard and most commonly used RGB based model.
• Select D65 to use a standard daylight illuminant that provides accurate color perception and evaluation.
• Select D50 to use a natural, horizon light.
• Select C to use an average daylight illuminant.
• Select E to use an illuminant that gives equal weight to all wavelengths.
• Select User defined if you want to edit the x and y coordinates of the white point (the reference point of the
model).
Note: For more information about color models or white points of standard illuminants, see
Colorimetric illuminants.
8. If you selected User Defined RGB from the Color Space drop-down list, load a spectrum file for each primary
color.
If you want to modify or create a .spectrum file, click Open file to open the Spectrum Editor.
Tip: You can also download spectrum files from the Optical Library.
9. To orientate the image, set its Axis system by clicking one point for the origin point and two lines for X and Y
directions or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
If you need to adjust the propagation direction of the rays, use Reverse direction.
The Display Source is created and appears in the Simulation panel and in the 3D view.
Related concepts
Understanding the Parameters of a Display Source on page 131
This page describes the parameters to set when creating a Display Source.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
3. If you do not want to inherit the flux from the file, set Flux from intensity file to False and edit the flux in lumens,
watts or candelas.
4. From the SpectrumType drop-down list:
5. Set the Axis System of the source by clicking in the 3D view and select one point for the origin and two lines
for X and Y axes or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
If you need to adjust the ray's propagation direction, use Reverse Direction on X and / or Y.
• Adjust the Number of Rays and Ray Length to display in the 3D view.
• If you want to display the intensity diagram in the 3D view, set the option to True.
This is a visualization parameter. It displays the intensity diagram of the intensity distribution file used to define
the source. If the default size is not big enough, you can increase it to observe the 3D diagram.
The Luminaire Source is created and appears in the Simulation panel and in the 3D view.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Note: The selection of faces from an imported *.obj file is not compatible with the Thermic Source.
Note: You cannot edit the flux values; they are automatically computed.
The flux depends on the blackbody temperature and the absorption of the surface optical properties,
and is determined by integrating the emittance over the geometry of the source.
6. From the Intensity Type drop-down list, select the intensity distribution of the source:
7. In Optional or advanced settings, adjust the Number of rays and Length of rays (in mm) to display in the
3D view.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related tasks
Creating a Thermic Source using a Temperature Field File on page 142
A thermic surface can define a source for which the total flux and the spectrum are defined by the source's temperature
and the optical properties of the support geometry. This page shows how to create a Thermic Source using a
temperature field file.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Note: You cannot edit the flux values; they are automatically computed.
The flux value depends on the blackbody temperature and the surface optical properties, and is
determined by integrating the emittance over the geometry of the source.
• Double-click the file field to browse and load an .OPTTemperatureField file.
Note: The .OPTTemperatureField file format consists of a description line, the number of vertices (Ns),
the number of triangles (Nt), the x,y,z coordinates of the vertices (× Ns), the l,m,n components of the
normals (× Ns), the vertex indices of each triangle (× Nt), and the temperature of each triangle (× Nt).
A parsing sketch is given after this list.
• To orientate the file, set its axis system by clicking in the 3D view one point for the origin point and two lines
for X and Y directions or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis
in the 3D view. Refer to the axis in the 3D view.
If you need to adjust the direction of the vectors, use the Reverse direction option.
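The following Python sketch parses a file following the layout described in the note above. It assumes a plain ASCII file with one description line followed by whitespace-separated values; the exact encoding and delimiters are not specified in this guide, so treat this as an assumption:

def read_opt_temperature_field(path):
    # Assumed layout (see the note above): description line, Ns, Nt,
    # Ns vertex coordinates (x, y, z), Ns normals (l, m, n),
    # Nt triangles (3 vertex indices each), Nt temperatures.
    with open(path) as f:
        description = f.readline().strip()
        values = f.read().split()
    ns, nt = int(values[0]), int(values[1])
    pos = 2
    def take(count, width):
        nonlocal pos
        block = [tuple(float(v) for v in values[pos + i * width:pos + (i + 1) * width])
                 for i in range(count)]
        pos += count * width
        return block
    vertices = take(ns, 3)
    normals = take(ns, 3)
    triangles = [tuple(int(c) for c in tri) for tri in take(nt, 3)]
    temperatures = [t[0] for t in take(nt, 1)]
    return description, vertices, normals, triangles, temperatures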
• Select Mirror for a perfect specular surface and edit the Reflectance value if needed.
• Select Optical Polished for a transparent or perfect polished material (glass, plastic).
• Select Library and double-click the file field to browse and load a SOP file.
If you want to edit the file, click the file and click Open file to open it with a surface optical property editor.
• Select Plug-in, and click Browse to select a custom-made *.sop plug-in as File and the Parameters file for the
plug-in.
5. From the Intensity Type drop-down list, select the intensity distribution of the source:
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related tasks
Creating a Thermic Source on page 140
A thermic surface can define a source for which the total flux and the spectrum are defined by the source's temperature
and the optical properties of the support geometry. This page shows how to define a Thermic Surface Source using
the faces of a geometry.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Note: Ambient Sources can be used in inverse or direct simulations. However, in direct simulations, only
2D and 3D illuminance/irradiance sensors are taken into account.
Conventions
• For all North selection types, the Zenith defines the main direction for the ambient source.
• If the North is not perpendicular to the Zenith, it is projected into the perpendicular plane.
Related tasks
Creating an Environment Source on page 148
An Environment Source generates light from an HDR image (HDRI) according to the RGB components of each pixel.
2. In the 3D view:
• Click a line (normal to the ground) to set the Zenith direction.
• Click a line corresponding to the X axis to set the North direction.
• Or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
Note: The luminance set here is the floating-point representation of the reference white color (1,1,1).
The luminance of each pixel is calculated according to its RGB components and the reference pixel. The
luminance usually varies between 1000 and 20000 cd/m2.
Note: HDRIs have relative luminance values. If you set the luminance to 1000 cd/m2, all the (1,1,1) pixels
will have 1000 cd/m2. The other colors are defined relative to this one.
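In other words, each pixel's luminance scales linearly with the reference luminance. A minimal Python sketch of that scaling (the (1,1,1)-relative convention is the one stated in the notes above; the Rec. 709 luminance weights used here are only an illustration, since the actual weighting depends on the selected color space):

# Sketch of the relative-luminance convention described above.
# reference_cd_m2 is the luminance assigned to a (1, 1, 1) pixel.
def pixel_luminance(r, g, b, reference_cd_m2):
    # Rec. 709 weights, purely illustrative (assumption, not the Speos formula).
    relative = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return relative * reference_cd_m2

print(pixel_luminance(1.0, 1.0, 1.0, 1000.0))  # 1000.0 cd/m2
print(pixel_luminance(0.5, 0.5, 0.5, 1000.0))  # 500.0 cd/m2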
5. In Color Space, from the Type drop-down list, select which color space based model to use according to your
needs and to the image file's own color space:
• Select sRGB to use the standard and most commonly used RGB based model.
• Select Adobe RGB to use a larger gamut.
• Select User Defined RGB to manually define the white point of the standard illuminant. From the White Point
Type drop-down list:
• Select D65 to use a standard daylight illuminant that provides accurate color perception and evaluation.
• Select D50 to use a natural, horizon light.
• Select C to use an average daylight illuminant.
• Select E to use an illuminant that gives equal weight to all wavelengths.
• Select User defined if you want to edit the x and y coordinates of the white point (the reference point of the
model).
Note: For more information about color models or white points of standard illuminants, see
Colorimetric illuminants.
6. If you selected User Defined RGB from the Color Space drop-down list, load a spectrum file for each primary
color.
If you want to modify or create a .spectrum file, click Open file to open the Spectrum Editor.
Tip: You can also download spectrum files from the Ansys Optical Library.
7. If you selected an HDR image and want to define a ground plane, click a point in the 3D view to determine the
ground origin and type the Height of the environment shooting.
The HDR image is displayed on the ground plane. Thus, the ground plane acts like a textured geometry that can
reflect/absorb light from other sources.
Tip: To maintain the scale, use the real height of the environment shooting. If you do not have that
information, the standard height is 1m.
The Environment Source is created and appears in the Simulation panel. The luminance is calculated according to
the reference pixel. If a ground plane is defined, it is taken into account for simulation.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
2. In the 3D view, click a line (normal to the ground) to set the Zenith direction.
3. Set the Luminance of the entire sky.
Note: If you want to use the sun only in simulations, set the luminance to 0 cd/m².
4. If you want to generate the ambient light from all space and not only from the upper half space, set Mirrored
Extent to True.
5. To add the sun to the ambient source, set Activate Sun to True.
The sun is represented in the 3D view.
• In the 3D view, select a line to set the Sun Direction.
• If you need to adjust the sun direction, use Reverse direction.
• If you need to adjust the sun position, use the manipulators in the 3D view.
7. In Optional or advanced settings, adjust the size of the rays displayed in the 3D view.
The Ambient Source is created and appears in the Simulation panel and in the 3D view. The four cardinal points are
displayed in the 3D view to visualize the orientation of the source. If you activated the sun in the scene, a sun is
represented in the 3D view. The sun of the Uniform Ambient Source changes in power and color according to its orientation.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Important: This feature is only available under a Speos Premium or Enterprise license.
This sky model is based on the publication Spatial Distribution of Daylight - CIE Standard General Sky, ISO
15469:2004(E)/CIE S 011/E:2003.
2. In the 3D view:
• If you want to modify the Zenith direction (corresponding by default to the Z axis), in the 3D view click a line (normal to the ground).
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
• If you want to set the coordinates manually (longitude and latitude), from the Location drop-down list, select
User.
• If you want to use the night sky model, adjust the Date and time.
7. In Optional or advanced settings, adjust the size of the rays displayed in the 3D view.
The Ambient Source appears in the Simulation panel. In the 3D view, the four cardinal points and a representation of
the sun are displayed.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Important: This feature is only available under a Speos Premium or Enterprise license.
2. If you want to modify the Zenith direction, in the 3D view, select a line (normal to the ground).
3. Set the Luminance of the entire sky in cd/m2.
5. If you want to see the file's properties or edit the file, click Open file to open the Spectrum Editor.
6. In Optional or advanced settings, adjust the size of the rays displayed in the 3D view.
The Ambient Source appears in the Simulation panel. In the 3D view, the four cardinal points are displayed.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Important: This feature is only available under a Speos Premium or Enterprise license.
A preview of the sun and the 4 cardinal points appear in the 3D view. The sun's position is computed according
to the time zone and location set.
2. In the 3D view:
• If you want to modify the Zenith direction, click a line (normal to the ground).
Note: The North direction corresponds to the rotation axis of the earth projected on the ground.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
4. Set the Turbidity of the sky, that is, the cloudiness of the environment. The lower the turbidity, the clearer the
environment.
The luminance is automatically calculated according to the level of turbidity of the sky.
Note: When the turbidity is higher than 6, part of the luminance distribution of the sky is computed using
the overcast sky luminance distribution formula. We recommend not using a turbidity higher than
6 with a Natural Light Ambient Source. Otherwise, limit the altitude of the sun to reduce the error.
5. If you want to use the sun only in simulations, uncheck With Sky.
6. Set the Time zone and location:
• If you want to set the coordinates manually (longitude and latitude), from the Location drop-down list, select
User.
• If you want to use the night sky model, adjust the Date and time.
7. In Optional or advanced settings, adjust the size of the rays displayed in the 3D view.
The Ambient Source appears in the Simulation panel. In the 3D view, the four cardinal points and a representation
of the sun are displayed.
Related concepts
Introduction to Optics on page 113
This page describes core optical principles.
Using Turbidity for a Natural Light Ambient Source on page 157
This page describes the impact of the turbidity on the simulations' results.
Related information
Sources Overview on page 112
The Sources correspond to the light sources propagating rays in an optical system.
Figure: sky radiance distributions for turbidity values 3.1 and 5.5.
Note: When the turbidity is higher than 6, part of the luminance distribution of the sky is computed using the
overcast sky luminance distribution formula. We recommend not using a turbidity higher than 6 with a
Natural Light Ambient Source. Otherwise, limit the altitude of the sun to reduce the error.
Related tasks
Creating a Natural Light Ambient Source on page 155
A Natural Light Ambient Source generates light from the sky according to the time, location and the turbidity. The
luminance varies according to where you look.
The chart below shows the radiance of the sky for a given sun position according to the solar zenith angle.
A preview of the sun and the 4 cardinal points appear in the 3D view. The sun's position is computed according
to the time zone and location set.
2. In the 3D view:
• If you want to modify the Zenith direction, click a line (normal to the ground).
Note: The North direction corresponds to the rotation axis of the earth projected on the ground.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
• If you want to set the coordinates manually (longitude and latitude), from the Location drop-down list, select
User.
• If you want to use the night sky model, adjust the Date and time.
5. In Optional or advanced settings, adjust the size of the rays displayed in the 3D view.
The Ambient Source appears in the Simulation panel. The four cardinal points and a representation of the sun are
displayed in the 3D view.
8: Sensors
Sensors allow you to render the optical result of an optical system by integrating rays coming from the sources.
Sensors allow you to integrate and analyze rays coming from the sources contained in an optical system.
A wide variety of sensors are available to cover different needs and configurations.
Sensors can be used to compute power and analyze how a source emits and what its intensity/emission pattern is,
or to create perspectives and viewpoints to see how a system is perceived by an eye or an observer.
Types of Sensors
• The Radiance Sensor allows you to compute radiance (in watt/sr/m2) or luminance (in candela/m2).
• The Irradiance Sensor allows you to compute irradiance (in watt/m2) or illuminance (in lux).
• The Intensity Sensor allows you to compute radiant intensity (in watt/sr) or luminous intensity (in candela).
• The 3D Irradiance Sensor allows you to compute irradiance of volume bodies or faces.
• The Human Eye Sensor allows you to accurately simulate human vision by considering the physiological
characteristics of the eye.
• The Immersive Sensor allows you to observe all the objects surrounding a defined point of view. A point is placed
in the scene and the sensor restitutes what is seen of the scene from this specific point.
• The 3D Energy Density Sensor allows you to compute the energy density carried by the light in lumen/m3 or
watt/m3, which can be useful when working with highly diffusive materials, tracking hot spots, or visualizing
the ray distribution inside the volume itself.
• The Observer Sensor allows you to create an observer point in the scene.
• The Light Field Sensor allows you to measure the distribution of light hitting a surface and to generate a Light
Field file storing light distribution on this selected surface.
• The Camera Sensor allows you to simulate rays integration as in a real camera.
• The LiDAR Sensor allows you to create a LiDAR source and sensor that can be used for LiDAR simulation.
• The Geometric Rotating LiDAR sensor allows you to perform field of view studies to quickly identify how to optimize
your LiDAR system.
In Speos
You can manage all the characteristics of the sensors: its size, position, orientation, wavelength sensitivity etc.
To create a sensor you have to define several parameters: (The parameters vary based on the type of sensor.)
Related information
Sensor Creation on page 173
This section gathers all the procedures allowing you to create a sensor. At least one sensor must be created in an optical
system to gather and interpret rays.
The Integration Angle is an approximation of an integration zone represented by an integration cone. This
approximation is used by the sensor to solve the specular contribution of the scene.
The integration cone is calculated from the defined Integration Angle, rotating around the Virtual Ray.
In a Direct Simulation, the rays come from the source and are propagated according to physical laws.
The probability that a ray coming from a specular interaction has exactly the same direction as the integration
direction (pixel on the sensor plane to focal point) is almost null, so the specular contribution is almost unsolvable.
To solve this issue, the Radiance, Human Eye, Observer and Immersive sensors use an internal algorithm (not visible
to users) called Gathering, which basically forces/helps specular rays to find the sensors. The Integration Angle
approximation keeps the forced rays from being deflected too far from their original direction (after the last impact),
allowing rays to be integrated into the solid angle defined by the Integration Angle.
Thanks to this approximation, rays are integrated if the angular deflection between their propagation direction and
the sensor integration direction is smaller than the integration angle.
Virtual Ray: The integration direction (nominal direction) for one pixel to reach the focal point of the sensor
(represented by the eye in the picture).
Real Ray: The ray computed by the simulation.
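The integration test itself reduces to comparing the angle between the Real Ray and the Virtual Ray with the integration angle. A minimal Python sketch of that criterion (the vectors and threshold are illustrative; this is not the internal Gathering implementation):

import math

def is_integrated(real_dir, virtual_dir, integration_angle_deg):
    # A ray is integrated when the angular deflection between its direction
    # and the sensor integration direction is smaller than the integration angle.
    dot = sum(a * b for a, b in zip(real_dir, virtual_dir))
    norm = math.dist((0, 0, 0), real_dir) * math.dist((0, 0, 0), virtual_dir)
    deflection = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return deflection < integration_angle_deg

print(is_integrated((0.0, 0.02, 1.0), (0.0, 0.0, 1.0), 2.0))  # True (~1.1 deg)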
• A blurry result means the integration angle is too big: too many rays are integrated by the sensor.
• A noisy result means the integration angle is too small: not enough rays are integrated by the sensor.
• A balanced result is obtained when the integration angle is well adjusted to the scene.
Definition Recommendations
The following list provides advice to help you define a correct integration angle according to your
configuration:
The smaller the Integration Angle, the better. There is no formula to find the correct Integration Angle. You may have
to find it empirically (running the direct simulation several times with different integration angles).
Note: The Integration Angle is an approximation. If you have difficulty finding a good approximation, you can
use an Inverse Simulation. The Integration Angle is not used in Inverse Simulation.
• In case of an observed scene or object with a lot of diffusion in all directions, you can pick a high value for the
integration angle.
• The Integration Angle has to be between the Illumination Angle/2 and the Illumination Angle, and must not exceed
the Illumination Angle.
• You must always have the pixel size smaller than the scene's visible surface (the "scene" being sources, geometries,
etc.).
• You must take into account the influence of the pixel illumination angle, especially if the pixel size is small.
• Integration angle values that are too small tend to generate noisy results because the sensor is not able to gather enough
rays. In case of noisy results, you can:
º Try to use more rays so that more photons are integrated by the sensor. The longer the simulation, the better
the results.
º Try defining a larger integration angle to allow more rays to be gathered by the sensor.
Related concepts
Scene Size Influence on page 167
This page describes how the size of a scene can impact the sensor's perception of the scene's illumination.
Illumination Angle Influence on page 168
This page describes the relation between the integration angle and illumination angle and their impact on the scene's
illumination.
Gaussian Intensity Distribution Influence on page 169
This page describes the influence a scene with a gaussian intensity distribution has on the illuminance interpretation.
Visible surface of the scene is smaller than the observed area: when the visible surface of the scene is smaller than
the area observed through the pixel aperture, the luminance is integrated both over the visible scene and the
unilluminated area. The average luminance value is then lower than the average luminance value of the scene
because the average is made over the illuminated and unilluminated areas.
Visible surface of the scene is bigger than the observed area: when the visible surface of the scene is bigger than
the area observed through the pixel aperture, the luminance is integrated over the visible surface of the scene. We
observe the luminance variation as a function of the integration angle and the visible surface of the scene.
Related concepts
Integration Angle Overview on page 165
This page gives an overview of the Integration Angle parameter and provides recommendations of use.
Illumination Angle Influence on page 168
This page describes the relation between the integration angle and illumination angle and their impact on the scene's
illumination.
Gaussian Intensity Distribution Influence on page 169
This page describes the influence a scene with a gaussian intensity distribution has on the illuminance interpretation.
The integration angle is too low compared with the pixel illumination angle: too few photons are integrated by the
pixel, meaning that the luminance value is noisy and therefore not reliable. One way to solve this problem is to
increase the number of photons, by setting more rays in the simulation.
The integration angle is lower than or equal to the pixel illumination angle: the luminance value is supposed to stay
constant, as the photons are integrated under the same cone as the emission cone of the scene.
The integration angle is greater than the pixel illumination angle: the luminance value decreases as the integration
angle increases, as the number of photons does not change but is integrated under a bigger cone than the emission
cone of the scene.
Related concepts
Integration Angle Overview on page 165
This page gives an overview of the Integration Angle parameter and provides recommendations of use.
Scene Size Influence on page 167
This page describes how the size of a scene can impact the sensor's perception of the scene's illumination.
Gaussian Intensity Distribution Influence on page 169
This page describes the influence a scene with a gaussian intensity distribution has on the illuminance interpretation.
We need to consider at least two cases if we want to study the influence of the integration angle of the sensor and
the FWHM angle of the scene on the luminance value:
Integration angle smaller than the FWHM angle of the scene: the luminance value is supposed to stay constant, as
the photons are integrated under the same cone as the emission cone of the scene. There is a limitation when the
integration angle is too small: too few photons are integrated by the pixel, meaning that the luminance value is
noisy and therefore not reliable.
Integration angle bigger than the FWHM angle of the scene: the luminance value decreases as the integration angle
increases, as the number of photons does not change but is integrated under a bigger cone than the emission cone
of the scene.
Luminance Variation
For a too small angle (1), the luminance is too noisy and therefore not reliable. The luminance then starts to decrease
as the integration angle increases (2).
As the value of the FWHM angle increases, the range of integration angles for which the luminance stays constant
widens.
In this example, the luminance starts to decrease even though the incident angle is lower than the FWHM angle, as the
luminance calculation is also influenced by other parameters such as the scene size and the distance between the scene
and the sensor.
Related concepts
Integration Angle Overview on page 165
This page gives an overview of the Integration Angle parameter and provides recommendations of use.
Scene Size Influence on page 167
This page describes how the size of a scene can impact the sensor's perception of the scene's illumination.
Illumination Angle Influence on page 168
This page describes the relation between the integration angle and illumination angle and their impact on the scene's
illumination.
Note: Automatic framing is only available for Radiance, Human Eye, Camera, LIDAR, Observer and Immersive
sensors.
Automatic framing is a tool used to visualize the point of view of a sensor, that is, what the sensor will capture during
simulation. It is useful for identifying an incorrect framing or testing different fields of view before computing a simulation.
The automatic framing viewing mode can be activated at any time while editing a sensor, from the 3D view or
from the sensor's contextual menu. To deactivate this view, click the icon or the option in the
contextual menu again.
Note: Deactivating the Automatic Framing does not return to the view displayed before activating the
Automatic Framing.
This tool also offers a dynamic visualization, as the view automatically updates the sensor's field of view when you edit
sensor parameters such as the sampling, definition type, active vision field, etc.
Figure 25. Example of Automatic Framing and Camera View for a Radiance Sensor (left: visualization of the sensor's
active vision field using the Camera options; right: Automatic Framing option activated).
Related tasks
Creating an Observer Sensor on page 216
Creating an observer sensor allows you to create a specific view point from where you can observe the system. This
sensor is useful to visualize the optical system from user defined points of view.
Creating an Immersive Sensor on page 211
This page shows how to create an Immersive Sensor to visualize what is viewed from a specific point of view of a
scene. An immersive sensor allows you to generate an .speos360 file that can be visualized in Virtual Reality Lab.
Related information
Sensor Creation on page 173
This section gathers all the procedures allowing you to create a sensor. At least one sensor must be created in an optical
system to gather and interpret rays.
2. In the 3D view, set the Axis System of the sensor by clicking to select one point for the origin, to select
a line for the X axis, to select a line for the Y axis or click and select a coordinate system to autofill the
Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
Three bold arrows indicate the axis system of the sensor. The blue arrow corresponds to the Z axis and indicates
the integration direction of the sensor.
Tip: To adjust the sensor position and orientation dynamically from the 3D view, you can also use the
Move option (Design tab).
3. From the Integration type drop-down list, select how the light should be integrated to the sensor.
• Select Planar for an integration that is made orthogonally to the sensor plane.
• Select Radial, Hemispherical, Cylindrical, Semi-cylindrical if you need to follow specific street lighting
illumination regulations.
An XMP Template is an .xml file generated from an XMP result. It contains data and information related to the options
of the XMP result (dimensions, type, wavelength and display properties).
When using an XMP Template, measures are automatically created in the new .xmp generated during the
simulation, based on the data contained in the template file.
• If you want to inherit the axis system of the sensor from the XMP Template file, set Dimensions from file to
True.
The dimensions are inherited from the file and cannot be edited from the definition panel.
• If you want to define the radiance sensor according to display settings (grid, scale etc.) of the XMP Template,
set Display properties from file to True.
• Select Photometric if you want the sensor to consider the visible spectrum and get the results in lm/m2 or lx.
• Select Radiometric if you want the sensor to consider the entire spectrum and get the results in W/m2.
Note: With both Photometric and Radiometric types, the illuminance levels are displayed with a false
color and you cannot make any spectral or color analysis on the results.
• Select Colorimetric to get the color results without any spectral data or layer separation (in lx or W/m2).
• Select Spectral to get the color results and spectral data separated by wavelength (in lx or W/m2).
Note: Spectral results take more time to compute as they contain more information.
7. If you want to generate a ray file containing the rays that will be integrated to the sensor, from the Ray file
drop-down list, select the ray file type:
• Select SPEOS without polarization to generate a ray file without polarization data.
• Select SPEOS with polarization to generate a ray file with the polarization data for each ray.
• Select IES TM-25 with polarization to generate a .tm25ray file with polarization data for each ray.
• Select IES TM-25 without polarization to generate a .tm25ray file without polarization data.
Note: The size of a ray file is approximately 30 MB per million rays. Consider freeing space on your
computer prior to launching the simulation.
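Based on that figure, a quick back-of-the-envelope estimate of the ray file size (an illustrative Python sketch, not a Speos function):

# Rough ray-file size estimate from the ~30 MB per million rays figure above.
def ray_file_size_mb(n_rays: int) -> float:
    return 30.0 * n_rays / 1_000_000

print(ray_file_size_mb(50_000_000))  # ~1500 MB for 50 million rays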
Note: You can change the source's power or spectrum with the Virtual Lighting Controller in the Virtual
Photometric Lab or in the Virtual Human Vision Lab.
• Select Face to include one layer per surface selected in the result.
Tip: Separating the result by face is useful when working on a reflector analysis.
• In the 3D view click and select the contributing faces you want to include for layer separation.
Tip: Select a group (Named Selection) to separate the result with one layer for all the faces contained
in the group.
• Last Impact: with this mode, the ray is integrated in the layer of the last hit surface before hitting the sensor.
• Intersected one time: with this mode, the ray is integrated in the layer of the last hit selected surface if the
surface has been selected as a contributing face or the ray intersects it at least one time.
• Select Sequence to include one layer per sequence in the result.
• Define the Maximum number of sequences to calculate.
• Define the sequences per Geometries or Faces.
Note: Separating the result by sequence is useful if you want to make a Stray Light Analysis. For more
information, refer to Stray Light Analysis.
• Select Polarization to include one layer per Stokes parameter using the polarization parameter.
Stokes parameters are displayed using the layers of the Virtual Photometric Lab.
• Select Incident angles to include one layer per range of incident angles, and define the Sampling.
For more information on the data separated by Incident angles, refer to Understanding the Incident Angles
Layer Type.
• If you want to symmetrize the sensor by linking Start and End values, set Mirrored extent to True.
• Edit the Start and End positions of the sensor on X and Y axes.
Tip: You can either use the manipulators of the 3D view to adjust the sensor or directly edit the values
from the definition panel.
• Adjust the sampling of the sensor. The sampling corresponds to the number of pixels of the XMP map.
10. If you selected Spectral or Colorimetric as sensor type, set the spectral range to use for simulation.
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
The Resolution is automatically computed according to the sampling and wavelength start and end values.
11. If you intend to use the sensor for an inverse simulation, define the Output faces that the rays generated from
the sensor will aim at during the simulation to improve performance:
Important:
In simulation:
• In CPU, for each pixel per pass, if the emitted ray does not intersect an output face, the CPU
emits another ray until a ray intersects an output face.
• In GPU, for each pixel per pass, the GPU emits one ray whether or not it intersects an output face. The GPU
does not emit again if the ray does not intersect an output face.
This means that, for the same number of passes, CPU converges better than GPU. To get the same result on
GPU, you need to increase the number of passes.
12. If you want to display the sensor grid in the 3D view, in Optional or advanced settings set Show grid to True.
The Irradiance Sensor is created and appears both in the Speos tree and in the 3D view.
Related concepts
Understanding Integration Types on page 178
This section gathers and describes the different integration types available in Speos. An integration type allows you
to define how light is going to be integrated to a sensor.
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
The illuminance at a point is calculated using the cosine of the angle of incidence ε. The formula is E = (I / d²) · cos(ε),
where I is the intensity of the source (in cd) and d is the distance from the source to the point.
Figure 26. Planar mode with an integration direction normal to the sensor plan
Note: The source can also be an extended light source like luminaire, ambient, surface source, etc.
The pixel is sensitive on one side only. Its sensitivity is Lambertian.
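As a worked example of the cosine law above (a Python sketch under the stated point-source assumption; the symbols follow the reconstructed formula, not Speos parameters):

import math

# Illuminance of a point source under the cosine law:
# E = (I / d^2) * cos(epsilon), with I in cd, d in m, E in lx.
def planar_illuminance(intensity_cd, distance_m, epsilon_deg):
    return intensity_cd / distance_m ** 2 * math.cos(math.radians(epsilon_deg))

print(round(planar_illuminance(1000.0, 2.0, 30.0), 1))  # 216.5 lx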
Horizontal plane
The horizontal illuminance is the most common way to calculate illuminance.
• The integration direction is perpendicular to the horizontal plane and the surface sensor.
• The normal illuminance follows the Bouguer law.
• The formula is
Vertical plane
When the surface sensor is applied vertically, the lateral orientation becomes an important parameter to determine
the illuminance.
The integration direction is perpendicular to the vertical plane and parallel to the surface sensor (that is, a wall on
the road).
The formula is
Tip: In the specific case where α is equal to 0, the illuminance calculation is the same as for the horizontal
type. It does not depend on the α factor. Only the mechanical plane is different, so the two coordinate systems
are not the same.
General case
In the general case, you must define the integration direction.
The same integration direction is applied on each pixel of the sensor.
In the figure below, the integration direction is perpendicular to the blue mechanical planes.
Related tasks
Creating an Irradiance Sensor on page 173
This page shows how to create an Irradiance Sensor that computes and analyzes irradiance and illuminance
distribution. The irradiance sensor can be created with different integration types allowing you to integrate specific
light directions in the sensor.
Note: The source can also be an extended light source like luminaire, ambient, surface source, etc.
The sensitivity of the pixel does not depend on where the rays are coming from.
The calculation is based on the standard EN-13201, which gives mathematical formulas equivalent to different types
of illuminance. Compared to the EN-13201 standard, several parameters are simplified.
The integration direction is that of the incident flux. This direction is in the vertical plane, at a right angle to the
surface. The angle of incidence ε is then equal to 0° and cos ε = 1.
Related tasks
Creating an Irradiance Sensor on page 173
This page shows how to create an Irradiance Sensor that computes and analyzes irradiance and illuminance
distribution. The irradiance sensor can be created with different integration types allowing you to integrate specific
light directions in the sensor.
Note: The source can also be an extended light source like luminaire, ambient, surface source, etc.
The sensor is sensitive to light coming from all directions except the direction exactly opposite to the
integration direction.
Related tasks
Creating an Irradiance Sensor on page 173
This page shows how to create an Irradiance Sensor that computes and analyzes irradiance and illuminance
distribution. The irradiance sensor can be created with different integration types allowing you to integrate specific
light directions in the sensor.
Note: The source can also be an extended light source like luminaire, ambient, surface source, etc.
The sensor is sensitive to light coming from all directions except the direction exactly normal to the sensor
plane.
The cylindrical illuminance can be defined as the specific case of vertical illuminance (when α = 0°).
Because of the rotational symmetry (around the z axis), only the angle ε is important. In that case, you do not need a
specific integration direction.
The formula is
Related tasks
Creating an Irradiance Sensor on page 173
This page shows how to create an Irradiance Sensor that computes and analyzes irradiance and illuminance
distribution. The irradiance sensor can be created with different integration types allowing you to integrate specific
light directions in the sensor.
Note: The source can also be an extended light source like luminaire, ambient, surface source, etc.
The sensor is sensitive to light coming from all directions, except the directions included in a half plane
delimited by the cylinder axis and situated behind the half cylinder.
Contrary to the cylindrical illuminance, you need an integration direction to calculate the semi-cylindrical illuminance.
In addition, the illuminance depends on the lateral deviation (like the vertical illuminance).
The formula is
Related tasks
Creating an Irradiance Sensor on page 173
This page shows how to create an Irradiance Sensor that computes and analyzes irradiance and illuminance
distribution. The irradiance sensor can be created with different integration types allowing you to integrate specific
light directions in the sensor.
Example
The following example presents two sensors with a planar integration type separated by incident angles.
The defined sampling is 6, meaning you will generate the following 6 layers:
• Layer 0-15
• Layer 15-30
• Layer 30-45
• Layer 45-60
• Layer 60-75
• Layer 75-90
In figure 1, you can see that the Integration Direction determines how the pixels are oriented, which therefore
generates different results according to the rays that are integrated in each layer.
In figure 2, both Ray 1 and Ray 2 are integrated in the Layer 60-75.
Example
The following example presents a sensor with a semi-cylindrical integration type separated by incident angles.
The defined sampling is 12, meaning you will generate the following 12 layers:
• Layer 0-15
• Layer 15-30
• Layer 30-45
• Layer 45-60
• Layer 60-75
• Layer 75-90
• Layer 90-105
• Layer 105-120
• Layer 120-135
• Layer 135-150
• Layer 150-165
• Layer 165-180
Ray 1 hits the sensor plane with a theta angle of 17° in relation to the Integration Direction. Therefore, it is
integrated in the Layer 15-30.
Ray 2 hits the sensor plane with a theta angle of 36° in relation to the Integration Direction. Therefore, it is
integrated in the Layer 30-45.
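The binning behind both examples is uniform over the sensor's angular range. A minimal Python sketch of that logic (assuming, as in the examples, a 0-90° range for the planar case and a 0-180° range for the semi-cylindrical case):

def incident_angle_layer(theta_deg, range_deg, sampling):
    # Uniform angular bins: with a 90 deg range and a sampling of 6,
    # the layers are 0-15, 15-30, ..., 75-90 as in the first example.
    width = range_deg / sampling
    index = min(int(theta_deg // width), sampling - 1)
    return (index * width, (index + 1) * width)

print(incident_angle_layer(17.0, 180.0, 12))  # (15.0, 30.0) -> Layer 15-30
print(incident_angle_layer(36.0, 180.0, 12))  # (30.0, 45.0) -> Layer 30-45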
• Select Photometric if you want the sensor to consider the visible spectrum and get the results in cd/m2 or
lm/sr/m2.
• Select Radiometric if you want the sensor to consider the entire spectrum and get the results in W/sr/m2.
Note: With both Photometric and Radiometric types, the luminance levels are displayed with a false
color and you cannot make any spectral or color analysis on the results.
• Select Colorimetric to get the color results without any spectral data or layer separation (in cd/m2 or W/sr/m2).
• Select Spectral to get the color results and spectral data separated by wavelength (in cd/m2 or W/sr/m2).
Note: Spectral results take more time to compute as they contain more information.
Tip: You can change the source's power or spectrum with the Virtual Lighting Controller in the Virtual
Photometric Lab or in the Virtual Human Vision Lab.
4. In Definition from, select the point of view you want to use for the sensor:
• Observer
a. In Focal, define the distance between the sensor plane and the observer point.
b. Set the Axis System of the sensor by placing one point and two directions in the scene:
• In the 3D view, click and select one point to place the observer point in the scene.
• Click and select a line to define the Front direction (corresponding by default to the Z axis).
• Click and select a line to define the Top direction (corresponding by default to the Y axis).
• Or click and select a coordinate system to autofill the Axis System.
Tip: To adjust the sensor position and orientation dynamically from the 3D view, you can also
use the Move option (Design tab).
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the
axis in the 3D view. Refer to the axis in the 3D view.
c. If you want to use an XMP Template to define the sensor, in XMP Template, click Browse to load an .xmp
file.
Note: An XMP Template is an .xml file generated from an XMP result. It contains data and information
related to the options of the XMP result (dimensions, type, wavelength and display properties).
When using an XMP Template, measures are then automatically created in the new .xmp generated
during the simulation based on the data contained in the template file.
• If you want to inherit the axis system of the sensor from the XMP Template file, set Dimensions from file
to True.
The dimensions are inherited from the file and cannot be edited from the definition panel.
• If you want to define the radiance sensor according to display settings (grid, scale etc.) of the XMP Template,
set Display properties from file to True.
Tip: You can either use the manipulators of the 3D view to adjust the sensor or directly edit the
values from the definition panel.
• Adjust the sampling of the sensor. The sampling corresponds to the number of pixels of the XMP map.
The Central resolution is automatically calculated and depends on the Sampling value.
Note: The Central resolution is the central angular resolution which corresponds to the angular
resolution of the pixel located in front of the observer.
• Frame
a. From the Observer Type drop-down list, choose how you want to define the distance between the sensor
plane and the observer point:
• Select Focal to manually define the distance between the sensor plane and the observer point in mm.
• Select Observer to select a Focal point from the 3D view.
In this case, the focal is automatically computed between the sensor plane and the selected point.
b. In the 3D view, set the Axis System of the sensor by clicking to select one point for the origin, selecting two
lines for the X and Y axes, or clicking and selecting a coordinate system to autofill the Axis System.
If you need to adjust the ray's propagation direction, set Reverse direction to True.
Tip: To adjust the sensor position and orientation dynamically from the 3D view, you can also use
the Move option (Design tab).
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the
axis in the 3D view. Refer to the axis in the 3D view.
c. If you want to use an XMP Template to define the sensor, in XMP Template, click Browse to load an .xmp
file.
Note: An XMP Template is an .xml file generated from an XMP result. It contains data and information
related to the options of the XMP result (dimensions, type, wavelength and display properties).
When using an XMP Template, measures are then automatically created in the new .xmp generated
during the simulation based on the data contained in the template file.
• If you want to inherit the axis system of the sensor from the XMP Template file, set Dimensions from file
to True.
The dimensions are inherited from the file and cannot be edited from the definition panel.
• If you want to define the radiance sensor according to display settings (grid, scale etc.) of the XMP Template,
set Display properties from file to True.
• If you want to symmetrize the sensor by linking Start and End values, set Mirrored extent to True.
• Edit the Start and End positions of the sensor on X and Y axes.
Tip: You can either use the manipulators of the 3D view to adjust the sensor or directly edit the
values from the definition panel.
• Adjust the sampling of the sensor. The sampling corresponds to the number of pixels of the XMP map.
5. If you selected Spectral or Colorimetric as sensor type, set the spectral range to use for simulation.
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
The Resolution is automatically computed according to the sampling and wavelength start and end values.
• To display the sensor grid in the 3D view, set Show grid to True.
• If needed, adjust the Integration angle.
The Radiance Sensor is created and appears in the Speos tree and in the 3D view.
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
Integration Angle on page 165
This section describes the Integration Angle parameter to better understand its possible impacts on the simulation's
results and provides advice on setting it correctly. The integration angle must be defined for various sensors.
IES A
IES B
IES C
Eulumdat
Near Field
In practice, actual measurement devices measure intensity at a finite distance. In Speos, the Near Field option allows
you to model the integration at a finite distance by defining the actual cell of an intensity bench.
The sensor cell is brought closer to the source, therefore bringing the measuring field of the sensor closer to the
source.
Important: When defining a near field intensity sensor, only the near field part is defined at a finite distance
from the system. The rest of the intensity sensor is still considered at infinite distance and contributes to
the result.
A near-field sensor is useful when you want to compare simulated intensity to measured intensity on small devices.
This option generates results that are physically closer to reality.
In practice, the sensor cell is a disk whose size is determined by the Cell diameter value. This diameter corresponds
to the size of the photosensitive sensor that would be used to perform the measurement in reality.
The Cell distance value defines the sensor visualization in the 3D view.
Make sure to define a Cell Diameter bigger than the pixel size. Otherwise, the sensor angle is smaller than the pixel,
and some rays will pass between the pixels and will not be considered. These rays are calculated needlessly; however,
the result is still correct.
If the Resolution of the X range is different from the Resolution of the Y range, take the biggest resolution as the
reference to calculate your sensor system.
To calculate the adequate system, use the following equation: Cell Diameter = tan(res) × Cell Distance × √2
Note: You cannot use the result of near-field sensor to model the near field of a light source.
The result obtained with a near field sensor can be inaccurate on the edge of the map, over a width equal
to the radius of a cell.
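The sizing rule quoted above is easy to evaluate directly. A minimal Python sketch (assuming the resolution is expressed in degrees; if the X and Y resolutions differ, pass the bigger one, as stated above):

import math

# Cell Diameter = tan(res) * Cell Distance * sqrt(2), distances in mm.
def min_cell_diameter_mm(res_deg: float, cell_distance_mm: float) -> float:
    return math.tan(math.radians(res_deg)) * cell_distance_mm * math.sqrt(2)

print(round(min_cell_diameter_mm(0.5, 1000.0), 2))  # ~12.34 mm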
Adaptive Sampling
Adaptive sampling allows you to load a .txt file describing the result angles to be used for the sensor.
The format of the adaptive sampling file is the following:
2. Set the Axis System of the sensor by clicking to select the origin, to define the X axis and to define the Y axis, or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
Note: An XMP Template is an .xml file generated from an XMP result. It contains data and information
related to the options of the XMP result (dimensions, type, wavelength and display properties).
When using an XMP Template, measures are then automatically created in the new .xmp generated
during the simulation based on the data contained in the template file.
• If you want to inherit the axis system of the sensor from the XMP Template file, set Dimensions from file to
True.
The dimensions are inherited from the file and cannot be edited from the definition panel.
• If you want to define the radiance sensor according to display settings (grid, scale etc.) of the XMP Template,
set Display properties from file to True.
• Select Photometric if you want the sensor to consider the visible spectrum and get the results in cd.
• Select Radiometric if you want the sensor to consider the entire spectrum and get the results in W/sr.
Note: With both Photometric and Radiometric types, the illuminance levels are displayed with a
false color and you cannot make any spectral or color analysis on the results.
• Select Colorimetric to get the color results without any spectral data or layer separation (in cd or W/sr).
• Select Spectral to get the results and spectral data separated by wavelength (in cd or W/sr).
Note: Spectral results take more time to compute as they contain more information.
5. From the Format drop-down list, select XMP to integrate light according to a standard coordinate system.
6. From the Orientation drop-down list, define which axis represents the polar axis:
Tip: You can change the source's power or spectrum with the Virtual Lighting Controller in the Virtual
Photometric Lab or in the Virtual Human Vision Lab.
• Select Face to include one layer per surface selected in the result.
Tip: Separating the result by face is useful when working on a reflector analysis.
• In the 3D view, click and select the contributing faces you want to include for layer separation.
Tip: Select a group (Named Selection) to separate the result with one layer for all the faces contained
in the group.
• Select the filtering mode to use to store the results (in the *.xmp map):
• Last Impact: with this mode, the ray is integrated in the layer of the last hit surface before hitting the sensor.
• Intersected one time: with this mode, the ray is integrated in the layer of a surface selected as a contributing face if the ray intersected it at least once.
• If you want to link the start and end point values, set Mirrored extent to True.
• Define the Start and End points of the sensor on X and Y axes by editing the values in degrees.
• In Sampling, define the number of pixels of the XMP map.
The Resolution is automatically calculated.
9. If you selected Colorimetric or Spectral as sensor type, in Wavelength, define the spectral range the sensor
needs to consider:
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
The Resolution is automatically computed according to the sampling and wavelength start and end values.
10. If you want to bring the measuring field of the sensor closer to the source, in Properties set Near field to True.
a) In Cell Distance, define the position of the sensor with regard to the origin in the 3D view.
b) In Cell Diameter, define the actual size of the photosensitive sensor.
Make sure to define a Cell Diameter bigger than the pixel size. For more information, refer to Near Field.
The intensity sensor is created and is visible both in Speos tree and in the 3D view.
Related tasks
Creating a Polar Intensity Sensor on page 202
This page shows how to create an Intensity Sensor that computes and analyzes radiant or luminous intensity following standard intensity formats (IESNA format, Eulumdat format).
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
• Select Photometric if you want the sensor to consider the visible spectrum and get the results in cd.
• Select Radiometric if you want the sensor to consider the entire spectrum and get the results in W/sr.
Note: With both Photometric and Radiometric types, the illuminance levels are displayed with a
false color and you cannot make any spectral or color analysis on the results.
• Select Colorimetric to get the color results without any spectral data or layer separation (in cd or W/sr).
• Select Spectral to get the results and spectral data separated by wavelength (in cd or W/sr).
Note: Spectral results take more time to compute as they contain more information.
3. From the Format drop-down list, select the type of standard you want to follow:
• Select IESNA type A, B or C if you want to generate an .ies file and follow US standard data format for storing
the spatial light output distribution.
• Select Eulumdat if you want to generate an .ldt file and follow the European standard data format for storing
the spatial light output distribution.
Tip: You can change the source's power or spectrum with the Virtual Lighting Controller in the Virtual
Photometric Lab or in the Virtual Human Vision Lab.
• Select Face to include one layer per surface selected in the result.
Tip: Separating the result by face is useful when working on a reflector analysis.
• In the 3D view, click and select the contributing faces you want to include for layer separation.
Tip: Select a group (Named Selection) to separate the result with one layer for all the faces contained
in the group.
• Select the filtering mode to use to store the results (in the *.xmp map):
• Last Impact: with this mode, the ray is integrated in the layer of the last hit surface before hitting the sensor.
• Intersected one time: with this mode, the ray is integrated in the layer of a surface selected as a contributing face if the ray intersected it at least once.
• Select Sequence to include one layer per sequence in the result.
• If you selected IESNA type B, click to select an origin, to select a line to define the Polar Axis and to select a line to define the VO Axis.
• If you selected IESNA type A, IESNA type C or Eulumdat, click to select an origin and to select a line to define the Polar Axis.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
6. Define the sensor's sampling for H Plane (horizontal plane) and V Plane (vertical plane).
The Resolution is automatically computed.
Note: X and Y range define the sensor's dimensions and depend on the sensor format previously selected.
7. If you want to apply a specific sampling to the sensor, in Adaptive sampling, click Browse to load a .txt file.
8. If you want to bring the measuring field of the sensor closer to the source, in Properties set Near field to True.
The near-field sensor appears in the 3D view.
• In Cell Distance, define the position of the sensor with regard to the origin in the 3D view.
• In Cell Diameter, define the actual size of the photosensitive sensor. The size must be bigger than the pixel size.
The intensity sensor is created and is visible both in Speos tree and in the 3D view.
Related reference
Understanding the Parameters of an Intensity Sensor on page 193
This page describes advanced settings to set when creating an Intensity Sensor.
Resolution
The Resolution corresponds to the visual acuity of the observer. The visual acuity is commonly expressed in arc minutes (60 arc minutes = 1 degree).
The value taken as a reference (1 arc minute = 0.0167 degrees) corresponds to the normal visual acuity of an observer. The better the visual acuity, the smaller the angle of resolution gets, and the finer the details perceived by the eye.
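To relate this angular resolution to a perceptible detail size, here is a small sketch under the usual small-angle geometry; the viewing distance is a hypothetical value:

import math

# 1 arc minute expressed in degrees, as used above.
res_deg = 1.0 / 60.0          # about 0.0167 degrees
viewing_distance_mm = 1000.0  # hypothetical 1 m viewing distance

# Smallest detail resolved at that distance, approximately distance * tan(resolution).
detail_mm = viewing_distance_mm * math.tan(math.radians(res_deg))
print("Smallest resolved detail: %.2f mm" % detail_mm)  # about 0.29 mm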
Field Of View
The standard field of view of an observer is 45° (-45°, 45°).
Note: The peculiarity of the human eye sensor lies in its capacity to replicate the physiological characteristics
of the eye. It is precisely what makes it accurate and at the same time inherently less suited for certain
configurations.
This sensor should not be used with transparent diffusive materials in direct simulation because:
• Its ray-acquisition capability is fairly low with diffusive parts, due to the small integration angle, which drastically reduces the probability of enough rays falling into it.
• Simulations might take too much time to run.
• Simulation results might present noise, grain or blur while appearing sharper.
2. Set the Axis System of the sensor by placing two points in the scene:
• In the 3D view, click to place the two points,
• or click and select a coordinate system to autofill the Axis System.
3. If you want to use an XMP Template to define the sensor, in XMP Template, click Browse to load an .xmp file.
Note: An XMP Template is an .xml file generated from an XMP result. It contains data and information
related to the options of the XMP result (dimensions, type, wavelength and display properties).
When using an XMP Template, measures are then automatically created in the new .xmp generated
during the simulation based on the data contained in the template file.
• Select Colorimetric to get the color results without any spectral layer separation (in cd/m2 or W/sr.m2).
• Select Spectral to get the color results and spectral data separated by wavelength (in cd/m2 or W/sr.m2).
Tip: You can change the source's power or spectrum with the Virtual Lighting Controller in Virtual
Photometric Lab or Virtual Human Vision Lab.
• Adjust the Start and End values of the horizontal and vertical fields of view.
Tip: You can either use the manipulators of the 3D view to adjust the sensor or directly edit the values
from the definition panel.
Tip: The Resolution corresponds to the visual acuity of the observer. The value taken here as a reference
(1 arc minute = 0.0167 degrees) corresponds to the normal visual acuity of an observer.
• Set Mirrored Extent to True if you want to symmetrize the sensor by linking Start and End values.
Note: The Wavelength Start and End values are inherited from the human eye sensitivity.
8. If you need to modify the pupil diameter of the observer, type a value in mm.
The pupil diameter corresponds to the physical aperture of the eye. By default, the diameter is set to 4mm.
Note: The pupil dilation strongly depends on the environment. The default value used here (4mm) corresponds to the pupil dilation in broad daylight. In a bright environment the pupil contracts and can reach 2mm. In a dark environment the pupil dilates and can reach 9mm. The pupil's dilation capacity also depends on many factors (age of the observer, disease, etc.).
9. In Optional or advanced settings, adjust the preview to display in the 3D view if needed.
The Human Eye Sensor is created and appears in Speos tree and in the 3D view.
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
Important: This feature is only available under Speos Premium or Enterprise license.
2. In the 3D view, click and select one or more faces or bodies to include to the sensor.
3. If you want to use an XM3 template to define the sensor, in XM3 template, click Browse and load a *.xml file.
An XM3 template allows you to apply a *.xml file containing measure data exported from an existing *.xm3 file.
Measures are automatically created in the new *.xm3 file generated during the simulation based on the data
contained in the template file.
Note: The *.xm3 file generated is not related to the measure template file. If you modify the template
file, the *.xm3 file generated remains the same.
• Photometric to compute the illuminance (in lx) and generate an extended map for Virtual Photometric Lab.
The illuminance levels are displayed with a false color and you cannot make a spectral or a colorimetric analysis with an extended map.
• Radiometric to compute the irradiance (in W/m2) and generate an extended map for Virtual Photometric Lab.
The irradiance levels are displayed with a false color and you cannot make a spectral or a colorimetric analysis with an extended map.
• Colorimetric to compute the color results without any spectral layer separation (in lx or W/m2)
5. From the Integration type drop-down list, define how the illuminance is integrated in the sensor:
• Select Planar for an integration that is made orthogonally to the sensor plane.
• Select Radial if you need to follow specific street lighting illumination regulations.
With the Radial type, the calculation is based on the standard EN-13201 which gives mathematical formulas
equivalent to different types of illuminance.
The integration direction must be orthogonal to avoid wrong flux computation.
6. If you want to generate a ray file containing the rays that will be integrated to the sensor, select the Ray file type:
Note: According to the geometry on which you use the 3D Irradiance Sensor, the same ray can be stored several times at different locations, leading to an over-estimation of the flux.
Note: The size of a ray file is roughly 30 MB per Mray (for example, a 100 Mray file takes about 3 GB). Check the free space on the hard drive before generating a ray file.
8. If you selected Photometric or Radiometric, in the Additional measures section, define which type of
contributions (transmission, absorption, reflection) need to be taken into account for the integrating faces of
the sensor.
9. If you selected Colorimetric, in the Wavelength section, define the wavelength characteristics:
a) With Start and End, define the wavelength interval of the spectral data.
b) With Sampling or Resolution, define the number of wavelengths or the step between each wavelength to
take into account in the wavelength interval.
The 3D Irradiance Sensor is created in the 3D view and the Speos tree.
Important: This feature is only available under Speos Premium or Enterprise license.
• Click to select another origin than the absolute origin of the assembly.
• Click and select a line to define the Front direction (corresponding by default to the Y axis).
• Click and select a line to define the Top direction (corresponding by default to the Z axis).
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
4. If you want to define an interocular distance, set Stereo to True and type the interpupillary distance in mm.
Note: When you define a stereo sensor, make sure that the Front direction is horizontal, the Top direction
is vertical, and the Central resolution matches the intended 3D display.
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
The Resolution is automatically computed according to the sampling and wavelength start and end values.
• In Preview, from the Active Vision Field drop-down list, select which face of the sensor should be used as the default viewpoint for the Automatic framing option.
The Immersive Sensor is created and appears both in Speos tree and in the 3D view.
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
Important: This feature is only available under Speos Premium or Enterprise license.
2. If you want to modify the axis system of the sensor, click to select the origin, to define the X axis and to define the Y axis, or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
Tip: You can change the source's power or spectrum with the Virtual Lighting Controller in Virtual 3D
Photometric Lab.
• Select Face to include one layer per surface selected in the result.
Tip: Separating the result by face is useful when working on a reflector analysis.
• In the 3D view, click and select the contributing faces you want to include for layer separation.
Tip: Select a group (Named Selection) to separate the result with one layer for all the faces contained
in the group.
• Last Impact: with this mode, the ray is integrated in the layer of the last hit surface before hitting the sensor.
• Intersected one time: with this mode, the ray is integrated in the layer of a surface selected as a contributing face if the ray intersected it at least once.
5. If needed, adjust the dimensions of the sensor by either entering the values or using the 3D view manipulators.
6. Adjust the sampling of the sensor on X, Y and Z axes.
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
Note: If your computer has enough memory, deactivate the VR Sensor Memory Management (accessible
from the general options) for better performance.
Note: With the Observer Sensor, the axis system does not define the sensor's position but the observed
object/scene position. A sphere is then generated around the defined origin allowing you to adjust the
sensor's position by adjusting the sphere itself.
• Click to select an origin point.
• Click to select a line defining the horizontal direction.
• Click to select a line defining the vertical direction.
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
3. In Distance, adjust the radius of the sphere to narrow or widen the global field of vision.
4. In Focal, adjust the distance between the sensor radiance plane and the origin point of the observed object. The larger the focal distance, the closer the view is to the object.
5. If you want to define an Interocular distance, set Stereo to True and type a value in mm.
Note: When you define a stereo sensor, make sure the stereo sensor size (height, width) matches your
display and the stereo sensor focal distance matches the focal distance from your display.
6. From the Layer drop-down list, define how you want the data to be separated in the results:
• Select None if you want the simulation to generate a Speos360 file with one layer containing all sources.
• Select Source if you have created more than one source and want to include one layer per active source in the
result.
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
The Resolution is automatically computed according to the sampling and wavelength start and end values.
8. In Observer locations, define the sensor's position on the sphere around the target point:
Tip: You can use the 3D view manipulators to adjust the Vertical and Horizontal Start and End positions
of the sphere.
• Set the observer location horizontally by adjusting H start and end values.
• Set the observer location vertically by adjusting V start and end values.
• Adjust the V and H sampling.
The resolution is automatically computed.
• Set the horizontal size of the sensor by adjusting H start and end values (in mm).
• Set the vertical size of the sensor by adjusting V start and end values (in mm).
• Adjust the V and H sampling.
The resolution is automatically computed.
Note: When defined, this position is used as the default viewpoint for the Automatic framing option.
The Observer Sensor is created and appears both in Speos tree and in the 3D view.
Related information
Sensors Overview on page 164
Sensors integrate rays coming from the source to analyze the optical result.
Integration Angle on page 165
This section describes the Integration Angle parameter to better understand its possible impacts on the simulation results and provides advice on how to set it correctly. The integration angle must be defined for various sensors.
Note: The Light Field feature is in BETA mode for the current release.
Optical systems can be composed of sub-optical systems. When you focus on the main optical system, recalculating the propagation inside those sub-optical systems every time can be time-consuming. This time can be optimized: the goal is to speed up the simulation by pre-computing the propagation of the sub-optical systems.
To pre-calculate those sub-optical systems, the Light Field feature generates a *.olf (Optical Light Field) file thanks to a Light Field sensor, which is then used as a Light Field source in the main optical system. Thus, the simulation does not have to compute the propagation of the sub-optical system, reducing the simulation time.
For example, for an LED including chips with a lens on top, the original radiance simulation of the LED takes a reference time T, while the radiance simulation of the Light Field representing the LED takes 0.43 × T.
General Workflow
1. Create a Local Meshing of the surface to be integrated in the Optical Light Field file.
-Or-
At Step 3, use the Meshing Options of the Direct Simulation to generate the meshing.
Note: As no optical properties are defined on a Light Field meshing, the Light Field is fully absorbing.
2. Create a Light Field Sensor to define how the *.olf file will be generated.
3. Create a Direct Simulation, and if no Local Meshing is applied on the Light Field surface, define the Meshing
Options.
4. Run the Direct Simulation to generate the *.olf file.
5. Create a Light Field Source that uses the generated *.olf file as input.
6. Create and run an Interactive, Direct or Inverse Simulation of the main optical system, using the Light Field Source
as input.
Note: The Light Field feature is in BETA mode for the current release.
Oriented Faces
The Light Field sensor measures the distribution of light hitting the selected Oriented Faces after a reflection or a transmission. The faces are meshed using either the Direct simulation's options or a Local meshing, and the light distribution is stored for each triangle.
The smaller the angular resolution and the finer the meshing, the more memory is used to generate the Light Field file and the bigger the file size.
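The following back-of-envelope sketch illustrates this trade-off. All counts and the 4-byte storage per value are hypothetical assumptions, not the actual *.olf layout:

# Hypothetical estimate of Light Field storage: one stored value per mesh
# triangle and per angular sample (all values below are assumptions).
triangles = 20000       # mesh triangle count
theta_samples = 90      # incident-angle samples
phi_samples = 360       # azimuth samples
bytes_per_value = 4     # single-precision float

size_gb = triangles * theta_samples * phi_samples * bytes_per_value / 1e9
print("~%.1f GB" % size_gb)  # refining the mesh or the angular sampling grows this quickly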
Note: The Light Field feature is in BETA mode for the current release.
• Click to select an origin point.
• Click to select a line defining the horizontal direction.
• Click to select a line defining the vertical direction.
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
• Select Photometric if you want the sensor to consider the visible spectrum and get the results in lm/m2 or lx.
• Select Radiometric if you want the sensor to consider the entire spectrum and get the results in W/m2.
Note: With both Photometric and Radiometric types, the illuminance levels are displayed with a false
color and you cannot make any spectral or color analysis on the results.
• Select Spectral to store the spectral data according to the wavelength sampling defined (in lx or W/m2).
Note: Spectral results take more time to compute as they contain more information.
4. In the 3D view, click to select the oriented faces on which to measure the light distribution.
The selected faces appear in the list as Oriented Faces.
5. In Incident angles, define the angular sampling or the angular resolution.
Note: In the 2022 R1 version, the Start and End values are fixed to 0° and 90°.
Note: In the 2022 R1 version, the Start and End values are fixed to 0° and 360°.
7. If you selected Spectral as sensor type, set the spectral excursion to use for simulation.
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
Note: The Resolution is automatically computed according to the sampling and wavelength start and
end values.
The Light Field Sensor is created and appears both in Speos tree and in the 3D view.
Create and run a Direct Simulation containing the Light Field Sensor to generate the Light Field file.
Important: This feature is only available with the Speos Optical Sensor Test add-on.
Static
Solid-State or flashing LiDARs are static systems with no moving parts. With a camera-like behavior, the flashing
LiDAR sends one large laser pulse in one direction to detect and model its environment.
This type of LiDAR, however, rarely exceeds a 120° field of view.
Scanning
Scanning LiDAR systems use dual-axis mirrors to deflect laser beams into specific scanning angles. With this type of
LiDAR, you can define a custom beam detection pattern by describing the azimuth and elevation shooting angles
of the LiDAR. These systems have a detection range that strongly depends on the optical components and the field
of view of the imager. They can cover, at most, 180 degrees.
Rotating
Rotating LiDARs are systems with a mechanical rotation offering controlled aiming directions over a 360° field of view. This type of LiDAR enables multi-direction detection and high-speed scanning.
With this configuration, the scanning pattern is repeated sequentially for each rotation step of the LiDAR.
In Speos
• From solid-state to rotating LiDARs, define LiDAR systems based on currently commercialized LiDAR models.
• Manage all the characteristics of your LiDAR system: its energy, range, emission pattern etc.
Related concepts
LiDAR Sensor Parameters on page 225
The following section provides more information on static, scanning or rotating LiDAR sensor parameters.
Related tasks
Creating a LiDAR Sensor on page 244
This section gathers the LiDAR sensor creation procedures for the three LiDAR sensor types available in Speos.
Related information
Understanding LiDAR Simulation on page 408
This page gives a global presentation on LiDAR principles and simulation.
System Definition
The system definition corresponds to the definition of the LiDAR physical module and its emission pattern.
• Axis System: the system's axis system defines the position and orientation of the "physical" LiDAR module. This
axis is only required for rotating LiDARs as it is used as the revolution axis of the system.
During the simulation, the Y vector of this axis is used as reference to allow the other axes (the source's and sensor's
axes) to revolve around it.
• Trajectory file: a trajectory describes the different positions and orientations of features in time. The Trajectory
file is used to simulate dynamic objects.
• Firing Sequence: firing sequence files are files used to describe a LiDAR's emission pattern.
Two types of firing sequence files are available in Speos allowing you to define the emission pattern of a scanning
or a rotating LiDAR.
Source Definition
The source definition corresponds to the definition of the emitter channel of the LiDAR.
• Axis system: sets the position and orientation of the source channel.
Note: When using an intensity distribution file, verify the orientation of the IES file to correctly orient the beam, which changes according to the IES type (IESNA type A, B or C).
Example:
Source X direction = X axis
Source Y direction = Y axis
• Spectrum: for static LiDAR source, the spectrum is monochromatic. Only one wavelength must be defined to set
the spectral excursion of the LiDAR.
For scanning or rotating LiDARs, the spectrum can either be monochromatic or defined by a spectrum file
(*.spectrum) as input.
• Intensity: for static LiDARs, the intensity of the source is defined using an IES (*.ies) or eulumdat (*.ldt) file.
For scanning or rotating LiDARs, the intensity of the source can be defined either by using an intensity distribution
file (*.ies or *.ldt file) as input or by manually defining an asymmetrical gaussian profile.
• Energy: the source energy corresponds to the laser pulse energy expressed in Joules (J).
• The Minimum intensity threshold is the threshold under which the signal of the LiDAR source is not taken into account. This helps the LiDAR differentiate what should be used or ignored when calculating the LiDAR's field of view.
Note: For a static LiDAR, the energy and minimum intensity threshold are defined in the interface.
For scanning or rotating LiDARs, only the pulse energy must be defined in the scanning sequence file.
Sensor Definition
The sensor definition corresponds to the receiver channel of the LiDAR.
• Axis System: sets the position and orientation of the sensor channel.
• The Optics section allows you to model the optical system in front of the imager.
º Distortion file: the .OPTDistortion file is used to introduce/replicate the optical distortion of the lens.
º The Transmittance corresponds to the capacity of the lens to allow the light to pass through.
º The Focal length defines the distance from the center of the optical system to the focus.
º The Pupil is the diameter of the LiDAR sensor's aperture.
Depending on the Pupil size, rays coming from the same direction are not integrated in the same pixel. If you keep increasing the pupil size, you will observe blur.
º The Horizontal and Vertical FOV correspond to the fields of view calculated by the sensor using the Focal Length, Distortion File, and Imager Width and Height (a simplified sketch follows this list).
• The Imager represents the LiDAR sensor (receiver channel).
º The Imager Width and Height correspond to the horizontal and vertical size of the sensor.
Note: For scanning and rotating LiDARs, only the Width and Height of the imager must be defined as
the sensor is a one-pixel sensor.
• The Aiming area allows you to define the position, size and shape of the LiDAR's protective sheet.
Note: For more information, see LiDAR Sensor Aiming Area Parameters.
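As referenced above, here is a simplified pinhole sketch of how the fields of view relate to the imager size and the focal length. It deliberately ignores the *.OPTDistortion file, and the numeric values are hypothetical:

import math

# Pinhole approximation of the FOV (ignores the distortion file on purpose).
def approx_fov_deg(imager_size_mm, focal_length_mm):
    return 2.0 * math.degrees(math.atan(imager_size_mm / (2.0 * focal_length_mm)))

print("Horizontal FOV: %.1f deg" % approx_fov_deg(8.0, 4.0))  # hypothetical 8 mm imager width
print("Vertical FOV: %.1f deg" % approx_fov_deg(6.0, 4.0))    # hypothetical 6 mm imager height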
Related concepts
LiDAR Sensor Parameters on page 225
The following section provides more information on static, scanning or rotating LiDAR sensor parameters.
Related tasks
Creating a LiDAR Sensor on page 244
This section gathers the LiDAR sensor creation procedures for the three LiDAR sensor types available in Speos.
Description
Firing sequence files are used to describe a LiDAR's emission pattern. These files embed several pieces of information, such as the firing time, firing angles and source power, that are used during the simulation to draw the scanning sequence.
Two types of firing sequence files are available in Speos allowing you to define the emission pattern of a scanning
or a rotating LiDAR.
For more information on how to generate a scanning sequence file, refer to Generating a Scanning Sequence File.
• Shooting Time (s): time when the pulse is emitted in seconds (s)
• Pulse energy (J): Energy of the pulse in Joules
• Pulse type: From pulses, DIRAC, Gaussian, Rectangular, Triangular
Note: From pulse means that the Pulse type inherits the Type of pulses value.
• Pulse width (s): duration of one pulse in seconds (s), Δt (only for Gaussian, Triangular, Rectangular pulse types)
Note: If you enter a Pulse width value different from the Width of pulse value, the Pulse width has
priority.
Note: When using the Dirac pulse type, the Width of pulse is skipped.
• Azimuth angle (°): Azimuth angles within the range [0;360[ expressed in degrees
• Elevation angle (°): Elevation angles within the range [-90;90] expressed in degrees
Note: Azimuth and elevation angles are expressed with respect to the source's axis system of the LiDAR
sensor.
• Position X Y Z (mm): Starting position of the beam with respect to the source's axis system of the LiDAR sensor.
That means the beam can be offset.
• Emissive Surface Description: corresponds to the emissive surface of the beam.
º Surface shape: Point, Rectangular, Elliptic
º Width: width of the beam (for Rectangular and Elliptic shapes)
º Height: height of the beam (for Rectangular and Elliptic shapes)
º Psi angle (°): corresponds to the rotation of the surface shape around the Z axis
• Intensity Description
º Intensity type: From source definition, Sampled, Gaussian
If you set From source definition, the Source - Intensity section's parameters from the LiDAR sensor definition
are used.
If you set Sampled, define an Intensity file (IES file).
If you set Gaussian, define the three following Gaussian parameters.
º Intensity file: IES file that will be converted into a readable intensity distribution when generating the *.OPTScanSequence (or *.txt) file. The intensity value is defined over 2π steradians.
º Gaussian FWHM along X and Y (°): Gaussian intensity distribution of the beam
º Gaussian total angle (°): Total angle of the beam for a Gaussian intensity distribution
• Spectrum description
º Spectrum type: From source definition, Sampled, Monochromatic
If you set From source definition, the Source - Spectrum section's parameters from the LiDAR sensor definition
are used.
If you set Sampled, define a Spectrum file.
If you set Monochromatic, define the Wavelength (nm).
º Spectrum file: Specific spectrum file for the beam
º Wavelength (nm): wavelength for monochromatic spectrum type
Note: When using the DIRAC pulse type (line 3), the Width of the pulse (line 4) is skipped, meaning no
blank line.
Note: Azimuth and elevation angles are expressed with respect to the source's axis system of the LiDAR
sensor.
Note: The rotation time is only indicative; it does not impact the simulation.
Related tasks
Creating a LiDAR Sensor on page 244
This section gathers the LiDAR sensor creation procedures for the three LiDAR sensor types available in Speos.
Related reference
Understanding LiDAR Sensor Parameters on page 225
This page describes the parameters to set when creating a static, scanning or rotating LiDAR sensor.
Axis System
The Axis System of the aiming area can either be inherited from the sensor's axis system or defined manually. In the
3D view, a preview of the aiming area is also displayed to help position it to the desired location.
Dimensions
The dimensions of the aiming area are either inherited from the sensor's pupil diameter or have to be edited manually.
Definition
The aiming area is defined by its shape and size.
Two shape types are available, Rectangular or Elliptical.
Description
Positions and orientations are expressed with respect to a reference coordinate system, so the trajectory is relative to this coordinate system.
For instance, the same trajectory file can be used to describe a translation movement of a car as well as the LiDAR
sensor placed on it.
Trajectory is described in a *.json file that contains each chronological sample:
Script Example
Trajectory files can easily be accessed (read or written) using dedicated scripting interfaces available in IronPython and Python.
Note: Make sure to use the 3.9 version of IronPython or Python language to write your scripts.
IronPython Example
import sys
import clr
clr.AddReferenceToFile("Optis.Core_net.dll")
clr.AddReferenceToFile("Optis.Data_net.dll")
import OptisCore
import OptisData
try:
    # First sample: position and orientation at t = 0 s
    firstData = OptisCore.DAxisSystemData()
    firstData.Time = 0.0
    firstData.Origin.Init(0, 0, 0)
    firstData.Direction_X.Init(1, 0, 0)
    firstData.Direction_Y.Init(0, 1, 0)
    # Last sample: translated 3000 mm along Z at t = 7 s
    lastData = OptisCore.DAxisSystemData()
    lastData.Time = 7.0
    lastData.Origin.Init(0, 0, 3000)
    lastData.Direction_X.Init(1, 0, 0)
    lastData.Direction_Y.Init(0, 1, 0)
    # Build a trajectory containing the two samples
    dmTrajectory = OptisCore.DAxisSystemTrajectory()
    dmTrajectory.Trajectory.Resize(2)
    dmTrajectory.Trajectory.Set(0, firstData)
    dmTrajectory.Trajectory.Set(1, lastData)
    # Write the trajectory to a *.json file
    strPathTrajectoryFile = OptisCore.String.From(R"C:\trajectory.json")
    pathTrajectoryFile = OptisCore.Path(strPathTrajectoryFile)
    cAxisSystemTrajectoryWriter = OptisData.CAxisSystemTrajectoryWriter()
    cAxisSystemTrajectoryWriter.Open(pathTrajectoryFile)
    cAxisSystemTrajectoryWriter.Write(dmTrajectory)
    cAxisSystemTrajectoryWriter.Close()
    # Read the trajectory back
    cAxisSystemTrajectoryReader = OptisData.CAxisSystemTrajectoryReader()
    cAxisSystemTrajectoryReader.Open(pathTrajectoryFile)
    dmTrajectory = cAxisSystemTrajectoryReader.Read()
    cAxisSystemTrajectoryReader.Close()
    print "Done"
except:
    print "Exception raised"
Python Example
import sys
# Assumption: the OptisCore and OptisData Python modules shipped with Speos
# are importable (for example, their directory was added to sys.path beforehand).
import OptisCore
import OptisData
try:
    # First sample: position and orientation at t = 0 s
    firstData = OptisCore.DAxisSystemData()
    firstData.Time = 0.0
    firstData.Origin.Init(0, 0, 0)
    firstData.Direction_X.Init(1, 0, 0)
    firstData.Direction_Y.Init(0, 1, 0)
    # Last sample: translated 3000 mm along Z at t = 7 s
    lastData = OptisCore.DAxisSystemData()
    lastData.Time = 7.0
    lastData.Origin.Init(0, 0, 3000)
    lastData.Direction_X.Init(1, 0, 0)
    lastData.Direction_Y.Init(0, 1, 0)
    # Build a trajectory containing the two samples
    dmTrajectory = OptisCore.DAxisSystemTrajectory()
    dmTrajectory.Trajectory.Resize(2)
    dmTrajectory.Trajectory.Set(0, firstData)
    dmTrajectory.Trajectory.Set(1, lastData)
    # Write the trajectory to a *.json file
    strPathTrajectoryFile = OptisCore.String(R"C:\trajectory.json")
    pathTrajectoryFile = OptisCore.Path(strPathTrajectoryFile)
    cAxisSystemTrajectoryWriter = OptisData.CAxisSystemTrajectoryWriter()
    cAxisSystemTrajectoryWriter.Open(pathTrajectoryFile)
    cAxisSystemTrajectoryWriter.Write(dmTrajectory)
    cAxisSystemTrajectoryWriter.Close()
    print("Done")
except:
    print("Exception raised")
Note: Make sure to use the 3.9 version of IronPython or Python language to write your scripts.
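If you just want to check the generated file, the short sketch below uses only the standard json module. The schema is produced by CAxisSystemTrajectoryWriter and is not detailed in this guide, so inspect the structure before relying on any key names:

import json

# Load the trajectory written by CAxisSystemTrajectoryWriter.
# No key name is assumed here: print the top-level structure first.
with open(r"C:\trajectory.json") as f:
    trajectory = json.load(f)
print(type(trajectory))
print(trajectory)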
import sys
import clr
import os
from os import path
clr.AddReferenceToFile("Optis.Core_net.dll")
clr.AddReferenceToFile("Optis.Data_net.dll")
tmpVector = OptisCore.Vector3_double()
tmpVector.Init(iFrame.DirX.X,
iFrame.DirX.Y,
iFrame.DirX.Z)
tmpVector.Normalize()
dAxisSystemData.Direction_X.Init(tmpVector.Get(0),
tmpVector.Get(1),
tmpVector.Get(2))
tmpVector.Init(iFrame.DirY.X,
iFrame.DirY.Y,
iFrame.DirY.Z)
tmpVector.Normalize()
dAxisSystemData.Direction_Y.Init(tmpVector.Get(0),
tmpVector.Get(1),
tmpVector.Get(2))
return dAxisSystemData
# Working directory
workingDirectory = path.dirname(GetRootPart().Document.Path.ToString())
# SpaceClaim InputHelper
ihTrajectoryName = InputHelper.CreateTextBox("Trajectory.1", 'Trajectory name: ', 'Enter the name of the trajectory')
ihFrameFrequency = InputHelper.CreateTextBox(30, 'Timeline frame rate:', 'Set timeline frame rate (s-1)', ValueType.PositiveInteger)
ihReverseDirection = InputHelper.CreateCheckBox(False, "Reverse direction", "Reverse direction on trajectory")
ihObjectSpeed = InputHelper.CreateTextBox(50, 'Object speed:', 'Set the moving object speed (km/h)', ValueType.PositiveDouble)
# Trajectory file
trajectoryName = str(ihTrajectoryName.Value)
trajectoryAcquisitionFile = workingDirectory + "\\" + trajectoryName
# Trajectory curve
trajCurveSelection = ihTrajectory.Value
trajectoryCurve = trajCurveSelection.Items[0]
# Acquisition of positions
def GetPositionOrientation(i_CoordSys, i_ReferenceCoordSys):
    # change base of current position
    newMatrix = Matrix.CreateMapping(i_ReferenceCoordSys.Frame)
pathLength = i_trajectoryCurve.Shape.Length
selectedCurve = Selection.Create(i_trajectoryCurve)
currentPosition = 0.0
timeStamp = 0.0
positionTable = []
timeTable = []
if currentPosition == 0:
    options.Copy = True
else:
    options.Copy = False
if i_isReversedTrajectory:
    result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, 1 - currentPosition, options)
    if currentPosition == 0:
        newselectedCoordSys = result.GetCreated[ICoordinateSystem]()[0]
        selectedCoordSys = Selection.Create(newselectedCoordSys)
        if newselectedCoordSys.Frame.Origin != i_trajectoryCoordSys.Frame.Origin:
            options.Copy = False
            result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, currentPosition, options)
else:
    result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, currentPosition, options)
    if currentPosition == 0:
        newselectedCoordSys = result.GetCreated[ICoordinateSystem]()[0]
        selectedCoordSys = Selection.Create(newselectedCoordSys)
        if newselectedCoordSys.Frame.Origin != i_trajectoryCoordSys.Frame.Origin:
            options.Copy = False
            result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, currentPosition, options)
if result:
    movedFrame = GetPositionOrientation(newselectedCoordSys, i_trajectoryCoordSys)
    positionTable.append(movedFrame)
    timeTable.append(timeStamp)
result = Delete.Execute(selectedCoordSys)
dAxisSystemData_Table = []
for time in timeTable:
    timeIndex = timeTable.index(time)
    fFrame = positionTable[timeIndex]
if len(dAxisSystemData_Table) > 0:
    dmTrajectory = OptisCore.DAxisSystemTrajectory()
    dmTrajectory.Trajectory.Resize(len(dAxisSystemData_Table))
    dmTrajectory.Trajectory.Set(dAxisSystemData_Table.index(dAxisSystemData), dAxisSystemData)
    # Reconstructed line (assumption): build the output *.json path from the
    # working directory and the trajectory name.
    pathTrajectoryFile = path.join(workingDirectory, str(ihTrajectoryName.Value) + ".json")
    strPathTrajectoryFile = OptisCore.String.From(pathTrajectoryFile)
    pathTrajectoryFile = OptisCore.Path(strPathTrajectoryFile)
    cAxisSystemTrajectoryWriter = OptisData.CAxisSystemTrajectoryWriter()
    cAxisSystemTrajectoryWriter.Open(pathTrajectoryFile)
    cAxisSystemTrajectoryWriter.Write(dmTrajectory)
    cAxisSystemTrajectoryWriter.Close()
Note: From Line 156, you will define the parameters to set as indicated by the comments. However, the line numbering can vary according to the modifications you apply.
2. From Line 162, define the PulseShape (Rectangular, Triangular, Gaussian, Dirac) and PulseWidth corresponding to the default width of pulses in seconds.
3. From Line 166, if you want to apply a unique wavelength, define the default monochromatic spectrum in nm.
4. From Line 171, if you want to apply a spectrum with multiple wavelengths, define the sampled spectrum in nm.
scan_sequence_data_model.SpectrumList.Push_back(OptisCore.SharedPtr_Cast_DISpectrum_From_DSpectrumSampled(spectrum_Sampled))
for p in range(numberOfPhiAngles):
    intensity.Ptr().Distribution.m_X.Set(p, p * 360.0 / (numberOfPhiAngles - 1))
pos = OptisCore.Extent_uint_2()
for t in range(numberOfThetaAngles):
    theta = 90.0 * t / (numberOfThetaAngles - 1)
    intensity.Ptr().Distribution.m_Y.Set(t, theta)
    pos.Set(1, t)
    for p in range(numberOfPhiAngles):
        pos.Set(0, p)
        # lambertian (no need to normalize)
        intensity.Ptr().Distribution.m_Value.Set(pos, math.cos(math.radians(theta)))
scan_sequence_data_model.SampledIntensityList.Push_back(intensity)
scanParameters1 = OptisCore.DScanParam()  # reconstructed from the template below
scanParameters1.ShootingTime = 0.001  # s
scanParameters1.PulseEnergy = 0.05  # J
# If you want to use the PulseWidth defined in Step 3, do not add the parameter line for PulseWidth
scanParameters1.PulseWidth = 0.00002  # s
# If you do not want to apply an angle, do not add the corresponding parameter line
scanParameters1.RotationAngle = 0.0132  # azimuth angle in degrees
scanParameters1.TiltAngle = 0.0276  # elevation angle in degrees
scanParameters1.PsiAngle = 0.234  # degrees
# If you selected Gaussian as EIntensityType, add the following parameter lines
scanParameters1.GaussianXWidth = 0.5  # FWHM in degrees
scanParameters1.GaussianYHeight = 0.5  # FWHM in degrees
scanParameters1.GaussianTotalAngle = 10.0  # degrees
scanParameters1.SpectrumIndex = 1
scan_sequence_data_model.ScanParamList.Push_back(scanParameters1)
For each beam you add, define its parameters using the following template:
scanParametersN = OptisCore.DScanParam()
scanParametersN.PARAMETER_TO_DEFINE
scanParametersN.PARAMETER_TO_DEFINE
...
...
scan_sequence_data_model.ScanParamList.Push_back(scanParametersN)
# serialization file
currentdirectory = os.path.dirname(os.path.realpath(__file__))
sequence_file_name = OptisCore.String(os.path.join(currentdirectory, "sampleFile.OPTScanSequence"))
sequence_filepath = OptisCore.Path(sequence_file_name)
3. Define the axis system of the source (emitter channel) by setting its position and orientation.
Note: To correctly orient the beam, first verify the orientation of the IES file.
• Click to select one point for the origin (point where the pulse is emitted).
• Click to select a line to define the horizontal axis (corresponding to the X axis of the IES file).
• Click to select a line to define the vertical axis (corresponding to the Y axis of the IES file).
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
5. To define the source intensity distribution, in Source-Intensity, click Browse to load an IES (.ies) or Eulumdat
(.ldt) file.
6. In Pulse energy, define the energy of a single pulse emitted by the LiDAR source (in Joules).
7. Set a Minimum intensity threshold to define a threshold under which the signal of the LiDAR source is not taken into account. This helps the LiDAR differentiate what should be used or ignored when calculating the LiDAR's field of view.
8. Define the axis system of the sensor (receiver channel):
• Click to select one point for the origin (point from which the pulse is received).
• Click to select a line to define the horizontal axis of the sensor.
• Click to select a line to define the vertical axis of the sensor.
• or click and select a coordinate system to autofill the Axis System.
Note: The *.OPTDistortion file is used to introduce/replicate the optical distortion of the lens. Every
lens has varying degrees of distortion.
• Define the Transmittance (capacity to allow the light to pass through) of the lens.
• In Focal length, define the distance from the center of the optical system to the focus.
• In Pupil, define the diameter of the objective aperture.
The Horizontal and Vertical Field of Views (FOV) are calculated according to the distortion, transmittance, focal
length and pupil of the objective.
10. Define the size and resolution of the imager (the sensor) that is placed behind the objective.
• In Start, type the minimum distance from which the LiDAR is able to operate and integrate rays.
• In End, type the maximum distance up to which the LiDAR is able to operate and integrate rays.
• In Spatial accuracy, define the sampling used to save the Raw time of flight.
The time of flight is the time taken by the light to travel from the LiDAR to the object.
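As a reminder of the physics behind the raw time of flight, the sketch below applies the standard round-trip relation between time and distance; it is plain physics, not a Speos API:

# Round-trip relation between time of flight and target distance.
C_M_PER_S = 2.998e8  # speed of light in m/s

def target_distance_m(time_of_flight_s):
    # Divide by 2: the pulse travels to the target and back.
    return C_M_PER_S * time_of_flight_s / 2.0

print("%.1f m" % target_distance_m(667e-9))  # a 667 ns echo corresponds to ~100 m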
b) If you want to adjust the axis system (that is, by default, the same as the sensor's), select one point for the Origin and two lines for X and Y directions or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
c) From the Type drop-down list, select the shape you want to use for the aiming area:
• Select Rectangle
• Select Elliptic
d) Define the Width and Height of the cover lens.
The LiDAR sensor is created and is visible both in Speos tree and in the 3D view.
You can now create a LiDAR Simulation.
Related concepts
LiDAR Sensor Parameters on page 225
The following section provides more information on static, scanning or rotating LiDAR sensor parameters.
Related information
Creating a LiDAR Simulation on page 412
Creating a LiDAR simulation allows you to generate output data and files that enable to analyze a LiDAR system and
configuration. The LiDAR simulation supports several sensors at a time.
3. Define the axis system of the LiDAR physical module by selecting an origin point, a line for the horizontal direction and a line for the vertical direction, or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
4. If you want to create a dynamic LiDAR sensor, in the Trajectory file field, click Browse to load a trajectory file
(.json).
When a trajectory file is assigned to a LiDAR sensor and the feature is edited, the trajectory is displayed in the 3D
view.
5. In Firing Sequence, click Browse to load a *.txt file describing the scanning pattern of the LiDAR.
Note: If you need more information on scanning pattern files, see Firing Sequence Files.
Note: When using an intensity distribution file as source, first verify the orientation of the IES file to
correctly orient the beam.
• Click to select one point for the origin (point where the pulse is emitted).
• Click to select a line to define the horizontal axis (corresponding to the X axis of the IES file).
• Click to select a line to define the vertical axis (corresponding to the Y axis of the IES file).
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
If you want to see the file properties or edit the file, click Open file to open the Spectrum Editor.
• Select Gaussian to manually define the intensity distribution profile of the source.
• Click to select one point for the origin (point from which the pulse is received).
• Click to select a line to define the horizontal axis of the sensor.
• Click to select a line to define the vertical axis of the sensor.
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
10. In the Optics section, define the properties of the sensor's objective:
Note: The *.OPTDistortion file is used to introduce/replicate the optical distortion of the lens. Every
lens has varying degrees of distortion.
• Define the Transmittance (capacity to allow the light to pass through) of the lens:
For a monochromatic source, define a constant Transmittance.
For a source using a *.spectrum file, click Browse to load a Transmittance spectrum file.
• In Focal length, define the distance from the center of the optical system to the focus.
• In Pupil, define the diameter of the objective aperture.
The Horizontal and Vertical Field of Views (FOV) are automatically calculated according to the parameters of
the objective.
Note: The fields of view are only indicative and will not be used for LiDAR simulation.
11. Define the Width and Height of the imager (the sensor) that is placed behind the objective.
12. When Beta features are enabled, if you want to define the imager resolution, set Resolution (beta) to True and
define the number of Horizontal pixels and Vertical pixels.
13. Define the sensor range and accuracy:
• In Start, type the minimum distance from which the LiDAR is able to operate and integrate rays.
• In End, type the maximum distance up to which the LiDAR is able to operate and integrate rays.
• In Spatial accuracy, define the sampling used to save the Raw time of flight.
The time of flight is the time taken by the light to travel from the LiDAR to the object.
b) If you want to adjust the axis system (that is, by default, the same as the sensor's), select one point for the Origin and two lines for X and Y directions or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
c) From the Type drop-down list, select the shape you want to use for the aiming area:
• Select Rectangle
• Select Elliptic
d) Define the Width and Height of the cover lens.
The LiDAR sensor is created and is visible both in Speos tree and in the 3D view. This type of LiDAR generates only
one simulation result (*.OPTTimeOfFlight).
You can now create a LiDAR Simulation.
Related concepts
LiDAR Sensor Parameters on page 225
The following section provides more information on static, scanning or rotating LiDAR sensor parameters.
Related information
Creating a LiDAR Simulation on page 412
Creating a LiDAR simulation allows you to generate output data and files that enable to analyze a LiDAR system and
configuration. The LiDAR simulation supports several sensors at a time.
3. Define the axis system of the LiDAR physical module by selecting an origin point, a line for the horizontal direction and a line for the vertical direction, or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
4. If you want to create a dynamic LiDAR sensor, in the Trajectory file field, click Browse to load a trajectory file
(.json).
When a trajectory file is assigned to a LiDAR sensor and the feature is edited, the trajectory is displayed in the 3D
view.
5. In Firing Sequence, click Browse to load two *.txt files respectively describing the scanning pattern and the
rotation pattern of the LiDAR.
Note: If you need more information on scanning pattern files, see Firing Sequence Files.
Note: When using an intensity distribution file as source, first verify the orientation of the IES file to
correctly orient the beam.
• Click to select one point for the origin (point where the pulse is emitted).
• Click to select a line to define the horizontal axis (corresponding to the X axis of the IES file).
• Click to select a line to define the vertical axis (corresponding to the Y axis of the IES file).
• or click and select a coordinate system to autofill the Axis System.
If you want to see the file properties or edit the file, click Open file to open the Spectrum Editor.
• Select Gaussian to manually define the intensity distribution profile of the source.
• Click to select one point for the origin (point from which the pulse is received).
• Click to select a line to define the horizontal axis of the sensor.
• Click to select a line to define the vertical axis of the sensor.
• or click and select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and arbitrarily) calculated by Speos in the 3D view. As a result, the axis shown in the Definition panel may not correspond to the axis in the 3D view. Refer to the axis in the 3D view.
10. In the Optics section, define the properties of the sensor's objective:
Note: The *.OPTDistortion file is used to introduce/replicate the optical distortion of the lens. Every
lens has varying degrees of distortion.
• Define the Transmittance (capacity to allow the light to pass through) of the lens:
For a monochromatic source, define a constant Transmittance.
For a source using a *.spectrum file, click Browse to load a Transmittance spectrum file.
• In Focal length, define the distance from the center of the optical system to the focus.
• In Pupil, define the diameter of the objective aperture.
The Horizontal and Vertical Field of Views (FOV) are automatically calculated according to the parameters of
the objective.
Note: The fields of view are only indicative and will not be used for LiDAR simulation.
11. Define the Width and Height to define the size of the imager (the sensor) that is placed behind the objective.
12. When Beta features are enabled, if you want to define the imager resolution, set Resolution (beta) to True and
define the number of Horizontal pixels and Vertical pixels.
13. Define the sensor range and accuracy:
• In Start, type the minimum distance from which the LiDAR is able to operate and integrate rays.
• In End, type the maximum distance up to which the LiDAR is able to operate and integrate rays.
• In Spatial accuracy, define the sampling used to save the Raw time of flight.
The time of flight is the time taken by the light to travel from the LiDAR to the object.
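For intuition, the following small computation (all values and the bin-count formula are assumptions for illustration, not Speos internals) relates these range parameters to the raw time of flight:

c = 299792458.0    # speed of light (m/s)
start_m = 0.5      # Start: minimum operating distance (hypothetical)
end_m = 200.0      # End: maximum operating distance (hypothetical)
accuracy_m = 0.05  # Spatial accuracy (hypothetical)
# Assumed number of samples used to save the raw time of flight:
samples = int(round((end_m - start_m) / accuracy_m))
# One-way time of flight for an object at the End distance:
tof_at_end_s = end_m / c
print(samples, tof_at_end_s)  # 3990 samples, about 667 ns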
b) If you want to adjust the axis system (that is, by default, the same as the sensor's), select one point for the
Origin and two lines for X and Y directions or click and select a coordinate system to autofill the Axis
System.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in the 3D view. However, the axis shown in the Definition panel may not correspond to the axis in the 3D view; when in doubt, refer to the axis in the 3D view.
c) From the Type drop-down list, select the shape you want to use for the aiming area:
• Select Rectangle
• Select Elliptic
d) Define the Width and Height of the cover lens.
The LiDAR sensor is created and is visible both in Speos tree and in the 3D view. This type of LiDAR generates only
one simulation result (*.OPTTimeOfFlight).
You can now create a LiDAR Simulation.
Related concepts
LiDAR Sensor Parameters on page 225
The following section provides more information on static, scanning or rotating LiDAR sensor parameters.
Related information
Creating a LiDAR Simulation on page 412
Creating a LiDAR simulation allows you to generate output data and files that enable you to analyze a LiDAR system and configuration. The LiDAR simulation supports several sensors at a time.
• In the 3D view, click to select one point for the origin (this point places the sensor in the scene and defines
from where the pulses are sent).
Two arrows appear in the 3D view and indicate the sensor's orientation.
• If you need to adjust the Horizontal direction (LiDAR's line of sight), in the 3D view, click and select a line.
The horizontal direction is used as a reference for the horizontal and vertical fields of view.
• If you need to adjust the Vertical direction (the rotation axis), in the 3D view, click and select a line.
The vertical direction is normal to the horizontal axis and is considered as a rotation axis.
• Or click and select a coordinate system to autofill the Axis System.
3. From the Type drop-down list, select one of the predefined types of color scale.
The scale is used to display the results in the 3D view. The distance from the object is illustrated by color variation.
a) If needed, edit the Start and End points of the horizontal field of view.
b) Adjust the Resolution of the sensor (in degrees).
5. In Operation range, bound the sensor's detection range by editing the Start and End values.
Note: The vertical channel file basically defines the vertical resolution of the rotating LiDAR by listing each channel's angular direction. The angles range from -90° to 90°.
The file is structured as follows:
Line 1: total number of vertical channels
Line 2: empty
Line 3: 45 (first channel's angular direction)
Line 4: 30 (second channel's angular direction), etc.
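As an illustration, the following Python sketch writes such a vertical channel file (the file name and the channel directions are hypothetical):

# Write a vertical channel file following the structure above.
angles = [45.0, 30.0, 15.0, 0.0, -15.0, -30.0, -45.0]  # hypothetical directions (deg)
with open("vertical_channels.txt", "w") as f:
    f.write("%d\n" % len(angles))  # line 1: total number of vertical channels
    f.write("\n")                  # line 2: empty
    for angle in angles:           # lines 3 and beyond: one direction per line
        f.write("%g\n" % angle)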
Related information
Creating a Geometric Rotating LiDAR Simulation on page 430
Creating a Geometric Rotating LiDAR simulation allows you to perform field of view studies. A field of view study
allows you to quickly identify what can or must be optimized (for example, the number, position and direction of
sensors) in a LiDAR system.
Note: The Light Expert Group feature is in BETA mode for the current release.
2. In the 3D view, click and select one or more Irradiance sensors and/or one Intensity sensor at maximum.
Make sure all sensors have the same Layer type: None, Data Separated by Source, or Data Separated by Sequence.
Tip: You can also select a Named selection composed of sensors. For more information, refer to the
Grouping section.
3. Click Validate.
The sensors are added to the Light Expert Group.
You can now create and run a Direct Simulation containing the Light Expert Group.
Click here for more information on how to perform a multi-sensors light expert analysis.
Note: This feature is only available with the Speos Optical Sensor Test add-on.
Tip: For the Distortion Curve (V1), the origin corresponds to the entrance pupil point (EPP) (Speos Camera
Origin).
For the Speos Lens System (V2), the origin corresponds to the center of the imager.
General
Mode
• The Geometric mode is a simplified version of the Camera Sensor definition parameters. The rendering properties
are used by default during the mesh generation.
The Geometric camera sensor can be used only in inverse simulations without sources and other sensors.
When you enable the Geometric mode, the parameters relative to spectrum or spectral data are disabled.
• The Photometric / Colorimetric mode allows you to set all Camera Sensor parameters, including the photometric definition parameters.
Layer
Layer allows you to choose whether to store all photometric results in the same XMP layer:
• None includes the simulation's results in one layer.
• Data Separated By Source includes one layer per active source in the result.
Optics
Horizontal field of view (deg): Horizontal field of view calculated using the focal length, the distortion file, and the width and height of the sensor.
Vertical field of view (deg): Vertical field of view calculated using the focal length, the distortion file, and the width and height of the sensor.
Focal length (mm): Distance between the center of the optical system and the focus. For more information, click here.
F-number: Represents the aperture of the front lens. The F-number has no impact on the result. For more information, click here.
Distortion: Optical aberration that deforms and bends straight lines. The distortion is expressed in an *.OPTDistortion file. For more information on the *.OPTDistortion file, see Distortion curve. For more information, click here.
Sensor
Horizontal pixels: Defines the number of horizontal pixels, corresponding to the camera resolution.
Vertical pixels: Defines the number of vertical pixels, corresponding to the camera resolution.
Width: Defines the sensor's width.
Height: Defines the sensor's height.
Color mode: Available only in Photometric / Colorimetric mode. Color: simulation results are available in color according to the White Balance mode. Monochrome: simulation results are available in grey scale.
Gamma correction: Available only in Photometric / Colorimetric mode. Compensation of the curve before the display on the screen (see the sketch after this table). For more information, see Monitor.
PNG bits: Available only in Photometric / Colorimetric mode. Choose between 8, 10, 12 and 16 bits.
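To illustrate what the Gamma correction compensates, here is a minimal sketch using the standard display-gamma convention (the convention and the values are our assumptions, not a description of Speos internals):

# Standard gamma correction of a normalized intensity value.
gamma = 2.2       # typical display gamma (hypothetical value)
linear = 0.5      # normalized linear intensity in [0, 1]
corrected = linear ** (1.0 / gamma)  # compensation applied before display
print(corrected)  # about 0.73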
Sensitivity
The Sensor sensitivity is available only in Photometric / Colorimetric mode.
Sensor sensitivity allows you to define the spectral sensitivity of the camera sensor according to the selected color mode.
Wavelength
The Wavelength is available only in Photometric / Colorimetric Mode.
Wavelength allows you to define the spectral range in which the inverse simulation propagates rays from the sensor.
• Start (nm) defines the minimum wavelength.
• End (nm) defines the maximum wavelength.
• Sampling defines the number of wavelengths to be taken into account between the minimum and maximum wavelengths set (see the sketch below).
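For illustration, the sketch below lists evenly spaced wavelength samples over the range; the even spacing and the step formula are assumptions made for the example, not a statement about Speos internals:

# Hypothetical example: wavelength samples between Start and End.
start_nm = 400.0  # Start (nm), minimum wavelength
end_nm = 700.0    # End (nm), maximum wavelength
sampling = 13     # number of wavelengths to take into account
step_nm = (end_nm - start_nm) / (sampling - 1)
wavelengths = [start_nm + i * step_nm for i in range(sampling)]
print(wavelengths)  # 400.0, 425.0, ..., 700.0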
Visualization
Visualization allows you to define the elements (Camera field, Object field, Aperture) of the camera to display in
the 3D view.
Visualization Radius changes the radius of the Object field of the camera.
Figure 39. Display of the Camera field, Aperture and Object field.
The camera model allows you to render the behavior of the complete camera system, that is, how light propagates through the lens system.
In Speos, two models with different levels of complexity are available to imitate the behavior of a camera. The complexity of the models is based on the number and variety of inputs they take into account.
Both models are described in an *.OPTDistortion file, used as input of the camera sensor.
The Ansys support provides you with the Speos Lens System Importer tool to generate the *.OPTDistortion file
from a Zemax complete camera system or a Speos complete camera system.
The goal of using an *.OPTDistortion file in a simulation is to:
• perform fast simulations of the camera behavior (compared to simulating the complete camera system)
• reproduce most of the real lens properties
• perform simulations without exposing the manufacturer's intellectual property
Note: Be aware that the Speos Lens System Importer tool is in BETA mode for the current release.
Note: These maps (.irradiance.xmp), generated as a result of a simulation, are available only in your SPEOS
output files folder. They are not displayed in Speos tree.
The output of a Camera sensor using a Speos Lens System provides a 180°-rotated image, which corresponds to reality.
Related concepts
Distortion Curve on page 264
The Distortion curve describes an angular relationship between the incoming ray's angle and the imager ray's angle.
This relationship is described in an *.OPTDistortion file that is interpreted by Speos to render the camera behavior.
Related reference
Speos Lens System on page 266
This page merely describes the data model and file format of the Speos Lens System.
Note: This curve is also called the "Chief ray angle curve" or "Principal ray angle curve".
Tip: The information needed to fill this file is usually provided by the supplier. You only need to format it.
Note: Simulation results generated with distortion curve file V2.0 (.xmp, .png, .hdr) are not rotated. Simulation
results generated with distortion curve file V1.0 (.xmp, .png, .hdr) are rotated.
Function Principle
The camera sensor (using the Speos Lens System model as an input to define the camera behavior) works on a reverse propagation principle: rays are emitted from the sensor to reach targets (objects contained in the scene).
As rays are emitted from the sensor, the sensor itself is considered as the entry point of the Speos Lens System. Rays
are then handled by the function and are propagated in the optomechanical system.
Each sample on the sensor is identified by its coordinates:
• For a 2.0 version file, the sensor only considers polar coordinates (r, theta).
• For a 2.1 version file, the sensor can use polar coordinates (r, theta) or Cartesian coordinates (x, y).
Note: The Speos Lens System data model v2.0 works only with a ray direction defined as Spherical and a starting point defined as Cartesian.
Line 1: Header indicating filename and version
Line 2: Comment
Line 3: Ray direction coordinate system (spherical)
Line 4: Start point coordinate system (Cartesian)
Line 5: Emissive surface type (disk)
Line 6: Number of radii r samples on the sensor plane, that is, distances from the optical center (integer value)
2.1 Version Structure
The OPTDistortion file v2.1 defining the camera behavior can be divided into two parts:
• A Header containing:
º information on the file (name, version, etc.) and the type of coordinate system used to define several parameters.
º the sampling of the sensor according to polar coordinates (a radius r and an angle theta θ).
• A "body" or data block containing the data corresponding to each sensor point. Each line of data corresponds to one sensor point.
Line 1: Header indicating filename and version
Line 2: Comment
Line 3: Sensor coordinate system (Polar or Cartesian)
• Polar: equivalent to the v2.0 format definition; describes symmetrical optical systems.
• Cartesian: describes asymmetrical optical systems.
Line 8: List of radii r values (in case of a Polar sensor coordinate system) or list of x values (in case of a Cartesian sensor coordinate system). The number of values should correspond to the value defined at line 7.
Line 9: Number of theta samples on the sensor plane, in case of a Polar sensor coordinate system (this number is fixed to consider symmetrical lens systems), or number of y samples on the sensor plane, in case of a Cartesian sensor coordinate system.
Line 10: List of theta angle values (in case of a Polar sensor coordinate system) or list of y values (in case of a Cartesian sensor coordinate system). The number of values should correspond to the value defined at line 9.
Elements
Ray Direction
• With a Cartesian system, it is defined by three values (l, m, n).
• With a spherical coordinate system, it is defined by two values (θd, φd).
Starting Point
• With a Cartesian system, it is defined by three values (x, y, z).
• With a spherical coordinate system, it is defined by three values (r, θp, φp).
Emissivity
Emissivity is used to model vignetting and is defined by a floating value between 0 and 1. It models the potential
loss of energy of rays when going through the optical system.
• When the Emissivity is set to 0, the sample on the sensor is not used, which means the irradiance will be null (for example, in the case of a fisheye sensor).
• When the Emissivity is set to 1, the sample on the sensor is used.
Note: The calculated field of view corresponds to the overlap between the sensor and the emissive surface. Areas with no overlap are not taken into account in the field of view.
Note: Emissivity is no longer used since version 2022 R1 and is generally set to 1.
Focus Distance
Focus Distance is defined by a floating value:
• Real focus (>0)
• Virtual focus (<0)
Divergence (d)
Divergence is defined by a positive floating value that allows you to specify a small statistical deviation for each ray.
With Divergence, you are not targeting an exact single point in the focus plane, but a solid angle around it. This
allows you to consider the lens design quality as it is a simple way to model the Point Spread Function.
The statistical distribution follows a 2D Gaussian. The Divergence parameter represents the Full Width at Half Maximum (FWHM) of the Gaussian.
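For intuition, here is a minimal sketch (our illustration, not Speos code) converting the Divergence FWHM into the standard deviation of the Gaussian and drawing a deviation for one ray:

import math
import random
# sigma = FWHM / (2 * sqrt(2 * ln 2)), i.e. FWHM is about 2.3548 * sigma.
divergence_fwhm_deg = 0.1  # hypothetical Divergence value (degrees)
sigma_deg = divergence_fwhm_deg / (2.0 * math.sqrt(2.0 * math.log(2.0)))
# One component of the 2D Gaussian deviation applied to a ray:
deviation_deg = random.gauss(0.0, sigma_deg)
print(sigma_deg, deviation_deg)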
Description
Positions and orientations are expressed with respect to a reference coordinate system, so the trajectory is relative to this coordinate system.
For instance, the same trajectory file can be used to describe a translation movement of a car as well as the LiDAR
sensor placed on it.
Trajectory is described in a *.json file that contains each chronological sample.
Script Example
A trajectory file can be easily read or written using the dedicated scripting interfaces available in IronPython and Python.
Note: Make sure to use version 3.9 of the IronPython or Python language to write your scripts.
IronPython Example
import sys
import clr

clr.AddReferenceToFile("Optis.Core_net.dll")
clr.AddReferenceToFile("Optis.Data_net.dll")
# The OptisCore and OptisData namespaces must be imported after the
# assembly references are added.
import OptisCore
import OptisData

try:
    # First trajectory sample: position and orientation at t = 0 s.
    firstData = OptisCore.DAxisSystemData()
    firstData.Time = 0.0
    firstData.Origin.Init(0, 0, 0)
    firstData.Direction_X.Init(1, 0, 0)
    firstData.Direction_Y.Init(0, 1, 0)
    # Last trajectory sample: the object has translated along Z at t = 7 s.
    lastData = OptisCore.DAxisSystemData()
    lastData.Time = 7.0
    lastData.Origin.Init(0, 0, 3000)
    lastData.Direction_X.Init(1, 0, 0)
    lastData.Direction_Y.Init(0, 1, 0)
    # Build a trajectory containing the two samples.
    dmTrajectory = OptisCore.DAxisSystemTrajectory()
    dmTrajectory.Trajectory.Resize(2)
    dmTrajectory.Trajectory.Set(0, firstData)
    dmTrajectory.Trajectory.Set(1, lastData)
    strPathTrajectoryFile = OptisCore.String.From(R"C:\trajectory.json")
    pathTrajectoryFile = OptisCore.Path(strPathTrajectoryFile)
    # Write the trajectory to the *.json file, then close the writer.
    cAxisSystemTrajectoryWriter = OptisData.CAxisSystemTrajectoryWriter()
    cAxisSystemTrajectoryWriter.Open(pathTrajectoryFile)
    cAxisSystemTrajectoryWriter.Write(dmTrajectory)
    cAxisSystemTrajectoryWriter.Close()
    # Read the trajectory back from the *.json file.
    cAxisSystemTrajectoryReader = OptisData.CAxisSystemTrajectoryReader()
    cAxisSystemTrajectoryReader.Open(pathTrajectoryFile)
    dmTrajectory = cAxisSystemTrajectoryReader.Read()
    cAxisSystemTrajectoryReader.Close()
    print "Done"
except:
    print "Exception raised"
Python Example
import sys
# Assumption: the OptisCore and OptisData Python modules are available on
# sys.path in the Speos scripting environment.
import OptisCore
import OptisData

try:
    # First trajectory sample: position and orientation at t = 0 s.
    firstData = OptisCore.DAxisSystemData()
    firstData.Time = 0.0
    firstData.Origin.Init(0, 0, 0)
    firstData.Direction_X.Init(1, 0, 0)
    firstData.Direction_Y.Init(0, 1, 0)
    # Last trajectory sample: the object has translated along Z at t = 7 s.
    lastData = OptisCore.DAxisSystemData()
    lastData.Time = 7.0
    lastData.Origin.Init(0, 0, 3000)
    lastData.Direction_X.Init(1, 0, 0)
    lastData.Direction_Y.Init(0, 1, 0)
    # Build a trajectory containing the two samples.
    dmTrajectory = OptisCore.DAxisSystemTrajectory()
    dmTrajectory.Trajectory.Resize(2)
    dmTrajectory.Trajectory.Set(0, firstData)
    dmTrajectory.Trajectory.Set(1, lastData)
    strPathTrajectoryFile = OptisCore.String(R"C:\trajectory.json")
    pathTrajectoryFile = OptisCore.Path(strPathTrajectoryFile)
    # Write the trajectory to the *.json file, then close the writer.
    cAxisSystemTrajectoryWriter = OptisData.CAxisSystemTrajectoryWriter()
    cAxisSystemTrajectoryWriter.Open(pathTrajectoryFile)
    cAxisSystemTrajectoryWriter.Write(dmTrajectory)
    cAxisSystemTrajectoryWriter.Close()
    print("Done")
except:
    print("Exception raised")
The following abridged excerpt shows a SpaceClaim script that samples a trajectory curve selected in the 3D view and writes the corresponding *.json trajectory file. Reconstructed portions are labeled as assumptions in the comments.
import sys
import clr
import os
from os import path

clr.AddReferenceToFile("Optis.Core_net.dll")
clr.AddReferenceToFile("Optis.Data_net.dll")
import OptisCore
import OptisData

# Helper converting a SpaceClaim frame into a trajectory sample.
# The function header and the creation of dAxisSystemData are reconstructed
# (assumptions); the original listing is abridged.
def FrameToAxisSystemData(iFrame):
    dAxisSystemData = OptisCore.DAxisSystemData()
    tmpVector = OptisCore.Vector3_double()
    tmpVector.Init(iFrame.DirX.X, iFrame.DirX.Y, iFrame.DirX.Z)
    tmpVector.Normalize()
    dAxisSystemData.Direction_X.Init(tmpVector.Get(0), tmpVector.Get(1), tmpVector.Get(2))
    tmpVector.Init(iFrame.DirY.X, iFrame.DirY.Y, iFrame.DirY.Z)
    tmpVector.Normalize()
    dAxisSystemData.Direction_Y.Init(tmpVector.Get(0), tmpVector.Get(1), tmpVector.Get(2))
    return dAxisSystemData

# Working directory
workingDirectory = path.dirname(GetRootPart().Document.Path.ToString())

# SpaceClaim InputHelper dialogs
ihTrajectoryName = InputHelper.CreateTextBox("Trajectory.1", 'Trajectory name:', 'Enter the name of the trajectory')
ihFrameFrequency = InputHelper.CreateTextBox(30, 'Timeline frame rate:', 'Set timeline frame rate (s-1)', ValueType.PositiveInteger)
ihReverseDirection = InputHelper.CreateCheckBox(False, "Reverse direction", "Reverse direction on trajectory")
ihObjectSpeed = InputHelper.CreateTextBox(50, 'Object speed:', 'Set the moving object speed (km/h)', ValueType.PositiveDouble)
frameFrequency = float(ihFrameFrequency.Value)

# Trajectory file
trajectoryName = str(ihTrajectoryName.Value)
trajectoryAcquisitionFile = workingDirectory + "\\" + trajectoryName

# Trajectory curve (ihTrajectory, the curve selection dialog, is created in a
# part of the original script that is not reproduced in this excerpt)
trajCurveSelection = ihTrajectory.Value
trajectoryCurve = trajCurveSelection.Items[0]

# Acquisition of positions
def GetPositionOrientation(i_CoordSys, i_ReferenceCoordSys):
    # change base of current position
    newMatrix = Matrix.CreateMapping(i_ReferenceCoordSys.Frame)
    # (remainder of this function is not reproduced in this excerpt)

# Sampling along the curve. The variables i_trajectoryCurve,
# i_trajectoryCoordSys, i_isReversedTrajectory, selectedCoordSys and options,
# as well as the loop advancing currentPosition and timeStamp, come from parts
# of the original script that are not reproduced here.
pathLength = i_trajectoryCurve.Shape.Length
selectedCurve = Selection.Create(i_trajectoryCurve)
currentPosition = 0.0
timeStamp = 0.0
positionTable = []
timeTable = []
if currentPosition == 0:
    options.Copy = True
else:
    options.Copy = False
if i_isReversedTrajectory:
    result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, 1 - currentPosition, options)
    if currentPosition == 0:
        newselectedCoordSys = result.GetCreated[ICoordinateSystem]()[0]
        selectedCoordSys = Selection.Create(newselectedCoordSys)
        if newselectedCoordSys.Frame.Origin != i_trajectoryCoordSys.Frame.Origin:
            options.Copy = False
            result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, currentPosition, options)
else:
    result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, currentPosition, options)
    if currentPosition == 0:
        newselectedCoordSys = result.GetCreated[ICoordinateSystem]()[0]
        selectedCoordSys = Selection.Create(newselectedCoordSys)
        if newselectedCoordSys.Frame.Origin != i_trajectoryCoordSys.Frame.Origin:
            options.Copy = False
            result = Move.AlongTrajectory(selectedCoordSys, selectedCurve, currentPosition, options)
if result:
    movedFrame = GetPositionOrientation(newselectedCoordSys, i_trajectoryCoordSys)
    positionTable.append(movedFrame)
    timeTable.append(timeStamp)
result = Delete.Execute(selectedCoordSys)

# Conversion of the acquired frames into trajectory samples. The call to
# FrameToAxisSystemData and the Time assignment are reconstructed (assumptions).
dAxisSystemData_Table = []
for time in timeTable:
    timeIndex = timeTable.index(time)
    fFrame = positionTable[timeIndex]
    dAxisSystemData = FrameToAxisSystemData(fFrame)
    dAxisSystemData.Time = time
    dAxisSystemData_Table.append(dAxisSystemData)

# Write the trajectory *.json file next to the document.
if len(dAxisSystemData_Table) > 0:
    dmTrajectory = OptisCore.DAxisSystemTrajectory()
    dmTrajectory.Trajectory.Resize(len(dAxisSystemData_Table))
    for dAxisSystemData in dAxisSystemData_Table:
        dmTrajectory.Trajectory.Set(dAxisSystemData_Table.index(dAxisSystemData), dAxisSystemData)
    pathTrajectoryFile = workingDirectory + "\\" + str(ihTrajectoryName.Value) + ".json"
    strPathTrajectoryFile = OptisCore.String.From(pathTrajectoryFile)
    pathTrajectoryFile = OptisCore.Path(strPathTrajectoryFile)
    cAxisSystemTrajectoryWriter = OptisData.CAxisSystemTrajectoryWriter()
    cAxisSystemTrajectoryWriter.Open(pathTrajectoryFile)
    cAxisSystemTrajectoryWriter.Write(dmTrajectory)
    cAxisSystemTrajectoryWriter.Close()
Acquisition Parameters
Integration corresponds to the time needed to get the data acquired by one row of pixels.
Lag time corresponds to the time difference between two consecutive rows of pixels starting their integration.
Camera Effect
The acquisition parameters influence the effects captured by the Camera sensor during the inverse simulation.
When the Lag time is null, the effect produced is motion blur. When a non-zero Lag time is defined, the effect produced is the rolling shutter (a worked timing example follows the parameter lists below).
Reference
• Inverse Simulation Timeline: False
• Integration: 0ms
• Lag time: 0ns
Motion Blur
• Inverse Simulation Timeline: True
• Integration: 10ms
• Lag time: 0ns
Rolling Shutter
• Inverse Simulation Timeline: True
• Integration: 1ms
• Lag time: 92592ns
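As a rough worked example (our arithmetic, with a hypothetical 1080-row imager and the Rolling Shutter values above), the sketch below computes when each row integrates:

# Rolling-shutter timing: each row starts integrating one Lag time after
# the previous one, then integrates during the Integration time.
integration_s = 1e-3  # Integration: 1 ms
lag_s = 92592e-9      # Lag time: 92592 ns
rows = 1080           # hypothetical vertical resolution
for row in (0, 1, rows - 1):
    start = row * lag_s
    end = start + integration_s
    print("row %4d: %.6f s to %.6f s" % (row, start, end))
# The last row starts about rows * lag (roughly 0.1 s) after the first one,
# which is what produces the rolling-shutter effect on moving objects.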
Note: The geometric camera sensor can only be used in inverse simulation without sources or other sensors.
In Geometric mode, all parameters relative to spectrum or spectral data are disabled.
3. Define the Axis System of the camera sensor in the 3D view by clicking to select an origin point, a line for the X direction and a line for the Y direction, or click and select a coordinate system to autofill the Axis System.
Note: Depending on which camera model is described in the .OPTDistortion input file, the origin of the
sensor is different.
• If the .OPTDistortion file is based on the Basic Distortion Curve model (v1 version), the origin corresponds
to the Entrance Pupil Point (EPP) of the camera.
• If the .OPTDistortion file is based on the Speos Lens System model (v2 version), the origin corresponds
to the center of the sensor.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in the 3D view. However, the axis shown in the Definition panel may not correspond to the axis in the 3D view; when in doubt, refer to the axis in the 3D view.
If you need to adjust the axes orientation, use Reverse direction on one or both axes.
Note: If the .OPTDistortion file is based on the Speos Lens System model (v2 version), the Focal Length
parameter is not taken into account, the field of view is not computed and horizontal/vertical fields of
view values are set to 0.
Note: Horizontal/vertical fields of view set to 0 are not supported and may generate incorrect results. If the values are set to 0, refresh the Camera sensor feature.
a) In Focal length, adjust the distance between the center of the optical system and the sensor (in mm).
Note: Focal length does not affect the results when the *.OPTDistortion file is based on the Speos
Lens System model (v2 version).
b) In Imager distance, you can adjust the distance between the aperture and the sensor.
Note: Imager distance does not affect the results; it is only used for visualization purposes and does not represent the real sensor.
c) In F-number, type the size of the aperture of the camera's front lens.
Note: F-number corresponds to the aperture size of the front lens for the OPTDistortion v1 and the
radius of the pupil for the OPTDistortion v2. The smaller the number, the larger the aperture.
The irradiance is calculated from the radiance by using an acceptance cone for the light (the cone
base is the pupil).
The difference between v1 and v2 is that v1 considers a constant pupil (position and size), whereas for v2 the pupil depends on the sensor position. The v2 photometry is therefore more precise than v1.
More details about the F-number can be found here.
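As a reminder of the usual optics convention (an assumption on our part; the exact Speos interpretation is described in the note above), the F-number relates focal length and aperture diameter:

# Usual definition: N = f / D (assumed convention for this illustration).
focal_length_mm = 8.0  # hypothetical Focal length
f_number = 2.0         # hypothetical F-number
aperture_diameter_mm = focal_length_mm / f_number  # 4.0 mm
print(aperture_diameter_mm)
# The smaller the F-number, the larger the aperture, as stated above.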
a) Define the number of horizontal and vertical pixels corresponding to the camera resolution.
b) Define the sensor's height and width.
6. If you want to adjust the preview of the sensor, click Optional or advanced settings:
a) Activate or deactivate the preview of certain parts of the system by setting them to True/False.
b) Adjust the Visualization radius.
The Camera Sensor is created and visible in Speos tree and in the 3D view.
Related concepts
Camera Sensor Parameters on page 259
This section helps you better understand the camera sensor definition as it describes the key settings of the sensor.
Tip: You can change the source's power or spectrum with the Virtual Lighting Controller in the Virtual
Photometric Lab or in the Virtual Human Vision Lab.
4. Define the Axis System of the camera sensor in the 3D view by clicking to select an origin point, a line for the X direction and a line for the Y direction, or click and select a coordinate system to autofill the Axis System.
Note: Depending on which camera model is described in the .OPTDistortion input file, the origin of the
sensor is different.
• If the .OPTDistortion file is based on the Basic Distortion Curve model (v1 version), the origin corresponds
to the Entrance Pupil Point (EPP) of the camera.
• If the .OPTDistortion file is based on the Speos Lens System model (v2 version), the origin corresponds
to the center of the sensor.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in the 3D view. However, the axis shown in the Definition panel may not correspond to the axis in the 3D view; when in doubt, refer to the axis in the 3D view.
If you need to adjust the axes orientation, use Reverse direction on one or both axes.
5. If you want to create a dynamic Camera sensor, in the Trajectory file field, click Browse to load a trajectory file
(.json).
The trajectory describes the different positions and orientations of features in time. The Trajectory file is used
to simulate dynamic objects.
Note: For more information on trajectory files, refer to Trajectory File on page 272.
When a trajectory file is assigned to a Camera sensor and the feature is edited, the trajectory is displayed in the
3D view.
6. If you want to create a dynamic Camera sensor, in the Acquisition section, define:
• the Integration time needed to get the data acquired by one row of pixels.
• the Lag time, if you want to create a rolling shutter effect in the result.
Note: For more information on the acquisition parameters, refer to Acquisition Parameters.
Note: If the .OPTDistortion file is based on the Speos Lens System model (v2 version), the Focal Length
parameter is not taken into account, the field of view is not computed and horizontal/vertical fields of
view values are set to 0.
Note: Horizontal/vertical fields of view set to 0 are not supported and may generate incorrect results. If the values are set to 0, refresh the Camera sensor feature.
a) In Focal length, adjust the distance between the center of the optical system and the sensor (in mm).
Note: Focal length does not affect the results when the *.OPTDistortion file is based on the Speos
Lens System model (v2 version).
b) In Imager distance, you can adjust the distance between the aperture and the sensor.
Note: Imager distance does not affect the results; it is only used for visualization purposes and does not represent the real sensor.
c) In F-number, type the size of the aperture of the camera's front lens.
Note: F-number corresponds to the aperture size of the front lens for the OPTDistortion v1 and the
radius of the pupil for the OPTDistortion v2. The smaller the number, the larger the aperture.
The irradiance is calculated from the radiance by using an acceptance cone for the light (the cone
base is the pupil).
The difference between v1 and v2 is that v1 considers a constant pupil (position and size), whereas for v2 the pupil depends on the sensor position. The v2 photometry is therefore more precise than v1.
More details about the F-number can be found here.
d) In Transmittance, click Browse to load a .spectrum file. The spectrum file defines the amount of light that passes through the lens to reach the sensor.
e) In Distortion, click Browse to load an .OPTDistortion file.
The *.OPTDistortion file is a .txt file that contains information on the camera and is used to introduce/replicate
the optical distortion of the camera lens.
A preview of the camera sensor system appears in the 3D view.
8. Define the sensor's size and resolution:
a) Define the number of horizontal and vertical pixels corresponding to the camera resolution.
b) Define the sensor's height and width.
9. From the Color mode drop-down list, define the sensor's color management:
Note: With the Color mode, the simulation results are available in color according to the White Balance mode.
11. From the White balance mode drop-down list, choose which correction to apply to get true whites:
• Select None to apply no correction and perform a basic conversion from spectral results to an RGB picture.
• Select Grey world to apply a coefficient on each channel to get the same average for the three channels (see the sketch after this list).
• Select User white balance to use the grey world method and manually type the coefficients to apply.
In the Sensor white balance section, adjust the coefficients for Red, Green and Blue colors.
• Select Display primaries to display the results and the colors as they are perceived by the camera.
In the Sensor white balance section, click Browse to load a .spectrum file for each color.
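To illustrate the Grey world method from the list above, here is a generic sketch of the technique itself (an illustration of the method, not Speos code):

# Generic grey-world white balance: scale each channel so that the three
# channels end up with the same average.
def grey_world(pixels):
    # pixels: list of (r, g, b) tuples
    n = float(len(pixels))
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]  # per-channel average
    grey = sum(avg) / 3.0                                    # common target average
    coeffs = [grey / a for a in avg]                         # one coefficient per channel
    return [tuple(p[c] * coeffs[c] for c in range(3)) for p in pixels]

balanced = grey_world([(200, 120, 90), (180, 110, 80)])
print(balanced)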
12. In PNG bits, select the number of bits used to encode a pixel.
• Edit the Start (minimum wavelength) and End (maximum wavelength) values to determine the wavelength
range to be considered by the sensor.
• If needed, in Sampling, adjust the number of wavelengths to be computed during simulation.
The Resolution is automatically computed according to the sampling and wavelength start and end values.
15. If you want to adjust the preview of the sensor, click Optional or advanced settings:
a) Activate or deactivate the preview of certain parts of the system by setting them to True/False.
b) Adjust the Visualization radius.
The Camera Sensor is created and visible in Speos tree and in the 3D view.
Related concepts
Camera Sensor Parameters on page 259
This section helps you better understand the camera sensor definition as it describes the key settings of the sensor.
Description
The Stray Light Analysis allows you to visualize the contribution of the photons, separating them by sequence in
the simulation results.
A sequence is the path taken by the rays, calculated either according to the faces (F) or the geometries (G) the rays
hit.
Example of Sequences
Note: Defining the sequences per faces is useful when you have only one geometry. Defining the sequences
per geometries is useful when you have more than one geometry.
Note: In the case of an optical system with one or more lenses, we recommend defining the sequences per geometries.
The Stray Light Analysis is compatible with the Light Expert to visualize the interactive ray tracing of each sequence.
Note: The Sequence Detection Tool is not compatible with Isolated Simulations.
List of Interactions
The list of interactions shows every interaction of the rays with the elements of the optical system.
• When a result has been generated using the "faces" sequence layer parameter, the list of interactions provides the different faces that rays have interacted with.
Example: Optical Surface Rectangular.1:3176.Face.1 for the first face of the "Optical Surface Rectangular.1:3176" body.
A face name can be listed several times in the list of interactions, for instance when the face is considered for both transmission and reflection.
Note: Face names defined in Speos are not retrieved by the Sequence Detection tool in the case of sequences by faces. The Sequence Detection tool only knows the name of the body to which the faces belong. Each face of a body is identified by a unique number (integer): .Face.1 for the first face of the body found in the list, .Face.2 for the second one, etc.
• When a result has been generated using the "geometries" layer parameter, the list of interactions provides the different bodies that rays have interacted with.
Example: Optical Surface Rectangular.1:3174
• When an interaction has no face name, it corresponds to an interaction with a sensor.
List of Sequences
The list of sequences provides the different paths taken by the rays, calculated either according to the faces (F) or
the geometries (G) the rays hit.
For more information on the sequence, see Stray Light Analysis Overview.
By default, the sequences in the List of sequences are sorted in descending order of Energy(%) values.
The List of sequences can be sorted in ascending or descending order according to the content of the following columns (No., Length, No. hits, Energy(%)) by clicking the column header.
As a result, sequences that include geometry 13 at least 3 times will appear in the list of sequences.
Note: Whether Light Expert is activated or not during the simulation definition, it is automatically used
as a background process as soon as data are separated by sequence. The advantage of activating Light
Expert manually is that it allows you to activate the option only for the sensors you want to analyze and
to determine the number of rays you want to embed in the results.
• To display the contribution of a sequence in the 3D view: use the Light Expert and select a sequence in the
Layer list.
Note: When the number of sequences found and displayed in the Layer list is lower than the maximum
number of sequences asked, the All other sequences layer is empty.
• To display the rays in the 3D view, open the Measure tool and specify measure areas on the XMP map.
• To combine sequences in the XMP map, use the Virtual Lighting controller.
• In Tools, select Sequence Detection to see the interactions of each sequence with the elements.
Note: The Sequence Detection Tool is not compatible with faceted geometries as the multiple faces of
the faceted geometries are not detected.
Note: When elements (faces or geometries) are hidden behind other elements in the 3D view, when
using Sequence Detection, they can be highlighted only when the rendering mode manages transparency.
Click the face in the List of interactions to highlight it in the 3D view.
Make sure to activate the component (or sub-component) in which the simulation is located if you want
to highlight faces or geometries with the Sequence Detection tool.
The following section covers several optical components, such as 3D Texture and Speos Light Boxes.
Note: Ansys software can only read and use data from a Speos Light Box. Nothing can be modified from
a Speos Light Box.
Geometries
Compatible geometries along with associated optical and meshing properties.
All geometries integrated into the Speos Light Box are stored as mesh (with meshing properties related to the Speos properties).
CAUTION: You cannot add faceted geometries from the 3D view into a Speos Light Box. Add them from the
Structure tree.
Sources
In the Sources list, from the specification tree or 3D view, you can click the sources to add to the exported component.
You can add the following sources:
• Source group
• Surface source
• Ray file source
• Display source
• Light Field source
• Sources coming from imported Speos Light Boxes.
Note: Speos Light Box Export does not support Speos Patterns.
Note: The Length of rays of a source is not taken into account in a Speos Light Box. After the Speos Light
Box import, the length will be different from the length set in the original source.
Files
A Speos Light Box can include the following file types:
• *.scattering (advanced scattering files)
• *.simplescattering (simple scattering files)
• *.brdf (complete scattering files)
• *.unpolished (unpolished files)
• *.BSDF180 (BSDF180 files)
• *.anisotropicbsdf (anisotropic BSDF files)
• *.material (material files)
• *.ies (IES files)
• *.ldt (Eulumdat files)
• *.spectrum (Spectrum file)
Note: All other file types not included in the list above are not encapsulated in the Speos Light Boxes. However, they are physically located next to the Speos Light Box file as dependency files.
3. Set the Axis System by clicking one point for the origin and two lines for the X and Y directions, or click and select a coordinate system to autofill the Axis System.
Setting an axis system is optional but ensures that the exported component keeps the right orientation and origin point. If the axis system is left empty, the reference is the origin of the main assembly.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in the 3D view. However, the axis shown in the Definition panel may not correspond to the axis in the 3D view; when in doubt, refer to the axis in the 3D view.
4. In the 3D view, click to select the source(s) you want to include in the export.
Note: You cannot select a source included in a Speos Light Box Import.
5. In the 3D view, click to select the geometries you want to include in the export.
Make sure a material is applied on all surface bodies that you want to include in a Speos Light Box Export.
Note: You cannot select a geometry included in a Speos Light Box Import.
Your selection appears in the Geometries list. Optical Properties (VOP, SOP, FOP) related to the selected
geometries are included in the export.
Tip: You can select elements from the 3D view or directly from the tree. To deselect an element, click it again.
6. To edit the meshing properties of the exported geometries, right-click the light box from the Simulation panel
and click Options.
7. Set the Fast Transmission Gathering to True if you want to neglect the light transmission with transparent
surfaces.
Note: FTG does not apply to a 3D Texture, Polarization Plate and Speos Light Box Import.
8. If you want to encrypt the light box, set Password activated to True.
9. To define the password:
Warning: In versions prior to 2023 R1, passwords were not hidden. As soon as you open a project containing passwords, they are hidden and you cannot retrieve them. Make sure to save your passwords in a safe place before opening your project in 2023 R1 or a subsequent version.
The Speos Light Box is now created. A .SPEOSLightBox file is exported in a subfolder located in the SPEOS output files directory. The file dependencies are included in the .SPEOSLightBox file.
Tip: To include modifications done after the export, right-click the Speos Light Box export and click Compute.
Related concepts
Speos Light Box Overview on page 293
The Speos Light box allows you to export and import geometries along with the associated optical data (sources,
Optical Properties, meshing properties).
Related tasks
Importing a Speos Light Box on page 303
This page shows how to import .SPEOSLightBox and .speos files into the CAD platform. Importing a light box enables you to retrieve exported geometries, sources and optical properties.
1. From the Light Simulation tab, click Light Box > Import Speos Light Box .
2. Set the Axis System by clicking one point for the origin and two lines for the X and Y directions, or click and select a coordinate system to autofill the Axis System.
The axis system is needed to correctly place the Speos Light Box into the scene according to the axis system of
the assembly in which you import it.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in the 3D view. However, the axis shown in the Definition panel may not correspond to the axis in the 3D view; when in doubt, refer to the axis in the 3D view.
3. If you want to create a dynamic group of geometries, in the Trajectory file field, click Browse to load a trajectory
file (.json).
Note: For more information on trajectory files, refer to Trajectory File on page 294.
When a trajectory file is assigned to a Speos Light Box Import and the feature is edited, the trajectory is displayed
in the 3D view.
4. Click in the File field and click Browse to load a Speos component (*.speos) or Speos Light Box (*.SPEOSLightBox)
file.
5. If the exported Light Box is encrypted, a Password field appears.
Warning: In versions prior to 2023 R1, passwords were not hidden. As soon as you open a project containing passwords, they are hidden and you cannot retrieve them. Make sure to save your passwords in a safe place before opening your project in 2023 R1 or a subsequent version.
Note: Each time you change the Display mode, you need to compute the Speos Light Box.
Note: Once the import is completed, the components inherit the information defined during the export.
If the axis system set for the Speos Light Box Import is edited, you must regenerate the import file manually
to get the change displayed in the 3D view.
The Speos Light Box is imported in the current assembly. Sources and geometries are displayed in the 3D view
according to the parameters set during the export.
Related concepts
Speos Light Box Overview on page 293
The Speos Light box allows you to export and import geometries along with the associated optical data (sources,
Optical Properties, meshing properties).
Related tasks
Exporting a Speos Light Box on page 301
This page shows how to export a Speos Light Box. Exporting a light box enables you to safely exchange data.
9.2. 3D Texture
3D Texture allows you to design and simulate millions of micro-patterns bypassing CAD system limitations.
3D Texture
Duplications of a geometrical item (Patterns) are projected on your geometrical base (Support) according to a
specific distribution (Mapping) to simulate micro texture.
These patterns have volume and surface optical properties, and can be added, removed etc. (Boolean Operation).
As Speos cannot model the patterns geometrically, the benefit of the 3D Texture tool is to model them for an optical simulation without having to create millions of small geometries in the CAD model.
Main Capabilities
• Patterns can be applied on any body.
• 3D Texture can be applied on any CAD shapes (flat or freeform, rectangular or not).
Note: You cannot apply a 3D texture on an element having a surface that is tangent to another element.
Important: 3D texture does not support patterns composed of a material using a custom-made *.sop plug-in (surface optical properties).
In Speos:
• Create 3D Texture using a custom mapping file (a .txt file containing the coordinates of your patterns).
• Control every parameter of your pattern (scale factor, distribution, offset, shape etc.).
Related concepts
Boolean Operation on page 306
The following page lists all boolean operations available with the 3D Texture tool. The boolean operation is executed
between the support and the pattern.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
Note:
• You cannot set tangent surfaces between patterns and a support. A gap is needed and must be larger than or equal to ten times the Geometrical Distance Tolerance.
• You cannot set tangent surfaces between patterns. A gap is needed and must be larger than or equal to ten times the Geometrical Distance Tolerance.
• Patterns cannot intersect.
Operations
The available boolean operations are Insert, Remove, Add on different material, Add on same material, and Add In. For each operation, the original table illustrates the actual behavior of the pattern and a preview of the ray distribution used to visualize the pattern behavior.
Note: Set the Geometrical Distance Tolerance to G / 100 in the assembly preferences (for example, if G = 1e-5, then Geometrical Distance Tolerance = 1e-7). This gives fewer errors in the propagation of the photons. Also note that the texture width cannot be larger than the material width.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
Mapping Parameters
The Mapping file describes the number of patterns used in a 3D texture, their coordinates, orientation and scale
values with respect to the axis system.
The *.OPT3DMapping file is built according to the following structure:
• First line: number of patterns to be built in the texture.
• Following lines: x y z ix iy iz jx jy jz kx ky kz
º x y z: coordinates of the pattern's origin according to the axis system.
º ix iy iz: orientation of the pattern with respect to the X direction of the axis system.
º jx jy jz: orientation of the pattern with respect to the Y direction of the axis system.
º kx ky kz: pattern scale values for the x, y and z directions.
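For illustration, here is a minimal Python sketch (hypothetical values and file name) that writes a small *.OPT3DMapping file following this structure:

# Hypothetical example: generate a 3-pattern *.OPT3DMapping file.
# Each line: x y z  ix iy iz  jx jy jz  kx ky kz
patterns = [
    (0.0, 0.0, 0.0,  1, 0, 0,  0, 1, 0,  1.0, 1.0, 1.0),
    (0.5, 0.0, 0.0,  1, 0, 0,  0, 1, 0,  1.0, 1.0, 2.0),
    (1.0, 0.0, 0.0,  1, 0, 0,  0, 1, 0,  0.5, 0.5, 1.0),
]
with open("mapping.OPT3DMapping", "w") as f:
    f.write("%d\n" % len(patterns))  # first line: number of patterns
    for p in patterns:
        f.write(" ".join("%g" % v for v in p) + "\n")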
Related concepts
Scale Factors on page 310
A pattern can have a global scale factor and 3 independent pattern scale factors for X, Y and Z. A scale ratio is used to model small textures (1/100 mm as an example).
Note: The pattern scale factors are cumulative: final pattern scale factor = global scale factor * pattern scale factor. For example, a global scale factor of 0.01 combined with a pattern scale factor of 2 gives a final scale factor of 0.02.
1 Scale Factor
3 Scale Factors
Each pattern can have three scale factors. These factors are used to set the size of each pattern independently on
the 3 axes (X, Y, Z).
Figure 48. Pattern scale factors (x1, y1, z1), (x2, y1, z1), (x1, y1, z2)
Variable scale
Related information
Mapping File on page 308
The Mapping file basically defines the position, orientation and scale of each element in the texture over the support. The mapping file is a text file with an *.OPT3DMapping extension.
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
To create a 3D Texture:
A pattern must already be created.
The pattern is a geometrical item (a part) that can be duplicated to generate a mapping. A pattern can have its own
optical properties and scaling factors.
Tip: If you import a pattern from another CAD software, save the geometry as an .scdoc file to be able to use it in a Speos 3D texture.
3. Set the Axis system by clicking a point for the origin point and two lines for X and Y directions or click and
select a coordinate system to autofill the Axis System.
The Axis System determines where the mapping begins and in which direction it propagates. It is used as a reference for the position, orientation and scaling of each pattern.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in the 3D view. However, the axis shown in the Definition panel may not correspond to the axis in the 3D view; when in doubt, refer to the axis in the 3D view.
5. In Operation, from the Type drop-down list, select the boolean operation to be executed on the patterns.
6. In Pattern, click in the file field and click Browse to load a pattern part.
Note: The pattern file is not compatible with the assembly external component. That means you cannot
retrieve a file that references another file containing the pattern geometry.
Important: 3D texture does not support patterns composed of a material using a custom-made *.sop plug-in (surface optical properties).
Related concepts
Scale Factors on page 310
A pattern can have a global scale factor and 3 independent pattern scale factors for X, Y and Z. A scale ratio is used to model small textures (1/100 mm as an example).
Related information
3D Texture Overview on page 305
3D Texture allows you to simulate micro texture by modeling and projecting millions of geometrical items on a
geometry.
Mapping File on page 308
The Mapping file basically defines the position, orientation and scale of each element in the texture over the support. The mapping file is a text file with an *.OPT3DMapping extension.
9.2.6. Mapping
The type of mapping you use determines the patterns' distribution on the support.
Different mapping types are available. Each type describes a way to create a virtual grid that is going to be projected
on a part’s surface.
When creating a 3D texture you can choose to use:
• Automatic mappings: Rectangular, Circular, Hexagonal, Variable Pitches. These mappings are automatically
calculated and you only have to set a few parameters to obtain the desired distribution.
• Mapping Files: .txt files containing all the information needed to generate the mapping. They allow you to generate a completely customized mapping and/or save a mapping you designed with the automatic mappings to reuse it in another CAD system where Ansys software is integrated.
1. Mapping: a virtual grid is created using standard parameters (distance between patterns, mapping length, etc.).
2. Filtering: a quilt or a face is used to define the grid limitations.
3. All the patterns included in the limited grid are projected along the Z direction on the first encountered surface
of the selected part.
4. Shift: an offset (shift along Z) can be applied on the projected patterns.
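A schematic sketch of this pipeline follows (purely illustrative: the function names, the flat support and the values are assumptions):

# Schematic illustration of the mapping pipeline described above.
def build_mapping(pitch_x, pitch_y, nx, ny, limit, project, shift_z):
    patterns = []
    for i in range(nx):
        for j in range(ny):
            x, y = i * pitch_x, j * pitch_y      # 1. virtual grid
            if not limit(x, y):                  # 2. filtering by the limiting face
                continue
            z = project(x, y)                    # 3. projection along Z on the support
            patterns.append((x, y, z + shift_z)) # 4. shift (offset along Z)
    return patterns

# Example on a flat support at z = 0, keeping points inside a 10 x 10 square:
pts = build_mapping(1.0, 1.0, 20, 20,
                    limit=lambda x, y: x < 10 and y < 10,
                    project=lambda x, y: 0.0,
                    shift_z=0.2)
print(len(pts))  # 100 patterns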
Limiting Surface
The Limiting Surface allows you to apply a surface on the geometry on which you apply the 3D texture to limit
the 3D texture to that specific surface.
Important: The Limiting Surface must be composed of only one face. If the Limiting Surface is composed of multiple faces, the 3D Texture is applied to only one face.
Figure: a Limiting Surface composed of two faces; the 3D Texture is applied to only one face.
Offset Surface
The Offset Surface is a shift surface that allows you to apply an offset to the projected patterns along the Z
direction according to the origin of the 3D texture's axis system.
The Shift scale helps you define the offset.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
3D Texture Overview on page 305
3D Texture allows you to simulate micro texture by modeling and projecting millions of geometrical items on a
geometry.
• X angular offset
• Y angular offset
b) In the 3D view, click and select the imported/created object to limit the 3D texture distribution to that
specific face.
4. If you want to define an offset along Z direction on the projected patterns:
a) In the 3D view, click and select the support on which to apply the offset.
b) Type a value in Offset surface ratio to determine the offset of your patterns.
5. In the Pattern section, define the patterns' orientation.
6. If you want to define three scale factors to set the size of each pattern independently on the 3 axes (X, Y, Z):
a) Click to select an X scale surface, a Y scale surface, and a Z scale surface.
The scaling factor to apply to a specific mapping point is defined by the height of the point of this surface, at
the corresponding coordinates (X,Y).
Note: You can define a specific scale on all three independent axes (X,Y,Z axes) or on one axis only
(only on X for example).
The scale value is applied to all the patterns in a given direction. The pattern scale factors are cumulative with the global scale factor (global scale factor * pattern scale factor = final pattern scale factor; for example, a global scale factor of 0.01 and a pattern scale factor of 2 give a final scale factor of 0.02).
b) Adjust the X size, Y size and Z size of the preview box to see it appear in the 3D view.
c) Using the 3D view manipulators, drag the box onto the 3D texture support to compute the patterns.
The rectangular mapping is created and an OPT3DMapping file is automatically generated and stored in the Speos input files folder.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
3D Texture Overview on page 305
3D Texture allows you to simulate micro texture by modeling and projecting millions of geometrical items on a
geometry.
• Mapping radius
• X angular offset
b) In the 3D view, click and select the imported/created object to limit the 3D texture distribution to that
specific face.
4. If you want to define an offset along Z direction on the projected patterns:
a) In the 3D view, click and select the support on which to apply the offset.
b) Type a value in Offset surface ratio to determine the offset of your patterns.
5. In the Pattern section, define the patterns' orientation.
6. If you want to define three scale factors to set the size of each pattern independently on the 3 axes (X, Y, Z):
a) Click to select an X scale surface, a Y scale surface, and a Z scale surface.
The scaling factor to apply to a specific mapping point is defined by the height of the point of this surface, at
the corresponding coordinates (X,Y).
Note: You can define a specific scale on all three independent axes (X,Y,Z axes) or on one axis only
(only on X for example).
The scale value is applied to all the patterns in a given direction. The pattern scale factors are cumulative with the global scale factor (global scale factor * pattern scale factor = final pattern scale factor).
b) Adjust the X size, Y size and Z size of the preview box to see it appear in the 3D view.
c) Using the 3D view manipulators, drag the box onto the 3D texture support to compute the patterns.
The circular mapping is created and an OPT3DMapping file is automatically generated and stored in the Speos input files folder.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
3D Texture Overview on page 305
3D Texture allows you to simulate micro texture by modeling and projecting millions of geometrical items on a
geometry.
• X angular offset
• Y angular offset
b) In the 3D view, click and select the imported/created object to limit the 3D texture distribution to that
specific face.
4. If you want to define an offset along Z direction on the projected patterns:
a) In the 3D view, click and select the support on which to apply the offset.
b) Type a value in Offset surface ratio to determine the offset of your patterns.
5. In the Pattern section, define the patterns' orientation.
6. If you want to define three scale factors to set the size of each pattern independently on the 3 axes (X, Y, Z):
a) Click to select an X scale surface, a Y scale surface, and a Z scale surface.
The scaling factor to apply to a specific mapping point is defined by the height of the point of this surface, at
the corresponding coordinates (X,Y).
Note: You can define a specific scale on all three independent axes (X,Y,Z axes) or on one axis only
(only on X for example).
The scale value is applied to all the patterns in a given direction. The pattern scale factors are cumulative with the global scale factor (global scale factor * pattern scale factor = final pattern scale factor).
b) Adjust the X size, Y size and Z size of the preview box to see it appear in the 3D view.
c) Using the 3D view manipulators, drag the box onto the 3D texture support to compute the patterns.
The hexagonal mapping is created and an OPT3DMapping file is automatically generated and stored in the Speos input files folder.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
3D Texture Overview on page 305
3D Texture allows you to simulate micro texture by modeling and projecting millions of geometrical items on a
geometry.
The 3D Texture algorithm's reference is the axis system. It defines a projection direction (Z axis) used to:
• Position each new element: the position is first computed in the plane, then projected on the body along the projection direction.
• Calculate the distance between two variable pitches: the distance between two elements (N and N+1) is the distance between the curve and the position of the first element (N) in the axis plane along the projection direction.
2. If not already done, draw two curves or lines that will determine the distribution of the mapping.
3. In the 3D view, click to select a pitch curve along X and to select a pitch curve along Y.
Note: X pitch curve must cut the yOz plane of the 3D texture. Y pitch curve must cut the xOz plane of the
3D texture.
4. To define the spacing of the patterns along X and/or Y, set the pitch ratio of each curve.
5. Set the following parameters:
• X angular offset
• Y angular offset
b) In the 3D view, click and select the imported/created object to limit the 3D texture distribution to that
specific face.
7. If you want to define an offset along Z direction on the projected patterns:
a) In the 3D view, click and select the support on which to apply the offset.
b) Type a value in Offset surface ratio to determine the offset of your patterns.
8. In the Pattern section, define the patterns' orientation.
9. If you want to define three scale factors to set the size of each pattern independently on the 3 axes (X, Y, Z):
a) Click to select an X scale surface, a Y scale surface, and a Z scale surface.
The scaling factor to apply to a specific mapping point is defined by the height of the point of this surface, at
the corresponding coordinates (X,Y).
Note: You can define a specific scale on all three independent axes (X,Y,Z axes) or on one axis only
(only on X for example).
b) Adjust the X size, Y size and Z size of the preview box to see it appear in the 3D view.
c) Using the 3D view manipulators, drag the box onto the 3D texture support to compute the patterns.
The variable pitches mapping is created and an OPT3DMapping file is automatically generated and stored in the Speos input files folder.
Related information
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
3D Texture Overview on page 305
3D Texture allows you to simulate micro texture by modeling and projecting millions of geometrical items on a
geometry.
Mapping Parameters
The Mapping file describes the number of patterns used in a 3D texture, their coordinates, orientation and scale
values with respect to the axis system.
The *.OPT3DMapping file is built according to the following structure:
• First line: number of patterns to be built in the texture.
• Following lines: x y z ix iy iz jx jy jz kx ky kz
º x y z: coordinates of the pattern's origin according to the axis system.
º ix iy iz: orientation of the pattern with respect to the X direction of the axis system.
º jx jy jz: orientation of the pattern with respect to the Y direction of the axis system.
º kx ky kz: pattern scale values for the x, y and z directions.
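For illustration, a minimal (hypothetical) mapping file describing two patterns, both aligned with the axis system and scaled to 1/100, could look like this:

2
0.0 0.0 0.0 1 0 0 0 1 0 0.01 0.01 0.01
0.5 0.0 0.0 1 0 0 0 1 0 0.01 0.01 0.01

The first line declares two patterns; each following line gives the origin (x y z), the X orientation (1 0 0), the Y orientation (0 1 0), and the scale values (0.01 in each direction).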
Related concepts
Scale Factors on page 310
A pattern can have a global scale factor and three independent pattern scale factors for X, Y and Z. A scale ratio is used to model small textures (1/100 mm for example).
b) Adjust the X size, Y size and Z size of the preview box to see it appear in the 3D view.
c) Using the 3D view manipulators, drag the box onto the 3D texture support to compute the patterns.
The mapping is created. If you want to modify the 3D texture distribution, open and edit the OPT3DMapping (.txt) file, then re-import it to apply your modifications.
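Because the mapping file is plain text, you can also generate it programmatically instead of editing it by hand. The following Python sketch is a hypothetical helper (not part of Speos) that writes a rectangular grid of identically oriented patterns in the structure described in Mapping Parameters:

def write_rectangular_mapping(path, nx, ny, pitch_x, pitch_y, scale=0.01):
    # First line: total number of patterns.
    lines = [str(nx * ny)]
    for i in range(nx):
        for j in range(ny):
            x, y = i * pitch_x, j * pitch_y
            # x y z, X direction (ix iy iz), Y direction (jx jy jz), scales (kx ky kz)
            lines.append(f"{x} {y} 0.0 1 0 0 0 1 0 {scale} {scale} {scale}")
    with open(path, "w") as f:
        f.write("\n".join(lines))

write_rectangular_mapping("grid.OPT3DMapping", nx=10, ny=10, pitch_x=0.5, pitch_y=0.5)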
Related information
Mapping File on page 308
The Mapping file defines the position, orientation, and scale of each element in the texture over the support. The mapping file is a text file with an *.OPT3DMapping extension.
Creating a 3D Texture on page 312
This page shows the first steps of a 3D texture creation.
Note: We recommend that you first create the origins on which you want to position the instances of the source pattern.
1. From the Light Simulation tab, in the Component section, click Speos Pattern .
2. In the Pattern section, browse and select a Ray File source (*.ray *.tm25ray) or a Speos Light Box file
(*.SPEOSLightBox) from which you want to create multiple instances.
a) Define the Type of the flux between Luminous flux (lm) and Radiant flux (W).
b) Define the flux value of the instances:
• Set From Ray File to True to use the flux from the ray file.
• Set From Ray File to False and manually define a value for a custom flux.
c) In One Layer Per Instance, define if you want to separate the data per instance in layers in the XMP result.
One layer will represent one instance in the result:
• Set to True if you want to create one layer for each instance in the XMP result.
• Set to False if you want to create only one layer for all instances in the XMP result.
Warning: In versions prior to 2023 R1, passwords were not hidden. As soon as you open a project containing passwords, they are hidden and you cannot retrieve them. Make sure to save your passwords in a safe place before opening your project in 2023 R1 or a subsequent version.
b) In One Layer Per Instance define if you want to separate the data per instance in layers in the XMP result.
One layer will represent one instance in the result:
• Set to True if you want to create one layer for each instance in the XMP result.
• Set to False if you want to create only one layer for all instances in the XMP result.
c) In One Layer Per Source define if you want to separate the data per source in the XMP result. One layer will
represent one source in the result:
• Set to True if you want to create one layer for each source in the XMP result.
• Set to False if you want to create only one layer for all sources in the XMP result.
Note: If you set both One Layer Per Instance and One Layer Per Source to True, one layer per source per instance is created (for example, 2 sources and 3 instances result in 6 layers).
• In case of Speos Light Box, you can define how to preview the Light Box in the 3D view:
Note: In case of a complex Speos Light Box, Facets may take a long time to display, whereas Bounding box may be faster.
7. Click Validate .
10: Simulations
Simulations allow you to give life to the optical system in order to generate results.
Figure: an Interactive Simulation, a PNG result of an Inverse Simulation, and an XMP result of a Direct Simulation.
Simulations allow you to materialize and test an optical system by propagating rays between the key components of an optical simulation (materials, sources, sensors).
Different types of simulations are available to cover the different needs that might appear along the entire design
process.
From dynamic to deterministic simulation, you can visualize how the light behaves in a given optical system and analyze the results of such simulations.
Types of Simulations
• The Interactive Simulation allows you to visualize in the CAD the behavior of light rays in an optical system. This
simulation can be really useful to quickly understand how a design modification can impact the optical behavior.
• The Direct Simulation allows you to propagate a large number of rays from sources to sensors through an optical
system (a geometry).
• The Inverse Simulation allows you to propagate a large number of rays from sensors (a camera or an eye) to sources
through an optical system.
• The LiDAR Simulation allows you to generate output data and files that enable you to analyze a LiDAR system and configuration.
In Speos
You can create any kind of simulation depending on your configuration.
Different simulation outputs may be generated depending on the type of simulation performed:
• HTML reports that provide detailed information about the simulation.
• Extended Maps (XMP), useful to analyze the results or test a measure according to specific regulations.
• PNG results
• .ldt, .ies files (intensity files generated when creating an interactive or direct simulation including an intensity
sensor).
• HDRIs
• Speos360 files
• Ray files (when the generation is enabled during simulation definition).
• LPF, LP3 files (generated when Light Expert is activated).
Some tools are available to manage the simulation results, such as Light Expert or the LABS viewers, allowing you to analyze and perform measurements on the results.
Related information
Inverse Simulation on page 372
The Inverse Simulation allows you to propagate a large number of rays from a camera or a sensor to the sources
through an optical system.
Interactive Simulation on page 358
The Interactive Simulation allows you to visualize the behavior of light rays in an optical system.
Direct Simulation on page 365
The Direct Simulation allows you to propagate a large number of rays from sources to sensors through an optical
system.
LiDAR on page 408
LiDAR is a remote sensing technology using pulsed laser light to collect data and measure the distance to a target. LiDAR sensors are used to develop autonomous driving vehicles.
CPU Simulations
Table: CPU simulation compatibility (Simulation / Source / Sensor).
Warning: During an Inverse simulation only, the first pixel of a sensor determines the medium in which the sensor is located. Make sure the sensor does not overlap two different media at the same time, otherwise you may generate propagation errors and wrong results.
Warning: An issue occurs in Inverse simulation only when the sensor pixel size is larger than the size of the geometry:
• At the beginning of the simulation, a visibility map is computed. To compute it, a ray is launched at the center of each pixel of the sensor to determine whether the ray intersects geometries or not.
• During the simulation, rays are randomly emitted over the whole surface of each pixel. If the ray emitted (during the visibility map computation) at the center of a pixel intersects no geometry, no ray emitted in that pixel will ever intersect geometries.
This may then generate propagation errors and wrong results.
To solve this issue, you may use a sensor with smaller pixels.
1. When ambient and/or environment sources are enabled for direct simulation, only 2D and 3D irradiance sensors are taken into account.
2. Only for colorimetric and spectral radiance sensors.
3. The SPEOS Lens System model (.OPTDistortion v2 version) is not compatible with a deterministic algorithm in inverse simulation.
4. An Inverse Simulation, with Timeline deactivated and using a Camera Sensor, only generates an HDRI file (*.hdr).
Note: Geometries are embedded in the simulation even if not selected from the geometries list. For example,
if a 3D Texture element is selected for simulation, its associated support body is also embedded in the
simulation even if the support body was not selected as geometry.
6. Only one Light Expert Group can be added to a Direct simulation.
7. When a Direct simulation is composed of a sensor using the Gathering algorithm (Radiance, Human Eye, Observer, Immersive) and a polarizing surface state (unpolished, coated, polarizer, polar plate, optical polished, plugin, polar anisotropic surface), simulation results might not be accurate because gathering does not take into account the polarization of the ray, acting as if the ray were unpolarized.
8. When using a Ray File source in simulation, make sure all rays start from the same medium. Otherwise you will get unrealistic behavior and may face differences between GPU and CPU simulations.
GPU Simulations
For an exhaustive list of GPU Solver limitations and incompatibilities, see GPU Simulation Limitations on page 337.
• Files, components, sources or sensors that are not listed in the following table are not compatible with GPU
Simulations.
• GPU Simulations use the Monte Carlo algorithm.
• GPU Simulations simulate all sensors at once.
CAUTION: As all sensors are loaded into memory at the same time, the Video RAM might become saturated
when using many sensors.
Warning: Propagation errors are not managed by GPU simulations. This may lead to inconsistent results
between CPU and GPU simulations. For instance, when a Surface Source is considered as tangent to a body.
1. When ambient and/or environment sources are enabled for direct simulation, only 2D and 3D irradiance sensors are taken into account.
2. Only for colorimetric and spectral radiance sensors.
3. Camera Sensors using dynamic parameters (Trajectory file and Acquisition parameters) in an Inverse Simulation using the Timeline are compatible with the GPU Compute.
4. An Inverse Simulation, with Timeline deactivated and using a Camera Sensor, only generates an HDRI file (*.hdr).
5. Inverse Simulations using Display Sources are not compatible with the FTG option activated (Fast Transmission Gathering).
6. When a Direct simulation is composed of a sensor using the Gathering algorithm (Radiance, Human Eye, Observer, Immersive) and a polarizing surface state (unpolished, coated, polarizer, polar plate, optical polished, plugin, polar anisotropic surface), simulation results might not be accurate because gathering does not take into account the polarization of the ray, acting as if the ray were unpolarized.
7. When using a Ray File source in simulation, make sure all rays start from the same medium. Otherwise you will get unrealistic behavior and may face differences between GPU and CPU simulations.
Note: Geometries are embedded in the simulation even if not selected from the geometries list. For example,
if a 3D Texture element is selected for simulation, its associated support body is also embedded in the
simulation even if the support body was not selected as geometry.
Note:
• Files, components, sources or sensors that are not listed in the following table are not compatible with
Speos Live Preview.
• Speos Live Preview is not compatible with propagation in ultraviolet or infrared.
• Speos Live Preview simulations use the Monte Carlo algorithm.
Direct Simulation
• Sources: Ray File Source, Surface Source, Luminaire Source, Display Source
• Sensors: Irradiance Sensor, Intensity Sensor (1), Radiance Sensor, Human Eye Sensor
Inverse Simulation
• Sources: Surface Source, Luminaire Source, Ambient Source, Display Source (3)
• Sensors: Irradiance Sensor, Radiance Sensor, Camera Sensor (2), Human Eye Sensor
Compatible files (both simulation types): Speos Light Box, BSDF 180 file (*.bsdf180), Unpolished file (*.unpolished), Perfect/rough colored mirror files (*.mirror), Anisotropic BSDF file (*.anisotropicbsdf), spectral intensity maps (when used as input of a surface source definition), Complete scattering file - BRDF (*.brdf), Coating file (*.coated), Advanced Scattering file (*.scattering), Simple Scattering file (*.simplescattering), Mirror, Lambertian and Optical Polished built-in models, Non-fluorescent material (*.material) files
1. Intensity Sensors with Near Field activated are not supported.
2. Camera Sensors using dynamic parameters (Trajectory file and Acquisition parameters) in an Inverse Simulation using the Timeline are compatible with the Live Preview.
3. Inverse Simulations using Display Sources are not compatible with the FTG option activated (Fast Transmission Gathering).
Material Definition
Surface Optical Properties do not support:
• Texture normalization: Color from BSDF and Color from texture
• *.fluorescent file format
• White Specular option in *.anisotropicbsdf file format
• *.retroreflecting file format
• SOP Plugin
Volume Optical Properties do not support:
• Fluorescence
• Birefringence
• Index Gradient
• Metallic
• Non-homogeneous volume
Light Sources
Surface Sources do not support Exit Geometries
Ray File Sources do not support Exit Geometries
Thermic Surface Sources are not supported
Ambient Sources:
• Are not supported in Direct Simulation
• Natural Light Ambient Sources do not support Night Sky model (Moon and Stars)
• US Standard Atmosphere 1976 Ambient Sources are not supported
• MODTRAN Sources are not supported
Lightfield Sources are not supported
Sensors
The GPU Solver does not support:
• 3D Energy Density Sensors
• LiDAR Sensors
• Geometric Rotating LiDAR Sensors
• Lightfield Sensors
Irradiance Sensors:
• Only support the Planar Integration type
3D Irradiance Sensors:
• Only support the Planar Integration type
Intensity Sensors do not support:
• Polar Intensity (IESNA / Eulumdat)
• Near Field in Conoscopic (XMP)
Camera Sensors support the SPEOS Lens System model v1.0 and v2.0. The v2.1 version is not supported.
GPU hardware memory must be adequate for the VR sensor use cases; otherwise memory issues can occur (for example, a low-end GPU with a very high sensor resolution).
Simulation
The GPU Solver does not support:
• Interactive Simulations
• HUD Optical Analysis
• LiDAR Simulations
• Geometric Rotating LiDAR Simulations
General Options:
• GPU Solver cannot handle Ray tracer precision set to Double precision and will always run as Single precision.
Inverse Simulations do not support:
• Deterministic algorithm
• Optimized Propagation
• Splitting
• Number of gathering rays per source
• Maximum gathering error
Result
The Depth of field parameter from the Virtual Human Vision Lab is not supported in XMP maps generated with a GPU Simulation.
Light Expert
The GPU Solver does not support:
• Light Expert
• Result data separated by sequence
• Result data separated by polarization
• Generation of *.lpf and *.lp3
The result data separated by surface are supported only in Direct Simulations
Components
The GPU Solver does not support:
• 3D Textures
• Polarization plates
Reporting
• The Simulation Report does not display the Error Tracking.
• There is no simulation data in *.Speos360 results.
• The detailed simulation report is available in the XMP results.
On GPU, a gathering ray is emitted as soon as a ray hits an optical polished surface: Speos then emits a gathering ray in the direction of the sensor. All gathering rays located before a propagation error are emitted and integrated into the sensor.
On CPU, the gathering ray is emitted by parsing the full ray trace, once a ray has been fully propagated through the system.
• If the ray has encountered a propagation error, the ray is not integrated into the sensor.
• If the ray has not encountered a propagation error, the gathering ray is integrated into the sensor.
Good Practice
Reduce the propagation errors for the CPU Simulation to get nearly the same result as the GPU Simulation, by reducing the Geometrical Distance Tolerance.
Note: The meshing applied on geometries is not considered in the RAM estimation.
The following sensors are not considered in the RAM estimation:
• 3D Energy Density sensor
• 3D Irradiance sensor
• LiDAR sensor
• Light Field sensor
Tip: If an XML template is used in sensors, deactivating Filtering options may roughly halve the required memory.
Automatic Compute
Note: The Automatic compute is only available from the feature contextual menu.
However, when the feature requires a certain amount of memory resources, the Compute option is used to
generate the feature or launch the simulation.
Interactive Simulation can be switched from one mode to another depending on your needs:
• Activate the Automatic compute when you want the simulation to be updated every time an input is modified (best suited for light simulations).
• Deactivate this mode when working with heavy simulations to avoid unwanted updates when modifying an input.
GPU Compute
The GPU Compute option runs simulations on your computer using the cores of your GPU. GPU compute offers a faster alternative to classic CPU simulation.
To define the GPU to use for the compute:
1. Click File > Speos options.
2. Click the Light Simulation section.
3. In the GPU tab, in the GPU simulation section, check the GPUs to use.
Note: If you select multiple GPUs, simulations will consume the sum of the equivalent cores per GPU.
If you select no GPU, Speos will automatically select the most powerful available.
To get a list of simulations compatible with GPU Compute, see Simulation Compatibility.
Note: This option is compatible with NVIDIA GPUs only. GPUs supported are NVIDIA Quadro P5200 or higher.
The Preview option allows you to compute the simulation using progressive rendering with the most powerful
GPU available on your computer.
The result is displayed in a dedicated window and calculated in real-time.
With the simulation preview, you can pause the computation and change the color layout/color scheme used to
display the result. You can also:
• Activate the Human Vision, Local Adaptation mode to define the accommodation on a fixed value of the luminance
map.
• Activate the Human Vision, Dynamic Adaptation 2019 mode to enable the adaptation of the human eye.
It models the fact that the eye adapts locally as the viewer scans the different areas of the luminance map.
For more information, refer to Parameters of Human Vision.
• Check Maintain Lightness when True Color or Human Vision mode is activated to keep the lightness and hue of
color inside the monitor gamut. Otherwise colors outside the monitor gamut will be saturated.
• Save the current live preview result as XMP (*.xmp) or as picture (*.png, *.jpg, *.hdr, *.exr) when you reach the
targeted design performance.
Note: Live Preview options (Human Vision and Maintain Lightness) are not saved in the exported result.
Note: If you do not see the save result option, refer to Setting Speos Preferences to allow the result
generation.
Note: When computing simulations containing camera sensors, the color layout cannot be changed and
the level controller is unavailable.
Note: The simulation preview should not be used for validation purposes.
To get a list of the files that are compatible with Speos Live Preview, see Simulation Compatibility.
External Simulations
External Simulations are simulations run through Speos Core.
Thanks to Speos Core, exported simulations can be run locally or on a network while keeping Speos available.
For more information, see Speos Core.
Related tasks
Using the Feature Contextual Menu on page 31
This page lists all the operations that can be performed from the features' contextual menu.
Related information
Speos Core on page 350
Speos Core is used to run External Simulations. Thanks to Speos Core, exported simulations can be run locally or
on a network while keeping Speos available.
Note: The Interactive Live Preview feature is in BETA mode for the current release.
Description
The Interactive Live Preview tool lets you directly see, in the currently running Live Preview window, the changes applied to your project without having to launch a simulation to see the result. This saves you from launching a full simulation to see the changes, as it does not mesh the optical system. This way you can quickly understand which modifications have an impact and which are not supported.
You can:
• Modify most numerical values in sources, sensors and material definitions, as well as file paths (spectrum, IES, etc.).
• Move an object as long as it is not bound to a geometry. When an object is oriented by an axis system, moving the axis system is taken into account in the Live Preview update.
Note: Refer to Parameters Compatibility with Interactive Live Preview to make sure that the parameters you change are compatible with the Interactive Live Preview. If you modify an incompatible parameter, a warning is raised and the Update Preview button is not available.
Example
The following video shows an example of how to use the Interactive Live Preview in the case of a parameter change and in the case of a Timeline parameter change:
• Parameter changes are applied to the Live Preview rendering when you click Interactive Live Preview.
• In the case of a Timeline parameter change, the change is automatically applied to the Live Preview rendering without having to click Interactive Live Preview.
Speos detects that modifications have been made to the project. The Update Preview button becomes available.
10.2.5.3. Using the Interactive Live Preview with the Timeline Parameter
The following procedure helps you use the Interactive Live Preview in the Live Preview window in case of Timeline
parameter change of an Inverse Simulation.
You must have created an Inverse simulation.
You must have run first a Preview of the simulation.
1. After running the Preview, leave the Live Preview window open.
2. In the Inverse Simulation definition, modify the Timeline parameter.
The Live Preview updates automatically without having to click the Update Preview button.
Sources
Source Parameters
Surface Source • Flux
• Intensity
• Spectrum
• Emissive
CIE Standard General Sky Source • Axis System
• CIE Type
• Luminance
• Sun Automatic/Manual
• Time zone and location
Sensors
Sensor Parameters
Irradiance Sensor • Axis System
• XMP Template
• Integration type (only Planar type supported)
• X/Y Range
• X/Y Sampling and Resolution
• Wavelength
• Integration Direction
Radiance Sensor • Definition from
• Observer Type
• Focal Length
• Axis System
• XMP Template
• X/Y Range
• X/Y Sampling and Resolution
• Wavelength
• Integration Angle
Optical Properties
Feature Parameters
Material • Volume properties
• Surface properties
• Use texture
Geometry Properties
Feature Parameters
UV Mapping • Geometry
Component
Feature Parameters
Speos Light Box Import • Moving the Light Box
• Trajectory file
To export a simulation:
You must have defined a simulation but not run it.
Warning: Exporting a project breaks the link between your project and the exported simulation. The exported
simulation is not available in the Speos Simulation tree. If you need to export a simulation and to keep it in
the Speos Simulation tree, refer to Linked Export.
1. In the Speos Simulation tree, right-click the simulation you want to export.
2. Click Export .
3. Browse and select a path for your *.speos folder.
4. Name the file and click Save.
The simulation has been exported in the folder of the same name as the *.speos file along with the input files.
Now you can use Speos Core to run the simulation out of Speos.
Description
The main benefit of externalizing a simulation is the ability to perform parallel processing while still being able to
use Speos.
Two main types of external simulations can be used to run exported simulations (*.speos files) using the Speos Core
interface:
• Local Update: runs the simulation with Speos Core on your workstation.
• Network Update: runs the simulation with Speos HPC or Ansys Cloud.
Local Update
The Local Update allows you to run a simulation while keeping Speos available. The exported simulation is launched
locally on the workstation by using the Speos Core interface.
Depending on the simulation export type used, the simulation results are generated in the .speos directory (Export)
or both in the .speos directory and in Speos tree (Linked Export).
Network Update
The Network Update allows you to run a simulation with Speos HPC or the Ansys Cloud service while keeping Speos
available.
Whether your network corresponds to a Linux cluster, a Windows cluster or the Ansys Cloud service, the network
configuration is performed directly from the Speos Core interface.
Once the network environment is configured, simulations can be exported and run using two different methods:
• Manual method: Export the simulation manually using Export or Linked Export, then run it through Speos Core.
This option allows you to specify and adjust the settings used for the simulation during the job submission.
• Automatic method: Launch the simulation using the Speos HPC Compute command from the Speos interface.
This command combines a Linked Export and a job submission.
Note: Before using this command, you need to create a first simulation to submit in order to configure
the cluster for Speos HPC or Ansys Cloud.
Related concepts
Network Update on page 352
The Network Update runs a simulation with Speos HPC or Ansys Cloud while keeping Speos available.
Related tasks
Running a Simulation Using a Local Update on page 351
The following procedure helps you run a simulation in Speos Core out of Speos on your workstation while keeping
Speos available.
Note: The GPU local update considers the cores of the GPU you set in the Speos Options to run the
simulation. For more information, refer to the GPU Compute section of the Computing Simulations page.
Note: Error XX% corresponds to the evolution of the total error number during the simulation.
The simulation results are available in the folder for both Export and Linked Export Simulations and from the Speos
Simulation tree for Linked Export Simulations.
Note: For more information on how to configure the cluster, refer to the Linux Configuration or the
Windows Configuration.
6. Click Speos HPC simulation to configure and submit the simulation job.
Note: For more information on how to configure and submit a simulation job, refer to Submitting a
Simulation Job from Speos Core.
The configuration you defined will be the one that will be used when using the Speos HPC Compute command.
Note: For more information on the Cloud principle, licensing conditions or any other information regarding
the Ansys Cloud, refer to the Ansys Cloud guide.
Note: If you need help regarding subscriptions, get in touch with your Ansys representative.
Related tasks
Solving Simulations in the Ansys Cloud on page 354
This procedure shows you how to solve simulations in the Cloud using Speos Core to specify the cluster configuration
and Ansys Cloud as a job scheduler.
4. In Job name, type a meaningful job simulation name that will help you identify the job in the job monitor.
Note: By default this name is your user name with a simulation index. The index is incremented at each
new simulation.
5. From the Select Region drop-down list, select the Cloud region that is the closest to you.
Note: A flexible region allows you to define the Total number of cores below.
6. In Configuration choose from a list of pre-configured hardware configurations that have been optimized for the
solver that you are using. Each configuration has a set number of cores, nodes, and Ansys Elastic Units (AEUs)
per hour.
7. With the Total number of cores slider, define the maximum number of cores to use for the simulation.
We recommend using a multiple of the number of cores available per machine, as jobs are launched on several nodes with the same number of cores per node. For example, for HB60rs, define a multiple of 60.
8. Check Download results after completion if you want the results to be downloaded to the .speos folder.
9. In Number of rays, define the number of rays used for simulation. This number is retrieved from the *.speos
data set in Speos but can be adjusted if needed.
10. In Simulation time, define the maximum time the simulation can run.
Note: This duration cannot exceed the "Maximum scheduler wall clock", which is 1 hour.
11. Tick Disable ray files and lpf/lp3 output if you want to disable the generation of these outputs in order to
improve simulation time and performance.
12. Click Submit job.
The job input files begin uploading to a storage directory in the Cloud. Once the files have been uploaded, the
Cloud service will allocate resources for the job, and the job will start running on the Cloud hardware.
Note: You will receive an email notification when a job starts, completes, fails, or is stopped, enabling
you to keep track of jobs even when you are not using the Cloud portal.
13. Once the simulation is complete, from the Ansys Cloud portal, click Jobs, select your job from the list, then click
Files and download your output result files into the Speos isolated files folder.
The simulation is solved through Ansys Cloud and can be managed on the Cloud portal.
For more information on file transfers or job monitoring on the Cloud portal, refer to the Solving in the Cloud section
of the Ansys Cloud Guide.
The configuration you defined will be the one that will be used when using the Speos HPC Compute command.
Related tasks
Setting Up the Cloud Environment on page 353
This procedure shows you how to install and configure your environment in order to be able to solve simulations
in the cloud.
General Description
The propagation error is expressed as a fraction of the rays emitted (Total number of errors in the HTML report) or
a fraction of the energy emitted (Error power in the HTML report).
We usually recommend a Total number of errors ratio below 3%, and below 1% in case of a sensitive project.
Note: The fraction of energy emitted is usually more valuable to assess the criticality of the error amount: you may have plenty of rays in error in a part that is insignificant to your project because almost no energy is propagated there.
In the following report example, 100006 rays are emitted. 98 rays are in error (a ratio of about 0.1%), representing an energy of 0.00098 W.
Warning: The more propagation errors you have, the more visible the difference between the CPU result and the GPU result for the same simulation. For more information, refer to GPU/CPU Differences.
Note: This propagation error should be corrected by changing the geometry modeling.
2D tangency error
A 2D tangency error occurs when a solid geometry and a surface geometry, or two surface geometries, are tangent in the same simulation.
Note: Speos can manage two tangent solid geometries in a same simulation.
Note: This propagation error can be avoided by separating geometric elements by at least the geometrical optical precision.
Tip: You can update the simulation each time you modify the model to immediately see how your modification impacts the light propagation.
Note: If you do not have the .material file corresponding to the media you want to use for simulation,
use the User Material Editor, then load it in the simulation.
5. In the 3D view, click to select geometries, to select sources and to select sensors.
The selected geometry, source(s) and sensor(s) appear in the Linked objects.
Note: If you want to customize the interactive simulation advanced settings, see Adjusting Interactive
Simulation Settings .
6. Preview the Meshing of the geometries before running a simulation and adjust meshing values if needed to avoid simulation or geometry errors.
The Interactive Simulation is created and the rays are displayed in the 3D view to illustrate the light's optical behavior
with respect to the geometry.
Related concepts
Light Expert on page 431
The Light Expert is a tool that allows you to specify what ray path to display in the 3D view.
Related information
Adjusting Interactive Simulation Settings on page 360
This page describes the simulation settings that you can adjust to customize your interactive simulation.
Geometry's Settings
Optical Properties
Texture application can have an impact on the simulation results. If textures have been applied in the scene, activate Texture and/or Normal Map if needed.
The Texture normalization determines the rendering of the texture:
• With None, the simulation result uses both the image texture and the texture mapping optical properties.
• Color from Texture means that the simulation result uses the color and the color lightness of the image texture.
• Color from BSDF means that the simulation result uses the BSDF information of the texture mapping optical properties.
Meshing
Note: For same values of meshing, meshing results can be different between the CAD platforms in which
Speos is integrated.
With the meshing settings, you can lighten memory resources and accelerate simulation in specific cases.
• Meshing Sag and Step Mode
º Proportional to Face size: creates a mesh of triangles that are proportional to the size of each face of the object. The sag and step values therefore depend on the size of each face.
º Proportional to Body size: creates a mesh of triangles that are proportional to the size of the object. The sag and step values therefore depend on the size of the body.
º Fixed: creates a mesh of triangles fixed in size regardless of the size of the body or faces. The mesh of triangles is forced on the object.
• Meshing sag value: defines the maximum distance between the mesh and the geometry.
Note: If the Meshing sag value is too large compared to the body size, Speos recalculates it with a Meshing sag value of the body size divided by 128 to better correspond to the body size.
• Meshing step value: defines the maximum length of a segment (in mm).
Note: In Parasolid modeler, for a Heavyweight body, the Meshing step value precision decreases when
applying a value below 0.01mm.
Simulation
Meshing
Note: When choosing the Automatic mode, the ray tracing method chosen by Speos is available in the
simulation HTML report.
Propagation
• The Geometrical distance tolerance defines the maximum distance to consider two faces as tangent.
• The Maximum number of surface interactions allows you to define the maximum number of ray impacts during propagation. When a ray has interacted N times with the geometry, the propagation of the ray stops. This option can be useful to stop the propagation of rays in specific optical systems (for example in an integrating sphere, in which a ray is never stopped).
• The Weight option allows you to take the ray's energy into account. Each time a ray interacts with a geometry, it loses some energy (weight).
º The Minimum energy percentage value defines the minimum energy ratio required to continue propagating a ray with weight. It helps the solver to better converge according to the simulated lighting system.
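As an illustration of how these parameters interact, the following sketch (illustrative Python, not the Speos implementation) shows the stop logic for a weighted ray that keeps 80% of its energy at each interaction:

def should_stop(interactions, energy_pct, max_interactions, min_energy_pct):
    # Propagation stops after max_interactions impacts, or once the
    # remaining energy drops below the minimum energy percentage.
    return interactions >= max_interactions or energy_pct < min_energy_pct

energy_pct, interactions = 100.0, 0
while not should_stop(interactions, energy_pct, max_interactions=100, min_energy_pct=0.5):
    energy_pct *= 0.8  # example: the ray keeps 80% of its energy per interaction
    interactions += 1
# Here the energy threshold is reached first: after 24 interactions,
# 100 * 0.8**24 is about 0.47%, which is below the 0.5% minimum.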
Interactive Simulation
Display
• Draw rays
Allows you to display the ray trajectories in the 3D view. This option is activated by default when creating an interactive simulation.
Note: Deactivate this option when working with Light Expert results to prevent LXP rays and simulation
rays from overlapping. For changes to be taken into account, you need to recompute the simulation.
• Draw impacts
Allows you to display the ray impacts in the 3D view.
Note: You can display the ray impacts or trajectories alone or together.
Result
Impact report: allows you to integrate details such as the number of impacts, position and surface state into the HTML simulation report.
Related information
Creating an Interactive Simulation on page 358
The Interactive Simulation allows you to analyze, visualize and validate the effect of your model on the optical
behavior of the light.
Understanding Advanced Simulation Settings on page 396
The following section describes the advanced parameters to set when creating a simulation.
Note: The projected grid is generated as a result of an interactive simulation containing a camera sensor.
Once a grid is generated, you can edit its parameters. Each time you generate a grid, the grid uses the parameters of the previously generated grid as default parameters.
Connection
Pixels' Connection
With grid connection parameters, you can connect two adjacent pixels of the grid that do not belong to the same
body.
To connect two adjacent pixels, they need to fulfill one of the two parameters, Min distance tolerance (mm) or Max incidence (deg):
• The parameter Min distance tolerance (mm) has priority over the parameter Max incidence (deg).
• If the two adjacent pixels do not fulfill the parameter Min distance tolerance (mm), then Speos checks whether they fulfill the parameter Max incidence (deg).
• The two adjacent pixels can fulfill both parameters.
Parameters
• Min distance tolerance (mm): the distance tolerance below which two adjacent pixels are connected by a line. Example: for a Min distance tolerance of 5 mm, all adjacent pixels whose distance is less than 5 mm are connected by a line.
• Max incidence: the maximum angle under which two projected pixels are connected by a line. Example: for a Max incidence of 85°, if the angle to the normal (normal of the plane of the two pixels) of the pixel farther from the origin is less than 85°, then the two pixels are connected by a line.
• Max distance from camera (mm): the maximum distance between a pixel and the camera. With Max distance from camera, you can limit the visualization to a specific distance from the camera.
• Authorize connection between bodies: allows you to decide whether to display the connection between bodies that fulfill one of the parameters (Min distance tolerance or Max incidence).
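To summarize the priority between the two connection criteria, here is an interpretive sketch (illustrative Python, not the Speos implementation):

def pixels_connected(distance_mm, incidence_deg,
                     min_distance_tolerance_mm, max_incidence_deg):
    # Min distance tolerance has priority: connect if the pixels are close enough.
    if distance_mm < min_distance_tolerance_mm:
        return True
    # Otherwise, fall back to the incidence criterion.
    return incidence_deg < max_incidence_deg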
Graduation
With the grid graduations, you can modify the two levels of graduation, Primary step (yellow default color) and
Secondary step (green default color).
To lighten the visualization, we recommend increasing the graduation step parameters when the grid resolution becomes high.
Note: Setting the graduation steps to zero prevents the display of the grids.
Highlights
These parameters allow you to define four lines to highlight on the grid.
2. If you want a ray file to be generated at the end of the simulation, from the Ray File drop-down list:
• Select SPEOS without polarization to generate a ray file without polarization data.
• Select SPEOS with polarization to generate a ray file with the polarization data for each ray.
• Select IES TM-25 with polarization to generate a .tm25ray file with polarization data for each ray.
• Select IES TM-25 without polarization to generate a .tm25ray file without polarization data.
Note: A ray file is approximately 30 MB per million rays (for example, a 100-Mray file is about 3 GB). Consider freeing space on your computer prior to launching the simulation.
3. If you want the simulation to generate a Light Path Finder file, set Light Expert to True.
4. If you activated Light Expert, in LPF max path, you can adjust the maximum number of rays the light expert file
can contain.
Note: If you do not have the .material file corresponding to the media you want to use for simulation,
use the User Material Editor, then load it in the simulation.
6. In the 3D view, click to select geometries, to select sources and to select sensors.
The selected geometry, source(s) and sensor(s) appear in their respective lists as Linked objects.
7. If Light Expert is activated, you can activate LXP for each sensor contained in the simulation to perform a light expert analysis.
In case you add a Light Expert Group for a multi-sensor light expert analysis, LXP is automatically activated.
8. Define the criteria to reach for the simulation to end:
Important: The maximum number of rays a sensor's pixel can receive is 16 million. Beyond this number, a saturation effect appears, which leads to incorrect results. You can only check for saturation at the end of the simulation. In this case, make sure to modify the parameters to avoid this issue.
• To stop the simulation after a certain number of rays were sent, set On number of rays limit to True and define
the number of rays.
• To stop the simulation after a certain duration, set On duration limit to True and define a duration.
Note: If you activate both criteria, the first condition reached ends the simulation.
If you select none of the criteria, the simulation ends when you stop the process.
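The two stop criteria combine as a simple OR: whichever activated criterion is reached first ends the simulation. A minimal sketch of this logic (illustrative Python, not the Speos implementation; None stands for a deactivated criterion):

def simulation_should_stop(rays_sent, elapsed_s, rays_limit=None, duration_limit_s=None):
    if rays_limit is not None and rays_sent >= rays_limit:
        return True  # On number of rays limit reached
    if duration_limit_s is not None and elapsed_s >= duration_limit_s:
        return True  # On duration limit reached
    return False     # neither criterion active or reached: only a manual stop ends the run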
Note: If you want to adjust the Direct Simulation advanced settings, see Adjusting Direct Simulation
settings.
9. Preview the Meshing of the geometries before running a simulation and adjust meshing values if needed to avoid simulation or geometry errors.
10. In the 3D view, click Compute
Tip: To compute the simulation faster using GPU cores, click the GPU Compute option .
To compute the simulation in progressive rendering, click Preview . This option opens a new window
and displays the result as the simulation is running.
Preview is compatible with NVIDIA GPUs only. GPUs supported are NVIDIA Quadro P5200 or higher.
The Direct Simulation is created along with .xmp results, an HTML report and if Light Expert is activated, a .lpf file. The
.xmp result also appears in the 3D view on the sensor(s).
In case of a Light Field Source used, an Optical Light Field *.olf file is generated.
If only a Natural Light Ambient Source is used in the Direct Simulation, no power is noted in the HTML report, as no power is set in the definition of the Source (unlike other sources).
Related concepts
Light Expert on page 431
The Light Expert is a tool that allows you to specify what ray path to display in the 3D view.
Related information
Adjusting Direct Simulation Settings on page 368
This page describes the simulation settings that you can adjust to customize your direct simulation.
Geometry's Settings
Optical Properties
Texture application can have an impact on the simulation results. If textures have been applied in the scene, activate Texture and/or Normal Map if needed.
Meshing
Note: For same values of meshing, meshing results can be different between the CAD platforms in which
Speos is integrated.
Note: In the Parasolid modeler, in case of a thin body, make sure to apply a fixed meshing sag mode and a meshing sag value smaller than the thickness of the body. Otherwise you may generate incorrect results.
With the meshing settings, you can lighten memory resources and accelerate simulation in specific cases.
• Meshing Sag and Step Mode
º Proportional to Face size: creates a mesh of triangles that are proportional to the size of each face of the object. The sag and step values therefore depend on the size of each face.
º Proportional to Body size: creates a mesh of triangles that are proportional to the size of the object. The sag and step values therefore depend on the size of the body.
º Fixed: creates a mesh of triangles fixed in size regardless of the size of the body or faces. The mesh of triangles is forced on the object.
• Meshing sag value: defines the maximum distance between the mesh and the geometry.
Note: If the Meshing sag value is too large compared to the body size, Speos recalculates it with a Meshing sag value of the body size divided by 128 to better correspond to the body size.
• Meshing step value: defines the maximum length of a segment (in mm).
Note: In Parasolid modeler, for a Heavyweight body, the Meshing step value precision decreases when
applying a value below 0.01mm.
Simulation
Meshing
Note: When choosing the Automatic mode, the ray tracing method chosen by Speos is available in the
simulation HTML report.
Propagation
• The Geometrical distance tolerance defines the maximum distance to consider two faces as tangent.
• The Maximum number of surface interactions allows you to define the maximum number of ray impacts during propagation. When a ray has interacted N times with the geometry, the propagation of the ray stops. This option can be useful to stop the propagation of rays in specific optical systems (for example in an integrating sphere, in which a ray is never stopped).
• The Weight option allows you to take the ray's energy into account. Each time a ray interacts with a geometry, it loses some energy (weight).
º The Minimum energy percentage value defines the minimum energy ratio required to continue propagating a ray with weight. It helps the solver to better converge according to the simulated lighting system.
Direct Simulation
Propagation
Fast transmission gathering
Fast Transmission Gathering accelerates the simulation by neglecting the light refraction that occurs when the light is transmitted through a transparent surface.
This option is useful when the transparent objects of a scene are flat enough to neglect the refraction effect on the direction of a ray (windows, windshields, etc.).
Note: Fast Transmission Gathering does not apply to 3D Texture, Polarization Plate and Speos Component
Import.
Backup
Automatic save frequency allows you to define a backup interval. This option is useful when computing long
simulations.
Note: A reduced number of save operations naturally increases the simulation performance.
Related information
Creating a Direct Simulation on page 365
The Direct Simulation is commonly used to analyze standard optical systems.
Understanding Advanced Simulation Settings on page 396
The following section describes the advanced parameters to set when creating a simulation.
2. If you want the simulation to generate a Light Path Finder file, set Light Expert to True.
3. If you activated Light Expert, in LPF max path, you can adjust the maximum number of rays the light expert file
can contain.
Note: If you do not have the .material file corresponding to the media you want to use for simulation,
use the User Material Editor, then load it in the simulation.
5. If you want to consider time during the simulation, set Timeline to True.
Note: This allows dynamic objects, such as Speos Light Box Import and Camera Sensor, to move along
their defined trajectories and consider different frames corresponding to the positions and orientations
of these objects in simulation.
6. In the 3D view, click to select geometries, to select sources and to select sensors.
The selected geometry, source(s) and sensor(s) appear in their respective lists as Linked objects.
Note:
• When you include Geometric Camera Sensors, you cannot include another sensor type and you must
not select a source. When you include Photometric / Colorimetric Camera Sensors, you must select a
source.
• If you include an Irradiance sensor in a non-Monte-Carlo Inverse simulation, geometries are considered
as absorbent.
7. If Light Expert is activated, you can activate LXP for each sensor contained in the simulation to perform a light expert analysis.
8. If some sources of your scene generate light that needs to pass through specific faces to reach the sensor (for
example if you have an ambient source that needs to pass through the windshield), define out path faces/sources:
a. In the 3D view, click and select the face(s) to be considered as out path faces.
The faces are selected and make the interface between the interior and exterior environments.
b. In the 3D view, click and select the source(s) to be considered as out sources.
The sources are selected and are linked to the out path faces selected.
9. If the Optimized propagation algorithm is set to None, in Stop Conditions, define the criteria to reach for the
simulation to end:
For more information on how to set Optimized propagation, refer to Monte Carlo Calculation Properties on
page 380.
• To stop the simulation after a certain number of rays were sent, set On number of passes limit to True and
define the number of passes.
In case of a GPU simulation, the number of passes defined corresponds to the minimum threshold of rays
received per pixel. That means while a pixel has not received this minimum number of rays, the simulation still
runs.
Note: If you activated the dispersion, this minimum threshold is multiplied by the sensor sampling.
That means for a number of passes of 100 and a sampling of 13, each pixel needs to receive at least
1300 rays (the progress of the number of rays per pixel is indicated in the Simulation Progress Bar).
In case of a CPU simulation, there is no minimum threshold of rays to be received per pixel: the simulation
stops when the number of passes is reached.
• To stop the simulation after a certain duration, set On duration limit to True and define a duration.
Note: If you activate both criteria, the first condition to be reached ends the simulation.
If you select none of the criteria, the simulation ends when you stop the process.
Note: If you want to adjust the Inverse Simulation advanced settings, see Adjusting Inverse Simulation
settings.
10. If the Optimized propagation algorithm is set to Relative or Absolute, in Stop Conditions, define the Absolute
stop value or Relative stop value to reach for the simulation to end.
For more information on how to set Optimized propagation, refer to Monte Carlo Calculation Properties on
page 380.
11. If Timeline is set to True, from the Timeline section, define the Start time.
Warning: From version 2022 R2, the Timeline Start parameter has been improved to accept time values
below 1 ms. This change impacts your custom scripts, which must be modified accordingly, as the
previous time format is no longer supported.
12. Preview the Meshing of the geometries before running a simulation and adjust meshing values if needed to avoid
simulation or geometries errors.
Tip: To compute the simulation faster using GPU cores, click the GPU Compute option.
To compute the simulation in progressive rendering, click Preview. This option opens a new window
and displays the result as the simulation is running.
Preview is compatible with NVIDIA GPUs only. Supported GPUs are NVIDIA Quadro P5200 or higher.
The Inverse Simulation is created along with .xmp results, an HTML report and if Light Expert is activated, a .lpf file. The
.xmp result also appears in the 3D view on the sensor(s).
If Timeline is not activated and the Inverse Simulation uses a Camera sensor, only an HDRI file (*.hdr) is generated.
If Timeline is activated and a Camera sensor is moving, the Inverse Simulation is considered as dynamic and only
generates a spectral exposure map for each sensor, including the static ones. This map corresponds to the acquisition
of the camera sensor and expresses the data for each pixel in Joules/m²/nm.
Related concepts
Light Expert on page 431
The Light Expert is a tool that allows you to specify what ray path to display in the 3D view.
Related information
Adjusting Inverse Simulation Settings on page 376
This page describes the simulation settings that you can adjust to customize your inverse simulation.
Geometry's Settings
Optical Properties
Texture application can have an impact on the simulation results. If textures have been applied in the scene, activate
Texture and/or Normal Map if needed.
The Texture normalization determines the rendering of the texture.
• None: the simulation result uses both the image texture and the texture mapping optical properties.
• Color from Texture: the simulation result uses the color and the color lightness of the image texture.
• Color from BSDF: the simulation result uses the BSDF information of the texture mapping optical properties.
Meshing
Note: For same values of meshing, meshing results can be different between the CAD platforms in which
Speos is integrated.
Note: In Parasolid mode, in case of a thin body, make sure to apply a fixed meshing sag mode and a meshing
sag value smaller than the thickness of the body. Otherwise you may generate incorrect results.
With the meshing settings, you can lighten memory resources and accelerate simulation in specific cases.
• Meshing Sag and Step Mode
º Proportional to Face size: creates a mesh of triangles that are proportional to the size of each face of the object.
The sag and step values therefore depend on the size of each face.
º Proportional to Body size: creates a mesh of triangles that are proportional to the size of the object. The sag
and step values therefore depend on the size of the body.
º Fixed: creates a mesh of triangles fixed in size regardless of the size of the body or faces. The mesh of triangles
is forced on the object.
• Meshing sag value: defines the maximum distance between the meshing and the geometry.
Note: If the Meshing sag value is too large compared to the body size, Speos recalculates it with a Meshing
sag value equal to the body size divided by 128 to better correspond to the body size.
• Meshing step value: defines the maximum length of a segment (in mm).
Note: In the Parasolid modeler, for a Heavyweight body, the Meshing step value precision decreases when
applying a value below 0.01 mm.
Simulation
Meshing
Note: When choosing the Automatic mode, the ray tracing method chosen by Speos is available in the
simulation HTML report.
Propagation
• The Geometrical distance tolerance defines the maximum distance to consider two faces as tangent.
• The Maximum number of surface interactions allows you to define the maximum number of ray impacts during
propagation. When a ray has interacted N times with the geometry, the propagation of the ray stops. This option
can be useful to stop the propagation of rays in specific optical systems (for example, an integrating sphere in
which a ray is never stopped).
• The Weight option allows you to activate the consideration of the ray's energy. Each time a ray interacts with
a geometry, it loses some energy (weight).
º The Minimum energy percentage value defines the minimum energy ratio to continue to propagate a ray with
weight. It helps the solver to better converge according to the simulated lighting system.
Inverse Simulation
Optical Properties
Activating Use rendering properties as optical properties allows you to automatically convert appearance properties
into physical parameters according to the following conversion table.
Algorithm
With the inverse simulation, you can select the calculation algorithm used to interpret your optical system.
You can choose between the Monte Carlo algorithm and a deterministic calculation. This selection impacts the
parameters to set in the Propagation section.
• The Monte Carlo algorithm is a randomized algorithm that allows you to perform probabilistic simulations. It
manages dispersion, bulk diffusion and multiple diffuse inter-reflections, and supports light expert analysis.
Note: To define the propagation settings, see Monte Carlo Calculation Properties.
• The deterministic algorithm allows you to perform deterministic simulations that produce results showing little
to no noise but that are considered as biased. This algorithm does not manage dispersion, bulk diffusion or light
expert analysis. You can create a deterministic simulation with or without generating a photon map.
Related information
Creating an Inverse Simulation on page 372
The Inverse Simulation allows you to reverse the light trajectory direction. The propagation is done from the sensors
to the sources. It is useful when needing to analyze optical systems where the sensors are small and the sources are
diffuse.
Understanding Advanced Simulation Settings on page 396
The following section describes the advanced parameters to set when creating a simulation.
Note: In a Monte Carlo inverse simulation, if the absorption value of a BRDF is negative, it is considered as
a null value.
Optimized Propagation
Note: The Optimized propagation algorithm is only compatible with the Radiance sensors.
The Optimized Propagation algorithms consist of sending rays from each pixel of the sensor until one of the stopping
criteria is reached.
• None: the same number of passes is used for each pixel of the image (current and default algorithm).
This algorithm may generate unbalanced results. Some pixels may have a good signal-to-noise ratio (SNR) whereas
some other pixels may show too much noise.
• Relative and Absolute: the algorithm adapts the number of passes per pixel to send the optimal number of rays
according to the signal each pixel needs. As a result, the SNR is adequate in areas where pixels need more rays
thus giving a balanced image.
These two modes are based on the same principle, however the method of calculation is slightly different.
º In Relative, the pixel's standard deviation is compared with a threshold value (error margin expressed as a
percentage) defined by the user. This value determines the error margin (standard deviation) tolerated. Rays
are launched until the standard deviation of the pixel is lower than the defined threshold value. All the values
of the map are then known with the same "relative" precision.
The standard deviation is normalized by the average signal and compared to the threshold value (percentage).
The stop condition evaluated on each pixel is:
σN / θ ≤ σr
σN: estimate of the standard deviation relative to the number of rays (N).
θ: average signal of the map.
σr: user-defined standard deviation (threshold).
The greater N is (the more rays are sent), the more σN (the standard deviation) converges to the threshold value
(σr). Here the standard deviation is normalized by the average (θ).
º In Absolute, the pixel's value is simply compared with a fixed threshold value (photometric value) defined by
the user. This photometric unit determines the error margin (standard deviation) tolerated for each pixel of the
map. Rays will be launched until the standard deviation of the pixel is lower than the defined threshold value.
All the values are thereby known with the same precision.
The standard deviation is simply compared to the threshold value. The stop condition evaluated on each pixel is:
σN ≤ σA
σN: estimate of the standard deviation relative to the number of rays (N).
σA: user-defined standard deviation (threshold).
The greater N is (the more rays are sent), the more σN (the standard deviation) converges to the threshold value
(σA).
Number of standard passes before optimized passes: corresponds to the minimum number of passes without
pass optimization (standard pass: all pixels emit rays; optimized pass: only pixels with a standard deviation higher
than the defined threshold emit rays).
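To picture how these stop conditions behave, the following minimal Python sketch (illustrative only, not the Speos solver; the per-pixel sample handling is hypothetical) emulates the convergence test described above for both modes:

    import statistics

    def pixel_converged(samples, mode, threshold):
        # samples  : list of per-pass values received by one pixel (hypothetical data)
        # mode     : "Relative" or "Absolute"
        # threshold: sigma_r (a percentage, e.g. 0.05) or sigma_A (a photometric value)
        if len(samples) < 2:
            return False
        # Estimate of the standard deviation relative to the number of rays N.
        sigma_n = statistics.stdev(samples) / (len(samples) ** 0.5)
        if mode == "Relative":
            # Speos normalizes by the average signal of the map; the pixel mean
            # is used here for simplicity.
            theta = statistics.mean(samples)
            return theta > 0 and sigma_n / theta <= threshold
        return sigma_n <= threshold  # Absolute: direct photometric comparison

A pixel keeps emitting rays (optimized passes) until this test returns True, which is why noisy areas automatically receive more rays than already-converged ones.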
Dispersion
With this parameter, you can activate the dispersion calculation. In optical systems in which the dispersion phenomena
can be neglected, the colorimetric noise is canceled by deactivating this parameter.
Note: This parameter should not be used with simulations involving diffusive materials.
For more details, refer to Dispersion.
Splitting
This option is useful when designing tail lamps.
Splitting allows you to split each propagated ray into several paths at their first impact after leaving the observer
point. Further impacts along the split paths do not provide further path splitting. This feature is primarily intended
to provide a faster noise reduction on scenes with optical polished surface as the first surface state seen from the
observer. An observer watching a car rear lamp is a typical example of such a scene.
Note: The split is only done at the first impact. On an optically polished surface, the ray is split into two
rays weighted using Fresnel's law. On other surfaces, there may be more or fewer split rays depending on
the surface model.
Without splitting, either the transmitted or the reflected ray is considered (only one of them is picked each time).
The choice (R or T) is made using Monte Carlo: the probability of reflection is the Fresnel coefficient for
reflection. Depending on the generated random number, the ray is either reflected or transmitted.
Close to normal incidence, the reflection probability is around 4%, which is low. Because of this low probability,
a lot of noise appears when observing the reflection of the environment. The splitting algorithm removes
this noise by computing the first interaction without using Monte Carlo.
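The 4% figure quoted above follows from the Fresnel reflectance at normal incidence. As a quick check (standard optics, assuming an uncoated air/glass interface with refractive index n = 1.5):

    def fresnel_normal_incidence(n1, n2):
        # Fresnel reflection probability at normal incidence: R = ((n1 - n2) / (n1 + n2))^2
        r = (n1 - n2) / (n1 + n2)
        return r * r

    print(fresnel_normal_incidence(1.0, 1.5))  # 0.04, i.e. ~4% reflection probability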
Note: You must take some precautions when using the layer operations tool of the Virtual Photometric Lab.
For instance, if the maximum gathering error is defined at 1% for a simulation and the flux of a source is
increased 10 times with the layer operations tool, the maximum gathering error becomes 10% for this source.
Note: Fast Transmission Gathering does not apply to 3D Texture, Polarization Plate and Speos Component
Import.
During an inverse simulation, intermediate results can be saved. Setting an intermediate save frequency is useful
when computing long simulations and wanting to check intermediate results.
Setting this option to 0 means that the result is saved only at the end of the simulation. If the simulation is stopped
without finishing the current pass, no result is available.
Note: A reduced number of save operations naturally increases the simulation performance.
In the case of high sensor sampling, the save operation can take up to half of the simulation time when the automatic
save frequency is set to 1.
Related information
Adjusting Inverse Simulation Settings on page 376
This page describes the simulation settings that you can adjust to customize your inverse simulation.
Deterministic Calculation Properties on page 384
This section describes the Deterministic calculation properties to set when creating an inverse simulation.
Photon Mapping
Photon Mapping is a luminance algorithm that allows you to realistically simulate and render the interaction of light
with objects.
This algorithm takes into account diffuse inter-reflections, caustics and surface contributions of the optical system.
The Photon Mapping process is the following:
• The first step of photon mapping is a photon propagation phase. A first pass is done using a Monte Carlo direct
simulation to send photons from sources into the scene. Photons are then stored in a map.
• The second pass, called the Gathering phase, is a deterministic inverse simulation. The photon map from the first
pass is used to compute local radiance.
At the end of the simulation, a noise map is generated and can be reused for future simulations.
Simulation Results
As the first pass corresponds to a Monte Carlo direct simulation, photons are randomly drawn. As a result, photon
deposition on scene parts differs from one simulation to another, implying different photometric results in
localized measurements, as shown below.
For example, considering a 10x10 mm² ellipsoid measurement area, and running 20 simulations, a 2.3 cd/m² standard
deviation is obtained on this measurement series.
Related information
Deterministic Calculation Properties without Photon Map on page 386
This page describes the parameters to set when wanting to create a deterministic inverse simulation without
generating a photon map.
Deterministic Calculation Properties with Photon Map on page 389
This page describes the parameters to set when wanting to create a deterministic inverse simulation while generating
a photon map.
Algorithm
Note: A simulation without photon map avoids noise but does not manage the diffuse inter-reflections.
Only color is taken into account.
Propagation Properties
Ambient Sampling
This parameter defines the sampling. The sampling corresponds to the quality of the ambient source. The greater
the value, the better the quality of the result, but the longer the simulation. The following table gives an idea of
the balance between quality and time.
Note: A default value could be 20 and a value for good results could be 100.
• Ambient Sampling = 20 (default value): Reference Time / 3
• Ambient Sampling = 100: Reference Time
• Ambient Sampling = 500: Reference Time x 4
Anti-Aliasing
The anti-aliasing option allows you to reduce artifacts such as jagged profiles and helps refine details. However, this
option tends to increase simulation time.
• Anti-aliasing deactivated (default value): Reference Time / 2
• Anti-aliasing activated: Reference Time
Related information
Monte Carlo Calculation Properties on page 380
This page describes the Monte Carlo Calculation Properties to set when creating an inverse simulation.
Deterministic Calculation Properties with Photon Map on page 389
This page describes the parameters to set when wanting to create a deterministic inverse simulation while generating
a photon map.
Note: A simulation with a built photon map generates map noise but manages diffuse inter-reflections.
It is safe to use photon maps when surfaces are Lambertian, diffuse or specular.
It is not safe to use photon maps when surfaces are Gaussian with a small FWHM angle.
Photon Mapping
The Photon Mapping is a luminance algorithm used to take into account multiple diffuse inter-reflections. In the
present case, it is a two-pass algorithm.
• The first step of photon mapping is a photon propagation phase. A first pass is done using a Monte Carlo direct
simulation to send photons from sources into the scene. Photons are then stored in a map.
• The second pass is a deterministic inverse simulation and is called the Gathering phase. The photon map from
the first pass is used to compute local radiance.
Note: If you need more information on Photon Mapping, see Understanding Photon Mapping for a
Deterministic Simulation .
Algorithm
Three modes are available with Photon Mapping:
The parameters to set may vary depending on the option selected. When loading an existing map, fewer parameters
need to be set.
Note: The parameters described in the following sections correspond to the Build photon map mode.
Propagation Properties
Ambient Sampling
This parameter defines the sampling. The sampling corresponds to the quality of the ambient source. The greater
the value, the better the quality of the result, but the longer the simulation. The following table gives an idea of
the balance between quality and time.
Note: A default value could be 20 and a value for good results could be 100.
• Ambient Sampling = 20 (default value): Reference Time / 3
• Ambient Sampling = 100: Reference Time
• Ambient Sampling = 500: Reference Time x 4
Anti-Aliasing
The anti-aliasing option allows you to reduce artifacts such as jagged profiles and helps refine details. However, this
option tends to increase simulation time.
• Anti-aliasing deactivated (default value): Reference Time / 2
• Anti-aliasing activated: Reference Time
Max Neighbors
Max neighbors represents the number of photons from the photon map taken into account to calculate the luminance.
(Illustrations: infinite max search radius vs. max search radius equal to the depth of the walls.)
The following example describes the effect of a too large max search radius on simulation results.
Note the white dots on the right illustration. They result from an unbalanced relationship between max search
radius and max neighbors.
For a given max neighbors value, if the max search radius is too small, the sensor does not collect enough neighbors
and generates a noisy result.
Vice versa, if the max search radius is fixed and the max neighbors value is too high, the sensor gathers all the
neighbors but there is not enough information in the defined search area.
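The interplay between max neighbors and max search radius can be pictured with the classical photon-map density estimate. The sketch below is an illustrative implementation of that estimate, not the Speos internals:

    import math

    def estimate_luminance(point, photon_map, max_neighbors, max_search_radius):
        # photon_map: list of (position, power) tuples, position being (x, y, z).
        neighbors = sorted(
            (p for p in photon_map if math.dist(p[0], point) <= max_search_radius),
            key=lambda p: math.dist(p[0], point),
        )[:max_neighbors]
        if not neighbors:
            return 0.0  # radius too small: not enough neighbors -> noisy result
        radius = math.dist(neighbors[-1][0], point)  # radius actually gathered
        if radius == 0.0:
            return 0.0
        total_power = sum(power for _, power in neighbors)
        # Density estimate over the gathering disk.
        return total_power / (math.pi * radius * radius)

If the radius is capped too low, the estimate runs out of neighbors and becomes noisy; if it is too large for the requested number of neighbors, the estimate averages over too wide an area, producing the artifacts described above.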
Final Gathering
Note: Diffuse transmission is not taken into account with Final gathering.
This option allows you to exploit the secondary rays and not only the primary impacts.
This algorithm produces better results but is much slower to compute.
• Final gathering max neighbor allows you to control the number of neighbors used after the secondary rays. The
neighbors are used to compute the luminance for each split ray.
• Splitting Number allows you to set the number of split rays.
Note: Fast Transmission Gathering does not apply to 3D Texture, Polarization Plate and Speos Component
Import.
Related information
Understanding Photon Mapping for a Deterministic Simulation on page 384
The Deterministic simulation is fast and targeted but is best suited to analyze simple optical paths. When contributions
and optical interactions are multiple, you should use Monte Carlo algorithm or generate a photon map.
Deterministic Calculation Properties without Photon Map on page 386
This page describes the parameters to set when wanting to create a deterministic inverse simulation without
generating a photon map.
Note: For same values of meshing, meshing results can be different between the CAD platforms in which
Speos is integrated.
Note: In Parasolid mode, in case of a thin body, make sure to apply a fixed meshing sag mode and a meshing
sag value smaller than the thickness of the body. Otherwise you may generate incorrect results.
Creating a meshing on an object, a face or a surface allows you to mobilize and concentrate computing power on
one or certain areas of a geometry to obtain a better level of detail in your results. In a CAD software, meshing helps
you to subdivide your model into simpler blocks. By breaking an object down into smaller and simpler pieces such
as triangular shapes, you can concentrate more computing power on them, and therefore improve the quality of
your results. During a simulation, it will no longer be one single object that interprets the incoming rays but a
multitude of small objects.
Warning: If you created a file in version 2021 R1, then migrated to 2021 R2 and changed the values for Sag
/ Step type (when it became Proportional to Body size), these values may not be correct in 2022 R2 when
the document is migrated back to Proportional to Face size. There is no way to know that the values were
changed across versions.
Note: From 2022 R2, the new default value is Proportional to Face size. Selecting between Proportional
to Face size and Proportional to Body size may slightly affect the result according to the elements meshed.
Note: When setting the meshing to Proportional to Face size, the results may return more faces than with
Proportional to Body size. These additional faces should be very small and should not influence the ray
propagation.
Note: When running a simulation for the first time, Speos caches meshing information if the Meshing mode
is Fixed or Proportional to Body size. This way, when you run a subsequent simulation and you have not
modified the Meshing mode, the initialization time may be a bit faster than the first simulation run.
Sag Tolerance
The sag tolerance defines the maximum distance between the geometry and the meshing.
By setting the sag tolerance, the distance between the meshing and the surface changes. A small sag tolerance
creates triangles that are smaller in size and generated closer to the surface. This will increase the number of triangles
and potentially computation time. A large sag tolerance will generate looser triangles that are placed farther from
the surface. A looser meshing can be used on objects that do not require a great level of detail.
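To get a feel for how the sag value drives triangle size, here is a back-of-the-envelope calculation based on standard chord/sagitta geometry (not a Speos formula): on a circular face of radius R, a sag tolerance s allows mesh edges up to 2·sqrt(2Rs - s²) long.

    import math

    def max_chord_for_sag(radius_mm, sag_mm):
        # Longest chord (mm) of a circular arc whose sagitta stays below sag_mm.
        return 2.0 * math.sqrt(2.0 * radius_mm * sag_mm - sag_mm ** 2)

    # Example: a face of 100 mm radius meshed with a fixed sag tolerance
    print(max_chord_for_sag(100.0, 0.1))   # ~8.94 mm edges
    print(max_chord_for_sag(100.0, 0.01))  # ~2.83 mm edges (about 3x finer)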
Note: If the Meshing sag value is too large compared to the body size, Speos recalculates it with a Meshing
sag value equal to the body size divided by 128 to better correspond to the body size.
Step Tolerance
The step tolerance defines the maximum length of a mesh segment.
Note: In the Parasolid modeler, for a Heavyweight body, the Meshing step value precision decreases when
applying a value below 0.01 mm.
A small maximum step size generates triangles with smaller edge lengths. This usually increases the accuracy of the
results.
A greater maximum step size generates triangles with bigger edge lengths.
Angle Tolerance
The angle tolerance defines the maximum angle tolerated between the normals of the tangents formed at each end
of a segment.
Related information
Adjusting Interactive Simulation Settings on page 360
This page describes the simulation settings that you can adjust to customize your interactive simulation.
Adjusting Direct Simulation Settings on page 368
This page describes the simulation settings that you can adjust to customize your direct simulation.
Adjusting Inverse Simulation Settings on page 376
This page describes the simulation settings that you can adjust to customize your inverse simulation.
Note: When a 3D Texture support is tangent to another body, rays intersecting the 3D Texture are set in
error. The HTML report indicates the percentage of rays in error with the "3D Texture support body tangent
to another body" entry.
Note: You must correctly consider the design intent by setting the same or different SOP between the
two interfaces.
D: distance between the two projected impacts of the incoming ray on two faces (projection without any shift).
If D is less than the Geometrical distance tolerance, then the two faces are tangent (a % error is added to the Speos
report). If D is greater, then the faces are not tangent.
Note: It is not possible to manage geometries that are smaller than a nanometer (1e-6 mm). However, we
do not recommend setting a value smaller than 1e-3 mm, as this value is small enough in most cases.
To correctly set the Geometrical Distance Tolerance (GDT) option, the following criteria must be fulfilled:
• Two faces are tangent if the distance between the intersection points of the two faces is smaller than the Geometrical
Distance Tolerance.
• The Geometrical Distance Tolerance must be much larger than the meshing tolerance: GDT >> sag(Body1) +
sag(Body2) (see figure below).
• Speos tangent faces management can handle only two tangent faces. The Geometrical Distance Tolerance must
be smaller than the smallest thickness of the bodies.
• Make sure that the distance from sources or sensors to geometries is greater than the Geometrical Distance Tolerance.
When two faces are very close (closer than the Geometrical Distance Tolerance) with a big angle between them, the
faces are not considered as tangent, which may cause propagation errors.
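These criteria can be condensed into a small sanity-check helper (illustrative only; the function names are hypothetical):

    def faces_tangent(d_mm, gdt_mm):
        # Two faces are considered tangent when the distance D between the two
        # projected impacts is smaller than the Geometrical Distance Tolerance.
        return d_mm < gdt_mm

    def gdt_is_consistent(gdt_mm, sag_body1_mm, sag_body2_mm, min_thickness_mm,
                          margin=10.0):
        # Recommended ordering: sag(Body1) + sag(Body2) << GDT < smallest body thickness.
        # 'margin' encodes the "much larger" requirement and is an assumed factor.
        return (gdt_mm > margin * (sag_body1_mm + sag_body2_mm)
                and gdt_mm < min_thickness_mm)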
Related information
Adjusting Interactive Simulation Settings on page 360
This page describes the simulation settings that you can adjust to customize your interactive simulation.
Adjusting Direct Simulation Settings on page 368
This page describes the simulation settings that you can adjust to customize your direct simulation.
Adjusting Inverse Simulation Settings on page 376
This page describes the simulation settings that you can adjust to customize your inverse simulation.
Note: It is not recommended to change the default smart engine value for a classical use of Speos.
However, in some cases when memory use is critical due to huge geometries (complete cockpit, cabin, car
or building), this value must not exceed 9 and can be reduced in order to save memory.
In other cases, when a simulation contains small detailed geometries inserted in a big scene (detail of a
headlamp bulb placed in a simulation with a 50 m long road geometry), this value can be increased to reach
better performance.
Using the smart engine parameter becomes interesting when sending a large number of rays. As an
example, it is not the case for a Light Modeling interactive simulation with around 100 rays, but it is the case
for a Digital Vision and Surveillance interactive simulation with around 300k rays.
Related information
Adjusting Interactive Simulation Settings on page 360
This page describes the simulation settings that you can adjust to customize your interactive simulation.
Adjusting Direct Simulation Settings on page 368
This page describes the simulation settings that you can adjust to customize your direct simulation.
Adjusting Inverse Simulation Settings on page 376
This page describes the simulation settings that you can adjust to customize your inverse simulation.
10.6.4. Dispersion
Dispersion refers to the chromatic dispersion of visible light based on the Snell-Descartes law.
Note: The Dispersion option is only available for Inverse and Direct simulations. In Interactive simulation,
the Dispersion option is always activated (and so hidden in the interface).
Enabling the Dispersion option activates the dispersion calculation. In optical systems in which the dispersion
phenomena can be neglected, the colorimetric noise is reduced by deactivating the Dispersion option.
Dispersion influences the Monte Carlo algorithm in terms of the number of rays generated per pass in Inverse
simulation. It increases the number of rays in Inverse simulation compared to when the Dispersion option is deactivated.
Important: When a dispersive material is present in the optical system, we highly recommend activating
the dispersion.
Refraction Calculation
From version 2023 R2, in order for both the CPU and the GPU simulations to converge towards the same result,
Speos uses a new algorithm to calculate the refraction of a ray.
Note: In case of a Direct Simulation, activating or deactivating Dispersion for a Ray File Source has no effect
as each ray of the source is composed of only one wavelength.
Both algorithms generate colorimetric and photometric noises. Comparing them, you can observe that:
• When activating Dispersion, noise has a more colorimetric nature.
• When deactivating Dispersion, noise has a more photometric nature.
In terms of rendering, noise obtained without dispersion is visually more appealing than the one with dispersion.
Noise
Note: In case of a Radiometric sensor, no difference is made: Speos takes the entire spectral range in both
CPU and GPU simulations.
• Drawbacks
º There is a sensor dependency, as the algorithm takes the spectral range of the sensors.
Example: let's take a sensor of [400 ; 700] spectral range. In simulation, the spectral range used will be the sensor
one in [400 ; 700]. If you add another sensor in Infrared for example [800 ; 1000], then both sensors will influence
each other and the spectral range used in simulation will be [400 ; 1000].
Example
Let's take a simulation with a source containing a *.spectrum file ranged from 400nm to 1000nm. The colorimetric
sensor wavelength integration is defined between 400nm and 700nm.
• The CPU Direct Simulation will launch rays from all the spectral range of the source ([400nm ; 1000nm]). When
passing through the dispersive material, each ray is refracted with the refraction angle of a wavelength randomly
drawn from the spectral range [400 ; 1000nm]. If the ray hits the sensor, the sensor integrates the ray only with
the wavelengths included in the spectral range of the sensor [400 ; 700].
• The GPU Direct Simulation will launch rays only from the spectral range of the sensor ([400nm ; 700nm]). When
passing through the dispersive material each ray is refracted with the refraction angle of a wavelength randomly
drawn from the spectral range [400 ; 700nm]. As the spectral range of the sensor is used for both the wavelength
selection of the refraction angle and the integration on the sensor, all rays are integrated in the sensor, generating
a low-noise result.
With this algorithm, a CPU simulation should spread refraction more than a GPU simulation due to the different
spectral range.
Note: The following example uses a sensor whose size is large enough to integrate the rays with all refraction
angles of the source's wavelength spectral range. If the sensor size were much smaller, the spread and noise
would be different, as some rays would not hit the sensor due to their refraction angle.
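The CPU/GPU difference in this example can be sketched as follows (an illustrative pseudo-propagation, not the Speos engine): the CPU draws the dispersion wavelength from the source spectrum while the GPU draws it from the sensor range, so only part of the CPU rays end up integrated by the sensor.

    import random

    SOURCE_RANGE = (400.0, 1000.0)  # *.spectrum file range of the source (nm)
    SENSOR_RANGE = (400.0, 700.0)   # colorimetric sensor integration range (nm)

    def draw_wavelength(engine):
        # CPU draws from the source spectral range, GPU from the sensor range.
        lo, hi = SOURCE_RANGE if engine == "CPU" else SENSOR_RANGE
        return random.uniform(lo, hi)

    def integrated_ratio(engine, n_rays=100_000):
        # Fraction of rays whose wavelength falls inside the sensor range.
        hits = sum(SENSOR_RANGE[0] <= draw_wavelength(engine) <= SENSOR_RANGE[1]
                   for _ in range(n_rays))
        return hits / n_rays

    print(integrated_ratio("CPU"))  # ~0.5: half the rays fall outside [400; 700] nm
    print(integrated_ratio("GPU"))  # 1.0: every ray is integrated -> low-noise result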
10.6.5. Weight
This page provides advanced information on the Weight parameter.
Note: It is highly recommended to set this parameter to True, except in interactive simulations.
Deactivating this option is useful to understand certain phenomena such as absorption.
Specific scenarios
According to the configuration, activating the weight parameter may have some impact on the simulation calculation.
The following configurations illustrate the behavior of the rays depending on whether the weight has been activated
or not.
• Ray/Face interaction: consider rays reaching an optical surface with 50% reflectivity.
º If you do not activate weight, rays have a 50% probability of being reflected.
º If you do activate weight, all the rays are reflected with 50% of their initial energy.
Tip: In practice, using weight in simulation improves the results' precision, as more rays with contributing
energy reach the sensors. To get the same number of rays on the sensors without the Weight parameter,
you need to set more rays in the simulation, which tends to increase simulation time.
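The two behaviors can be contrasted with a minimal Monte Carlo sketch (illustrative only, not the Speos engine):

    import random

    REFLECTIVITY = 0.5  # optical surface with 50% reflectivity

    def interact_without_weight(ray_energy):
        # Probabilistic: the whole ray survives reflection with 50% probability.
        return ray_energy if random.random() < REFLECTIVITY else 0.0

    def interact_with_weight(ray_energy):
        # Weighted: every ray is reflected, carrying 50% of its initial energy.
        return ray_energy * REFLECTIVITY

Averaged over many rays, both converge to the same reflected energy, but the weighted version keeps every ray contributing to the sensors.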
(Illustrations: direct simulation result with weight activated vs. deactivated.)
Weight Deactivation
Deactivating the weight is useful in two specific cases.
1. When you analyze phenomena such as absorption. Considering a material with absorption, here is the observation
of the absorbed rays using an interactive simulation.
(Illustrations: interactive simulation result with weight activated vs. deactivated.)
2. When you want to reduce simulation time in highly reflective systems. Let us consider an integrating sphere with
a light source and a sensor inside of it.
The surface inside the sphere has a high reflectivity value. The system is set so the sensor is protected from direct
illumination from the light source.
In this context, activating the Weight would considerably extend the simulation time.
When weight is activated, the simulation time corresponds to 1747.
When weight is not activated, the simulation time corresponds to 440.
This difference is due to the fact that low-energy rays are still propagating after several bounces in the system in
simulations using weight, whereas the probability that a ray keeps propagating decreases with each bounce it makes
in simulations not using weight.
Related information
Adjusting Direct Simulation Settings on page 368
This page describes the simulation settings that you can adjust to customize your direct simulation.
Adjusting Inverse Simulation Settings on page 376
This page describes the simulation settings that you can adjust to customize your inverse simulation.
10.7. LiDAR
LiDAR is a remote sensing technology using pulsed laser light to collect data and measure the distance to a target.
LiDAR sensors are used to develop autonomous driving vehicles.
LiDAR Principle
A LiDAR is a device measuring the distance to a target by sending pulsed laser light. It works on the principle of a
radar but uses light instead of radio waves.
A LiDAR is composed of a light source (an emitter) and a sensor (a receiver).
The emitter illuminates a target by sending pulsed laser light and evaluates the distance to that target based on the
time the reflected pulse took to hit the receiver.
The ray sent by the LiDAR source (emitter channel) interacts with a geometry or vanishes in the environment.
The interaction is managed by the optical properties of the target geometry. A part of the ray energy is reflected
towards the LiDAR sensor (receiver channel). This energy contributes to the signal of a pixel. After all contributions
are integrated, Speos models the Raw time of flight for each pixel. The Raw time of flight expresses the temporal
power on a pixel. A specific power is integrated in each pixel for a given distance.
In the case of static LiDAR simulation, the time of flight is modeled for each pixel of the sensor with one pixel
corresponding to one channel in the result file.
In the case of a scanning or rotating simulation, the recording of data is slightly different. Indeed, the number of
channels is no longer equal to the number of pixels but to the number of beams. As the beams are sent at different
times with a very short interval (of microseconds, µs), a one-pixel sensor can be used and will have a different time
of flight value for each time.
At the end of the optical simulation, Speos applies a signal post-processing to interpret the Raw time of flight signal
as a distance. The post-processing identifies the time when the maximum power is received by a pixel and computes
the distance based on this time and the speed of light.
• Raw time of flight signal: power arriving on the sensor, saved in a binary format (*.OPTTimeOfFlight).
• Map of depth: LiDAR-to-object distance measured for each pixel, saved in an .xmp map.
Related concepts
Understanding LiDAR Simulation Results on page 415
This section gathers the different types of results that can be obtained from a LiDAR simulation.
Related information
Creating a LiDAR Simulation on page 412
Creating a LiDAR simulation allows you to generate output data and files that enable you to analyze a LiDAR system
and configuration. The LiDAR simulation supports several sensors at a time.
Warning: From version 2022 R2, the Timeline Start and End parameters have been improved to accept
time values below 1 ms. This change impacts your custom scripts, which must be modified accordingly,
as the previous time format is no longer supported.
Time Consideration
LiDAR simulation considers all the timestamps of the complete acquisition sequence described in the scanning and
rotation sequence files.
Starting from Timeline Start, time will increase by considering the timestamps.
Once the acquisition sequence described in the LiDAR sensor is complete, a new sequence begins from the Timestamp
min.
The loop lasts until Timeline End is reached.
Note: For more information about Rotation and Scanning Sequence File, refer to Firing Sequence Files on
page 228
Trajectory Consideration
At each timestamp, the global time is used to position and orientate features using a Trajectory file.
A linear interpolation is done between two trajectory samples.
Note: The number of samples needs to reflect the shape of the trajectory. For instance, few samples are
necessary for a linear trajectory. On the other hand, the number of samples needs to be high enough for a
curvy trajectory.
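As a minimal sketch of this interpolation (the (timestamp, (x, y, z)) sample format is hypothetical):

    def interpolate_position(t, sample_a, sample_b):
        # Linearly interpolate an object's position between two trajectory samples.
        t0, p0 = sample_a
        t1, p1 = sample_b
        alpha = (t - t0) / (t1 - t0)  # 0.0 at sample_a, 1.0 at sample_b
        return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

    # Position at t = 0.25 s between samples taken at 0.0 s and 1.0 s:
    print(interpolate_position(0.25, (0.0, (0.0, 0.0, 0.0)), (1.0, (4.0, 0.0, 0.0))))
    # -> (1.0, 0.0, 0.0)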
2. In the 3D view, click the selection tool to select the geometries for which you want to calculate the distance to
the LiDAR.
Your selection appears in the Geometries list.
Tip: You can select elements from the 3D view or directly from the tree.
To deselect an element, click back on it.
To rapidly select all the faces of a same element, click one face of this element and press CTRL+A.
3. In the 3D view, click the selection tool and, from the tree, select the previously created LiDAR sensor(s).
Your selection appears in the Sensors list.
Note: When selecting several sensors in one LiDAR simulation, the sensors do not illuminate each other
and their contributions are not merged. The simulation will produce individual results for each sensor
selected.
4. In Stop conditions, define the criteria to reach for the simulation to end:
• To stop the simulation after a certain number of rays were sent, set On number of rays limit to True and define
the number of rays. The number can be larger than 2 giga rays.
• If you are working with static LiDARs and want to stop the simulation after a certain duration, set On duration
limit to True and define a duration.
Note: If you activate both criteria, the first condition reached ends the simulation.
If you select none of the criteria, the simulation ends when you stop the process.
5. In Source grid sampling, define the number of samples used to calculate the field of views.
6. In Sensor pixel grid sampling, define the number of samples used along each pixel side to calculate the field
of views.
7. In Ambient material, browse a .material file if you want to define the environment in which the light will propagate
(water, fog, smoke etc.).
The ambient material allows you to specify the media that surrounds the optical system. Including an ambient
material in a simulation brings realism to the optical result.
The .material file is taken into account for simulation.
Note: If you do not have the .material file corresponding to the media you want to use for simulation,
use the User Material Editor, then load it in the simulation.
8. If you want to consider time during the simulation, set Timeline to True.
Note: This allows dynamic objects, such as Speos Light Box Import and Scanning or Rotating LiDAR
Sensors, to move along their defined trajectories and consider different frames corresponding to the
positions and orientations of these objects in simulation.
Note: For more information on Timeline, refer to Understand LiDAR Simulation with Timeline.
9. From the Results section, filter the results you want to generate by setting them to True or False:
Note: The Fields of view and Map of depth result types are only available for static LiDARs.
• If you want a visualization of the source, sensor and LiDAR fields of view to be displayed in the 3D view after
simulation, activate Fields of view.
• If you want a map of depth to be generated after simulation, activate the corresponding option.
• If you want to generate a Raw time of flight (.OPTTimeOfFlight) result file, activate the corresponding option.
Note: If you need more information about grid sampling or results, see Understanding LIDAR Simulation
Results.
10. If Timeline is set to True, from the Timeline section, define the Start and End times.
Warning: From version 2022 R2, the Timeline Start and End parameters have been improved to accept
time values below 1 ms. This change impacts your custom scripts, which must be modified accordingly,
as the previous time format is no longer supported.
CAUTION: When the Number of rays (corresponding to the number of rays per pulse) is low, the Rotating
and Scanning LiDAR simulation progress bar completes before the actual end of the simulation.
The simulation is created and results appear both in the tree and in the 3D view.
If Timeline is activated, the Raw time of flight simulation result will contain all the timestamps of the simulation.
When running a Rotating LiDAR Simulation with Speos HPC Compute, each thread sends one ray per pulse.
Related concepts
Understanding LiDAR Simulation Results on page 415
This section gathers the different types of results that can be obtained from a LiDAR simulation.
Related information
Understanding LiDAR Simulation on page 408
This page gives a global presentation on LiDAR principles and simulation.
Field of View
The Fields of view allow you to project in the 3D view a visualization grid of the source, sensor and LiDAR fields of
view (the LiDAR field of view being the overlap of the source and the sensor fields of view).
Note: Projected grids are generated for solid-state LiDAR simulations only (LiDAR simulations using a Static
LiDAR sensor).
You can edit the visualization of the LiDAR Projected Grids.
You can export the projected grids as geometry to convert them into construction lines.
• The Source Field of View represents the area illuminated by the source.
• The Sensor Field of View represents the area observed by the sensor.
• The LiDAR Field of View is the overlap of the Source and Sensor Fields of View.
Grid Sampling
The quality of the projected grid depends on the source and sensor sampling. With a coarse sampling, you
may miss some thin geometries or get an inaccurate grid on geometries' edges.
Figure 55. Poor sampling level - The pixel corner 2 does not intersect any geometry. The
last edge is not drawn.
Figure 56. Intermediate sampling level - The sub-pixel’s sample intersects the ground. An
edge is drawn, but passing through the object.
Figure 57. High sampling level - The sub-pixel’s sample intersects the ground and the object.
The displayed pixel edge takes into account the ground and the object intersection.
Note: Only samples having an intensity higher than the defined threshold are taken into account.
Note: Angles of lower and upper bound of the Field of View can change with the sampling.
Depending on the sampling and shape of the intensity, this difference can be noticeable on the projected grid results.
Related information
Creating a LiDAR Simulation on page 412
Creating a LiDAR simulation allows you to generate output data and files that enable you to analyze a LiDAR system
and configuration. The LiDAR simulation supports several sensors at a time.
Note: Depth maps are generated for solid-state LiDAR simulations only.
The Map of depth is an extended map (*.xmp) that saves, for each pixel, the distance from the LiDAR to the detected
object. This distance is extracted from the raw signal by catching the maximum peak position detected in every
pixel. The closeness to objects is expressed with a color scale.
In the illustration below, the smaller the distance from the LiDAR, the closer the color is to blue.
Related information
Creating a LiDAR Simulation on page 412
Creating a LiDAR simulation allows you to generate output data and files that enable you to analyze a LiDAR system
and configuration. The LiDAR simulation supports several sensors at a time.
Description
The *.OPTTimeofFlight is a result file of a LiDAR simulation used to store the raw time of flight of each pixel of the
LiDAR sensor.
You can use this file to read and export specific data, or post-process it to output results such as a 3D representation
of the scene in the form of a point cloud (impacts collected during simulation).
This file is a compressed binary file that can store large amounts of data but can only be accessed through APIs.
Principle
The Raw Time of Flight essentially represents the temporal power of a pixel expressed in Watts. It describes the
time interval between the emission of the light pulse and its detection after being reflected by an object in the scene.
Then, through data conversion, the LiDAR-to-object distance is derived from this time of flight.
Important: Speos LiDAR Simulation has been designed so that the emitter (source) and receiver (sensor)
of the LiDAR system are located at the same place (meaning the distance between them is equal to or less
than the Spatial accuracy parameter of the LiDAR Sensor). The distance traveled by a ray from the source
to the sensor corresponds to twice the distance between the sensor and the target.
In Speos, the raw time of flight data correspond to the optical power integrated in the pixel for a given distance.
This distance depends on the Spatial accuracy of the sensor, that is its discrimination step between two measurement
points (in mm).
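Since the emitter and receiver are co-located, converting a detected peak time into a LiDAR-to-object distance is a simple halving of the round trip. A minimal sketch:

    SPEED_OF_LIGHT_MM_PER_S = 2.99792458e11  # c expressed in mm/s

    def time_of_flight_to_distance_mm(peak_time_s):
        # The ray travels twice the sensor-to-target distance.
        return SPEED_OF_LIGHT_MM_PER_S * peak_time_s / 2.0

    print(time_of_flight_to_distance_mm(66.7e-9))  # ~9998 mm, i.e. a target ~10 m away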
Data Storage
CAUTION: The simulation can take time according to how you configured the LiDAR sensor, due to the
memory needed to generate a large amount of data. To get an idea of the amount of data to be generated,
compute: number of time samples × number of pixels × number of scan configurations × number of
rotations.
Note: The number of pixels (corresponding to the resolution) is set according to the Horizontal and Vertical
pixels parameters of the Sensor Imager.
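As a hedged order-of-magnitude example of that operation (all values below are purely illustrative):

    # number of time samples * number of pixels * number of scan configurations
    # * number of rotations
    time_samples = 2_000
    pixels = 32 * 1024           # Horizontal x Vertical pixels of the Sensor Imager
    scan_configurations = 16
    rotations = 10
    print(time_samples * pixels * scan_configurations * rotations)  # ~1.05e10 samples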
To extract or post process the raw data stored in the compressed binary file, you must parse the result file using
scripts and dedicated APIs.
Related tasks
Accessing the OPTTimeOfFlight File on page 426
This procedure describes how to read an *.OPTTimeofFlight result file through scripts.
Related reference
List Of Methods on page 421
This page describes the methods that should be used to access the data stored in the *.OPTTimeOfFlight file generated
as an output of a LiDAR simulation.
Tip: Use script examples as a starting point. The provided scripts are directly compatible with Speos and
contain all the methods described below.
Basic Functions
• OpenFile: opens the OPTTimeOfFlight file and loads its content.
Syntax: object.OpenFile(BSTR bstrFileName) as Boolean
º Object: Raw Time Of Flight File Editor
º bstrFileName: path and filename (should end with .OPTTimeOfFlight)
Axis system
• GetSourceOrigin: returns the (x, y, z) coordinate corresponding to the origin of the source.
Syntax: object.GetSourceOrigin() as Variant
º Object: Raw Time Of Flight File Editor
• GetSourceXDirection: returns the (x, y, z) vector corresponding to the X direction of the source.
Syntax: object.GetSourceXDirection() as Variant
º Object: Raw Time Of Flight File Editor
• GetSourceYDirection: returns the (x, y, z) vector corresponding to the Y direction of the source.
Syntax: object.GetSourceYDirection() as Variant
º Object: Raw Time Of Flight File Editor
Operating range
• GetSensorMinRange: returns the minimum range of the sensor.
Syntax: object.GetSensorMinRange() as Double
º Object: Raw Time Of Flight File Editor
Sensor
• GetSensorFocal: returns the sensor's focal length (in mm).
Syntax: object.GetSensorFocal() as Double
º Object: Raw Time Of Flight File Editor
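Putting these methods together, a minimal reading sketch could look as follows. How the Raw Time Of Flight File Editor object is instantiated depends on the scripting environment, so the helper used below is hypothetical (the provided script examples show the actual entry point); the method calls themselves are the ones listed above.

    # 'get_raw_time_of_flight_file_editor' is a hypothetical helper, not a Speos API.
    editor = get_raw_time_of_flight_file_editor()

    if not editor.OpenFile(r"C:\results\my_lidar.OPTTimeOfFlight"):
        raise IOError("Could not open the OPTTimeOfFlight file")

    origin = editor.GetSourceOrigin()       # (x, y, z) origin of the source
    x_dir = editor.GetSourceXDirection()    # (x, y, z) X direction of the source
    y_dir = editor.GetSourceYDirection()    # (x, y, z) Y direction of the source
    min_range = editor.GetSensorMinRange()  # minimum range of the sensor
    focal = editor.GetSensorFocal()         # sensor focal length (mm)

    print(origin, x_dir, y_dir, min_range, focal)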
Related concepts
Raw Time of Flight on page 418
This page provides more information on the Raw Time of Flight (*.OPTTimeOfFlight) file and describes how to
operate/analyze its content.
Related tasks
Accessing the OPTTimeOfFlight File on page 426
This procedure describes how to read an *.OPTTimeofFlight result file through scripts.
Note: Scripts are provided in IronPython language and, when possible, in Python language. With Python
scripts, make sure to use Python 3.9 version.
• Static_LiDAR
This script allows you to parse the .OPTTimeOfFlight result in order to export the coordinates of the impacts and
their associated power and distance into a *.txt file.
• Scanning_Rotating
This script allows you to parse the .OPTTimeOfFlight result in order to extract the following parameters:
º Get (scanning, rotating) or calculate (static) the line of sight vector.
º Find the peak at 0.1% of the maximum value or obtain the list of peaks above the minimum signal level (performed
with the isSinglePeak parameter).
º Export the x, y, z coordinates and the energy value for a given distance in a text file.
• Draw_point_cloud
This script includes the same methods as the Scanning_Rotating script but uses the point cloud export to project
it directly in the 3D view. 3D points with a distance-related color scale (from closest in red, to far in blue) are
projected in the 3D view to easily visualize the impacts of the LiDAR sensor.
Related reference
List Of Methods on page 421
This page describes the methods that should be used to access the data stored in the *.OPTTimeOfFlight file generated
as an output of a LiDAR simulation.
Note: You can also write your script in IronPython or Python language from scratch using dedicated
APIs.
2. In Speos, right-click in the Groups panel and click Create Script Group.
3. Right-click the newly created script group and click Edit Script to open the script in the command interpreter.
4. From the script editor, click the browse button and load a .scscript or .py script file.
5. To improve performance, make sure the script editor is not in Debug but in Run mode.
6. Click Run.
Depending on your script configuration, different outputs are generated in the SPEOS output files folder and/or
visualizations are generated in the 3D view.
Related concepts
Raw Time of Flight on page 418
This page provides more information on the Raw Time of Flight (*.OPTTimeOfFlight) file and describes how to
operate/analyze its content.
Connection
With grid connection parameters, you can connect two adjacent samples of the grid that do not belong to the same
body.
To connect two adjacent samples, they need to fulfill one of the two parameters Min distance tolerance (mm) or
Max incidence (deg):
• The parameter Min distance tolerance (mm) has priority over the parameter Max incidence (deg).
• If the two adjacent samples do not fulfill the parameter Min. distance tolerance (mm), then Speos checks if they
fulfill the parameter Max incidence (deg).
• The two adjacent samples can fulfill both parameters.
Parameters
• Min distance tolerance (mm): the distance tolerance under which two adjacent samples are connected by a line.
Example: for a Min. distance tolerance of 5 mm, all adjacent samples for which the distance is less than 5 mm are
connected by a line.
• Max incidence: Maximum angle under which two projected samples should be connected by a line. Example: for
a Max. incidence of 85°, if the angle to the normal (normal of the plane of the two pixels) of the farther sample
from the origin is less than 85°, then the two samples are connected by a line.
• Max distance from camera (mm): Maximum distance between a sample and the LiDAR source/sensor. With
maximum distance from camera, you can limit the visualization at a specific distance of the LiDAR source/sensor.
• Authorize connection between bodies: allows you to decide whether to display the connection between bodies
that fulfill one of the parameters (Min. distance tolerance or Max. incidence).
Graduation
With the grid graduations, you can modify the two levels of graduation, Primary step (yellow default color) and
Secondary step (green default color).
To lighten the visualization, we recommend increasing the graduation step parameters when the grid resolution
becomes high.
Note: Setting the graduation steps to zero prevents the display of the grids.
Highlights
These parameters allow you to define four lines to highlight on the grid.
LiDAR Principle
A rotating LiDAR is a distance measuring device sending short and rapid pulses of light in specific angular directions
while rotating to capture information about its surrounding environment. It works on the principle of a radar but
uses light instead of radio waves.
The most obvious advantage of a rotating LiDAR lies in its capacity to cover a 360° field of view. In contrast, Solid
State LiDARs rarely exceed a 120° field of view.
Speos Geometric Rotating LiDAR Simulation allows you to reproduce the behavior of a rotating LiDAR.
Note: The sensor does not consider the geometries' optical properties.
At the end of the simulation, interactions and impacts are displayed in the 3D view to represent the LiDAR field of
view.
To perform a custom field of view study, you can control every aspect of the LiDAR scanning pattern:
• Horizontal field of view: allows you to define the azimuthal range (up to 360°) and sampling of LiDAR's aiming
area.
• Vertical field of view: allows you to define the elevation for each channel of the LiDAR.
Related information
Creating a Geometric Rotating LiDAR Simulation on page 430
Creating a Geometric Rotating LiDAR simulation allows you to perform field of view studies. A field of view study
allows you to quickly identify what can or must be optimized (for example, the number, position and direction of
sensors) in a LiDAR system.
Note: The Geometric Rotating LiDAR simulation supports several sensors at a time.
1. From the Light Simulation tab, click System > Geometric Rotating LiDAR .
2. In the 3D view or from the tree, click the selection tool to select the geometries for which you want to calculate
the distance to the rotating LiDAR.
Your selection appears in the Geometries list.
Note: In some cases, using geometries that have no optical properties applied for simulation can lengthen
the simulation's initialization time.
3. In the 3D view, click the selection tool and, from the tree, select the previously created LiDAR sensor(s).
Your selection appears in the Sensors list.
4. From the Visualization drop-down list, select which type of results you want to display in the 3D view:
• None
• Impacts
• Rays
• Impacts and Rays
The simulation is created and results appear both in the tree and in the 3D view.
Depending on the visualization type selected, impacts and/or rays are displayed in the scene, allowing you to visualize
the LiDAR's field of view and viewing range. The distance from the LiDAR to the detected object is represented with a
color scale.
Related tasks
Creating a Geometric Rotating LiDAR Sensor on page 256
This page shows how to create a Geometric Rotating LiDAR sensor that will be used for LiDAR simulation.
Area of measure applied on an XMP map to display the rays passing through that area
Note: For more information about *.lpf result management, refer to the Light Path Finder Result section.
We recommend setting a LPF max path parameter equivalent to the number of rays propagated in the system.
Otherwise, when LPF max path is much smaller than the number of rays, information will not be integrated
in the analysis.
Warning: The *.lpf file size increases with the number of ray interactions. The more interactions in the
system, the bigger the *.lpf file size.
To avoid such a situation, you can create a small sensor (with few pixels) to cover the area on which you
want to perform a Light Expert analysis with a *.lpf file.
Note: The Multi-Sensors Light Expert Analysis is in BETA mode for the current release.
4. Add the Light Expert Group to the Sensors list of the simulation.
As Light Expert is set to True, the LXP check box of the Light Expert Group is automatically activated.
Note: Only one Light Expert Group can be added to the simulation.
Warning: The *.lpf file size increases with the number of ray interactions. The more interactions in the
system, the bigger the *.lpf file size. To avoid such a situation, you can create a small sensor (with few
pixels) to cover the area on which you want to perform the Light Expert analysis with the *.lpf file.
The simulation will consider the surfaces of the non-closed body as joined during the run. Therefore, the VOP on
Surface will be automatically applied to the non-closed body, which will take the volume property into account.
To consider surfaces as VOP on Surface, create a group of these surfaces.
This mainly allows you to address cases like surface-based windshields.
Note: If a Face Optical Properties material (FOP) is applied on a part of a surface used for the VOP on Surface,
the FOP still applies to that part of the resulting body.
This section describes how to manage result files and reports generated from simulations.
Performance Analysis
• Time analysis: Date and time of the initialization/termination and duration of the simulation.
• Computer Details: list of the computer resources used to run the simulation.
Note: The computer details are useful to assess your needs with Speos HPC.
Power Report
• Number of emitted rays
• Power emitted by the sources corresponds to the user-defined power of the sources, which can be radiometric
or photometric. When the user-defined power is defined in radiometric units, the photometric power is computed
using the source's spectrum watt-to-lumen ratio, and vice versa (see the sketch after this list).
• Generated power corresponds to the power of the rays randomly emitted by the sources using the Monte Carlo
algorithm during the direct simulation.
Radiometric generated power is always equal to the radiometric power emitted by the sources.
Photometric generated power is computed from the individual wavelengths of the Monte Carlo emitted rays.
This value converges towards the photometric emitted power when the number of rays increases.
• Radiated power corresponds to the power of the rays that leave the system and that will not interact with it
anymore.
• Absorbed power corresponds to the power of the rays that are absorbed during the propagation by one element
of the system.
• Error power corresponds to the power of the rays that are lost to propagation errors.
Refer to the Propagation Errors page for more information on the different types of errors.
• Power of stopped rays corresponds to the power of the rays that are stopped during the propagation because
they have reached the Maximum number of surface interactions.
Note: Error power and Power of stopped rays need to be carefully checked after a simulation, as they
need to be as low as possible to avoid biased results.
Important: In case of a Scanning or Rotating LiDAR simulation, Speos computes the sources' total power from
the scanning and rotating pulse parameters contained in the Scanning and Rotating Sequences files:
• For Scanning LiDAR, Speos accumulates the pulses' energy.
• For Rotating LiDAR, Speos takes the value computed for scanning and multiplies it by the number of rotations.
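To make these relationships concrete, here is a minimal sketch assuming the standard 683 lm/W photopic luminous-efficacy convention and hypothetical helper inputs; it is an illustration, not the Speos implementation:

```python
def watt_to_lumen_ratio(spectrum: dict[float, float],
                        v_lambda: dict[float, float]) -> float:
    """Lumens per radiometric watt of a spectrum.

    spectrum: wavelength (nm) -> relative spectral power (hypothetical input).
    v_lambda: wavelength (nm) -> photopic sensitivity V(lambda).
    """
    weighted = sum(power * v_lambda.get(wl, 0.0) for wl, power in spectrum.items())
    return 683.0 * weighted / sum(spectrum.values())

def scanning_source_energy(pulse_energies: list[float]) -> float:
    """Scanning LiDAR: the pulses' energy is accumulated."""
    return sum(pulse_energies)

def rotating_source_energy(pulse_energies: list[float], rotations: int) -> float:
    """Rotating LiDAR: the scanning value multiplied by the number of rotations."""
    return scanning_source_energy(pulse_energies) * rotations

# Example: a monochromatic 1 W source at 555 nm gives about 683 lm.
print(watt_to_lumen_ratio({555.0: 1.0}, {555.0: 1.0}))  # 683.0
```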
Error Report
Total number of errors (only for Direct, Inverse simulations)
Results
Preview of the results and associated details.
• Pixels per body represents the coverage of a body by the pixels of the sensor.
Note: Pixels per body is available only for inverse simulations with radiance sensors or camera sensors.
Geometry Report
• Number of geometries included in the simulation (number of faces/bodies)
• Optical Properties applied to each geometry.
• Ray tracing technique used for the simulation (Smart Engine, Embree).
Simulation Parameters
Simulation Parameters represents all simulation settings (Meshing, FTG, Weight, etc.).
• Comments: user defined comments (only for Interactive, Direct, Inverse simulations)
• CAD Parameters: user defined parameters (for Interactive, Direct, Inverse simulations)
• Design Screenshots: user defined screenshots (for Interactive, Direct, Inverse simulations)
• Statistical Analysis (for Interactive and Inverse simulations)
• Status Details: statistical information about interaction status (for Interactive and Inverse simulations)
• Luminaire Wattage: only if luminaire in simulation (for Direct and Inverse simulations)
• Impacts Report: detail of impact for all rays (for Interactive simulation)
Related information
Interactive Simulation on page 358
The Interactive Simulation allows you to visualize the behavior of light rays in an optical system.
Inverse Simulation on page 372
The Inverse Simulation allows you to propagate a large number of rays from a camera or a sensor to the sources
through an optical system.
Direct Simulation on page 365
The Direct Simulation allows you to propagate a large number of rays from sources to sensors through an optical
system.
To display or hide a result in the 3D view, check or clear the result's check box.
Note: XMP results appear blurry if the resolution of the XMP preview displayed in the 3D view is 128x128 or
under.
XMP results do not appear in the 3D view when an intensity sensor with a Conoscopic orientation is used
for simulation.
Note: The XMP map generated using either the *.OPTDistortion file V1 or V2 is displayed upside-down in
the 3D view and Virtual Photometric Lab to represent the signal measured on the imager of the camera.
The PNG file resulting from the simulation of the Camera Sensor using either the *.OPTDistortion file V1 or
V2 is right side up as it corresponds to the post-processed image.
Note: When you open Virtual 3D Photometric Lab from Speos, Virtual 3D Photometric Lab inherits the default
navigation commands from SpaceClaim.
The navigation commands are:
• spin: middle mouse button
• spin + center definition: middle mouse button when clicking on the mesh
• pan: SHIFT + middle mouse button
• zoom: CTRL + middle mouse button
Note: In some cases, the result preview displayed in the 3D view can differ from the viewer's result. It is
often the case when working with poorly meshed surfaces.
Note: In case of a large XM3 file, we recommend using a powerful GPU for better performance of Virtual
3D Photometric Lab.
Ray file results store all rays passing through the sensor during propagation. They can be reused to describe a light
source emission.
The results are opened with the Ray File Editor.
In a Light Expert analysis of a sensors' group, selecting a layer in one of the XMP maps applies the same layer
to all XMP maps so that the ray path of that layer is displayed on each of them. This explains why the sequences
of the second and subsequent sensors may not be sorted in descending order of energy.
Note: The Ray Length parameter only sets a preview of the length. It has no impact on the length exported
when you use Export Rays as Geometry: the length of the rays exported depends on the optical system and
the simulation parameters.
In Direct Simulation, the Ray Length parameter has different meanings according to the sensor used.
• Irradiance Sensor: it corresponds to the length of the rays after the last impact, located on the sensor.
Irradiance Sensors have a physical location in the system.
• Intensity Sensor: it corresponds to the length of the rays after the last impact on the geometries.
Intensity Sensors have no physical location in the system, radius is only used to size the sensor in the 3D view and
to display the results.
• Radiance Sensor: the Ray Length parameter is not used as the propagation stops when rays are integrated by the
sensor. What you observe is the path to the observer point.
The Light Expert panel is displayed. The Interactive Simulation ray tracing is automatically hidden so that only
the ray tracing of the *.lpf result is displayed in the 3D view.
Note: If other Interactive Simulations are displayed in the 3D view, hide them all so that only the wanted
*.lpf ray tracing is displayed.
Note: When you close the Light Expert panel, the Interactive Simulation ray tracing is automatically
displayed again, regardless of the hide/show status previously defined.
• Click and select the Required faces (the faces that are taken into account to filter rays).
The rays impacting the required faces and reaching the sensor are displayed in the 3D view.
• Click and select the Rejected faces (the faces that you want to exclude from the light trajectory).
The rays reaching the sensor, except those impacting the rejected faces, are displayed in the 3D view.
On the first opening, Virtual Photometric Lab loads the XMP result with the first measure area available in the
Measures tool. On the next openings, the measure area selected (in the Measures tool) upon save is displayed.
The Light Expert panel is displayed.
Note: If necessary, you can hide the XMP result in the 3D view for a better display of the *.lpf ray tracing.
6. If necessary, modify the measure area, shape or position on the map to update the ray tracing preview in the 3D
view.
The ray tracing of the 3D view is adjusted in real-time to illustrate the light path from the sources to the sensor.
7. If you want to filter the rays displayed in the 3D view:
• Click and select the Required faces (the faces that are taken into account to filter rays).
The rays impacting the required faces and reaching the sensor are displayed in the 3D view.
• Click and select the Rejected faces (the faces that you want to exclude from the light trajectory).
The rays reaching the sensor, except those impacting the rejected faces, are displayed in the 3D view.
Note: If you need more information on the Light Expert tool, see Light Expert.
Note: When you open Virtual 3D Photometric Lab from Speos, Virtual 3D Photometric Lab inherits the
default navigation commands from SpaceClaim.
The navigation commands are:
• spin: middle mouse button
• spin + center definition: middle mouse button when clicking on the mesh
• pan: SHIFT + middle mouse button
• zoom: CTRL + middle mouse button
6. Modify the measure area, shape or position on the map to update the ray tracing preview in the 3D view.
The ray tracing of the 3D view is adjusted in real-time to illustrate the light path from the sources to the sensor.
7. If you want to filter the rays displayed in the 3D view:
• Click and select the Required faces (the faces that are taken into account to filter rays).
The rays impacting the required faces and reaching the sensor are displayed in the 3D view.
• Click and select the Rejected faces (the faces that you want to exclude from the light trajectory).
The rays reaching the sensor, except those impacting the rejected faces, are displayed in the 3D view.
On the first opening, Virtual Photometric Lab loads each XMP result of each sensor contained in the Light Expert
Group with the first measure area available in the Measures tool. On the next openings, the measure area selected
(in the Measures tool) upon save is displayed. The Light Expert panel is displayed.
Note: If necessary, you can hide the XMP results in the 3D view for a better display of the *.lpf ray tracing.
6. If necessary, modify the measure area, shape or position on the maps to update the ray tracing preview in the
3D view.
The ray tracing of the 3D view is adjusted in real-time to illustrate the light path from the sources to the sensors.
7. Select the rays passing:
• Inside all measurement areas: only the rays passing through all of the measurement areas you defined are
displayed in the 3D view.
• Outside at least one measurement area: rays passing outside of at least one measurement area are displayed
in the 3D view.
This means a ray is displayed if it passes outside one measurement area but through all the other
measurement areas. However, you cannot tell outside which measurement areas the ray passes (see the
sketch after this procedure).
• Click and select the Required faces (the faces that are taken into account to filter rays).
The rays impacting the required faces and reaching the sensor are displayed in the 3D view.
• Click and select the Rejected faces (the faces that you want to exclude from the light trajectory).
The rays reaching the sensor, except those impacting the rejected faces, are displayed in the 3D view.
9. In the Photometric section, if you want to display only the photometric contribution of the rays displayed in the
3D view, check Filtered Rays Only, then click Update Results to recalculate the photometric values of all XMP
maps from the Light Expert Group.
Note: Selecting a layer in one of the XMP maps applies the layer to all XMP maps. That implies that
sequences are not sorted in descending order of energy from the second sensor of the group. For more
information on how the sequences are ordered, refer to the section XMP Map of Sensors Included in
Sensors' Group.
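A minimal sketch of the two selection rules from step 7 (an illustration of the logic only, with hypothetical names, not Speos code):

```python
def ray_is_displayed(inside_each_area: list[bool], mode: str) -> bool:
    """inside_each_area[i] is True if the ray passes through measurement area i."""
    if mode == "inside all measurement areas":
        return all(inside_each_area)        # the ray must cross every area
    if mode == "outside at least one measurement area":
        return not all(inside_each_area)    # the ray misses one or more areas
    raise ValueError(f"unknown mode: {mode}")
```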
The optical system (figure 1) displays the rays passing through the area defined in the sensor 1 result (figure 2) and
the area defined in the sensor 2 result (figure 3) and the area defined in the sensor 3 result (figure 4).
Note: You can only export rays as geometry from an interactive simulation or from a light expert analysis
file when this file is active (when the rays are displayed in the 3D view).
The rays exported correspond to the rays propagated during the simulation. Therefore the length of the
rays exported depends on the optical system and the simulation parameters.
Exporting rays as geometries is useful to verify, assess and modify an optical system to optimize it.
For example, it can help you place the elements of your system ideally, or use a specific construction line as an axis,
a base to orient a light guide.
To export rays as geometry, right-click an interactive simulation or active *.lpf file and click Export Rays as Geometry.
The rays geometry is exported in the active component.
When the rays are exported as geometries, they appear in the 3D view. They are stored as construction lines in the
Curves folder of the Structure panel.
Note: This option is only available for simulations generating projected grids, that is interactive simulations
containing a camera sensor, and Static LiDAR simulations generating the Field of View result as
*.OPTProjectedGrid file.
To export projected grid as geometries, right-click the .OPTProjectedGrid result file and click Export projected
grid as geometries.
The projected grid geometry is exported in the active component.
When the grid is exported as geometries, construction lines appear in the 3D view. One line is created per element
of the grid. The lines are stored in the Structure panel, in a Curves sub-folder placed under the "projected grid"
geometry.
Related reference
Camera Projected Grid Parameters on page 364
Isolating a Simulation Result is useful when you want to save a result in a specific state.
If the result is isolated, further parameter changes and simulation runs will not overwrite the result.
To isolate a simulation result, from the Simulation panel, right-click a simulation and click Isolate.
Once isolated, the simulation is available from the Speos tree and in the isolated results folder.
12: Optical Part Design
Optical Part Design provides geometrical modeling capabilities dedicated to optical and lighting systems.
Important: All features contained in this section require the Speos Optical Part Design add-on and a Premium or
Enterprise license.
Speos Optical Part Design provides modeling capabilities dedicated to optical and lighting systems design, mainly
for the automotive industry.
A wide variety of optical parts and components such as lenses, surfaces or reflectors are available in Speos to cover
different needs and configurations.
Once modeled, the optical components can be integrated in an optical system and be tested out through simulation.
Optical Components
• The Parabolic Surface is a collimating surface, that is a surface that allows a perfectly specular, mirror-like reflection
of rays.
• The TIR (Total Internal Reflection) lens is a collimating optical component, highly efficient to capture and redirect
light emitted from a light source.
• The Light Guide is a thin pipe made of transparent material. Light Guides are highly efficient in transmitting light
and have various possible applications (interior lighting, accent lighting, etc.).
• The Optical Lens allows you to create pillow, prismatic, pyramidal or reflex reflector lenses. Optical lenses are
key components of lighting systems and are often used to design rear signal lenses.
• The Optical Surface allows you to generate rectangular, circular or faceted reflectors.
• The Projection Lens allows you to create optical lenses used for automotive projector modules.
• The Poly Ellipsoidal Reflector is a reflector that is mainly used in automotive projector modules to produce spread/
driving beams.
• The Freeform Lens allows you to create a collimating lens from any freeform surface.
• The Micro Optical Stripes helps you create a feasible light guide by defining a Tool Bit Shape used for processing
the Light Guide mold.
In Speos
You can manage all the characteristics of the optical component: its source, focal, shape, size and distribution.
Warning: Opening an Optical Part Design project in two different Speos versions may present different
results.
• In 2022 R2, Aspheric coefficients go from 2 to 31 instead of 2 to 30, and so do the values.
• In 2023 R1, Aspheric coefficients go from 2 to 30.
The issue also applies to the Aspheric coefficients of a Zernike face.
In 2022 R2
• i = 2 value = 0 (this corresponds to the first index 1)
• i = 3 value = 0.02
• i = 4 value = 0.001
• ...
• i = 31 value = 0.001 (this corresponds to the last index 30)
In 2023 R1 after migration
• i = 2 value = 0.02
• i = 3 value = 0.001
• ...
• i = 30 value = 0.001
Note: The index 1 is no longer present in the interface as it is always 0. Only editable coefficients are present.
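A minimal sketch of the index shift described above (hypothetical coefficient data, not a Speos API):

```python
# 2022 R2 stores coefficients at indices 2..31, shifted by one position.
coeffs_2022r2 = {2: 0.0, 3: 0.02, 4: 0.001, 31: 0.001}

# After migration to 2023 R1, index i takes the value stored at i + 1,
# and the always-zero first coefficient disappears from the interface.
coeffs_2023r1 = {i - 1: value for i, value in coeffs_2022r2.items() if i >= 3}

print(coeffs_2023r1)  # {2: 0.02, 3: 0.001, 30: 0.001}
```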
In Speos, a focus point (representing the light source) sends rays that are reflected on the specular surface and
collimated in the surface's optical axis direction.
Section view of the collimation carried out by a parabolic surface. 3D view of a parabolic surface.
Related tasks
Creating a Parabolic Surface on page 459
This page shows how to create a Parabolic Surface that can, later on, be used as a support to create optical lenses.
Related reference
Understanding Parabolic Surface Parameters on page 458
This page describes parameters to set when creating a Parabolic Surface.
Axis System
• The origin point represents the focus point (the point giving the position of the source).
• The Axis refers to the optical axis of the surface. This axis is used to define the direction of the surface revolution
axis.
Note: The focus does not necessarily belong to this Axis. In this case, the revolution axis of the surface
is defined by the direction of the Axis and by the Focus.
• The Orientation fixes the orientation of the surface around the optical Axis. The orientation is usually, but not
necessarily, defined on a plane normal to the Axis. When the orientation is defined on a plane that is not normal
to the Axis, the orientation is projected onto the defined plane.
Focal
The focal length represents the distance between the top and the focus of the parabolic surface.
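For reference, a minimal sketch of the standard paraboloid relation behind this parameter (an illustration, not Speos code): a focal length f gives the surface sag z = r^2 / (4f), with the focus on the optical axis at z = f.

```python
def parabolic_sag(r: float, focal: float) -> float:
    """Depth of a paraboloid z = r^2 / (4 f) measured from its apex (mm).

    Rays emitted from the focus (on the axis at z = focal) are reflected
    parallel to the optical axis, which is the collimation shown above.
    """
    return r ** 2 / (4.0 * focal)

# Example: with a 25 mm focal length, the surface is 16 mm deep at r = 40 mm.
print(parabolic_sag(40.0, 25.0))  # 16.0
```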
• Click to select the surface's Origin point (the source point).
• Click and select a line giving the direction of the optical axis.
• Click and select a line fixing the orientation of the surface around the Axis, or click and select a coordinate
system to autofill the Axis System.
Note: Orientation is not necessarily defined on a plane normal to the Axis. In this case, Orientation is
projected onto such a plane.
Related reference
Understanding Parabolic Surface Parameters on page 458
This page describes parameters to set when creating a Parabolic Surface.
Related information
Parabolic Surface Overview on page 457
This page provides an overview of the parabolic surface and its different applications and uses.
In Speos, the distribution, shape and size of the optical facets can be fully customized by the user.
Related tasks
Creating an Optical Surface on page 461
This page shows how to create an optical surface from a parabolic or freeform support.
Related information
Optical Surface Parameters on page 469
This section provides more information about the parameters to set when creating an optical surface.
3. If you select Extended, you can define the Flux of the source.
The Flux will be used to run the Photometry tool simulation that generates a photometric preview displayed in
the feature viewer, an HTML report and an XMP file of the selected element(s) of the feature.
For more information on the Photometry tool, refer to Understanding Display Properties.
4. In Support, from the Type drop-down list select which support to use to build the optical surface:
• Select Parabolic to create the parabolic support by defining its origin, axis and orientation or click and
select a coordinate system to autofill the Axis System.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated
by Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis
in the 3D view. Refer to the axis in the 3D view.
• Select Freeform to build the optical surface on an existing surface. Then, in the 3D view, click and select the surface to use as support.
5. In Target, if you want to verify that your current design passes regulation standards, select an XML Template
corresponding to the regulation standard.
The XML template will be used by the Photometry tool simulation to generate the photometric preview displayed
in the feature viewer, the HTML report and the XMP file of the selected element(s) of the feature.
Note: You can find existing XML templates of regulation standards in the Ansys Optical Library.
For more information on the Photometry tool, refer to Understanding Display Properties.
6. Define the result viewing direction and the position of the observer point:
a) If you want to define specific axes for the sensor, in the 3D view click to select a projection axis and
to select an orientation axis or click and select a coordinate system to autofill the Axis System.
b) From the Intensity result viewing direction drop-down list:
• Select From source looking at sensor to position the observer point from where light is emitted.
• Select From sensor looking at source to position the observer in the opposite of light direction.
7. In the Style tab, set the axis system of the grid and define the distribution, pattern and size of the elements to be
created on the support.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
8. In Manufacturing, activate the sewing and drafting of the elements if you want mechanical constraints to be
taken into account:
• Activate the Sewing if you want any gaps between elements to be automatically filled.
• Activate the drafting of the elements if you want mechanical constraints coming from unmolding to be taken
into account.
If activated, define a Draft length in mm. The draft length defines the sewing surface created between two
adjacent faces.
Related tasks
Managing Groups and Elements on page 464
This page shows how to create and manage groups. When designing an optical surface, you can create different
groups of elements to apply specific parameters to these groups.
Related information
Optical Surface Parameters on page 469
This section provides more information about the parameters to set when creating an optical surface.
Note: When no group is created, all the elements (facets) are stored in the Ungrouped Elements section.
2. From the Groups panel, click Add to add as many groups as needed.
Tip: For a quicker way, right-click the feature and select Add new group.
• Select Freeform to create facets that are shaped by specifying the area where the light should be sent.
• Select Sharp Cutoff if you want to create an optical surface beam that directs light underneath a defined cut-off
line.
• Select Flutes if you want to create facets by adjusting the flutes curvature.
4. In the 3D view, click and select the faces to include in the group:
• In case of a Rectangular or Freestyle Rectangular part, you can directly select a row or a column.
• In case of a Circular or a Freestyle Circular part, you can directly select a radius or a circle.
• For a free selection of faces, you can choose a SpaceClaim selection mode in the bottom right corner of the
session and use Box, Lasso or Paint.
• You can add Named Selections composed of faces to Optical Part Design groups.
Note: You can only add elements of the same type to a group. Example: a group composed of faces only,
or a group composed of Named Selections only.
Related tasks
Creating an Optical Surface on page 461
This page shows how to create an optical surface from a parabolic or freeform support.
Related information
Beams on page 479
This section gathers all beam types available when designing an optical surface.
Support on page 493
This page describes the parameters to set when working with a Parabolic or Freeform support.
Note: Managing the Optical Surface facets parameters from an Excel file is in Beta mode for the current
release.
4. In the main node definition, from the Excel file drop-down list, click Create Excel (Beta).
An Excel file is created based on the input parameters set in the Style and Ungrouped Elements definition.
The Excel file opens automatically upon creation.
5. If you have already created the Excel file (it has already opened automatically), from the Excel file drop-down
list, click Open file.
Important: For Speos to know that you modified the Excel file, you must open the file from here.
Otherwise, if you open the file outside of the interface, Speos cannot detect if you modified it.
Tip: To find the match between a cell in the Excel file and its facet in the 3D view, you can hover over
the feature. This will give you the cell coordinates.
8. In Speos, click Compute to generate the Optical Surface and take into account the changes.
The Optical Surface is generated and built in the 3D view.
Description
The Excel file is an input file that allows you to drive, for every facet of the Optical Surface individually, every
parameter that you can drive in a group. Basically, the Excel file represents every facet of the feature as if one
face = one group. It saves you from creating lots of groups in the Speos interface and allows you to quickly modify
each parameter for each facet so that you can create a smooth evolution of the parameter values along the feature.
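As an illustration of the kind of data you would enter in such a file, a minimal sketch (hypothetical values, not a Speos API) generating a smooth evolution of one parameter across an N x M facet grid:

```python
def smooth_parameter_grid(rows: int, cols: int,
                          start: float, end: float) -> list[list[float]]:
    """Linear ramp of a facet parameter along the columns of the grid,
    one value per facet, as if one face = one group."""
    assert cols >= 2, "need at least two columns for a ramp"
    return [
        [start + (end - start) * j / (cols - 1) for j in range(cols)]
        for _ in range(rows)
    ]

# Example: ramp a radius from 5 mm to 10 mm across 6 columns.
for row in smooth_parameter_grid(2, 6, 5.0, 10.0):
    print(row)
```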
Note: The Excel file replaces the management of the facets by group. Thus, you cannot create groups when
the Excel definition is activated.
Warning: If you create a new Excel file from the definition in Speos, the modifications you applied directly
in the previously used Excel file are not carried over. You will have to re-enter them manually.
Note: We recommend limiting the Excel file to a maximum size of 100x100. A bigger file affects
performance.
With an extended source, the source images are available in the feature viewer.
Related tasks
Creating an Optical Surface on page 461
This page shows how to create an optical surface from a parabolic or freeform support.
12.4.6.2. Support
The support drives the construction of the surface's elements. Elements can be built on a Parabolic or Freeform
surface.
Parabolic
With the Parabolic type, the four corners of each element belong to the parabolic surface.
To generate the support, an origin, axis and orientation need to be selected.
• The Origin point determines the absolute position of the surface. By default, the source position is used.
• The Axis determines the optical axis of the surface.
• The Orientation is defined by selecting a line fixing the orientation of the surface around the Axis.
Note: Orientation might not be defined on a plane normal to the Axis. In this case, the orientation is
automatically projected onto such a plane.
Freeform
With the Freeform type, elements are built on a user-defined freeform surface.
A surface must be selected as support.
Optical surface created on a freeform support (depicted in purple). The 4 edges of the elements belong to the freeform support.
Related tasks
Creating an Optical Surface on page 461
This page shows how to create an optical surface from a parabolic or freeform support.
12.4.6.3. Target
The target allows you to define how the results are shown in the feature viewer.
Intensity Target
The target acts more or less like a sensor. From this section, you can choose how the results are shown and what
pieces of information are displayed in the feature viewer.
Beam patterns and source images are displayed on an angular grid in the feature viewer. It is often used when the
area to light is defined angularly.
12.4.6.4. Style
This section describes all grid types available to design an optical surface.
Note: In some cases, when modifying the grid of an Optical Surface, group definitions must be updated.
12.4.6.4.1. Rectangular
The grid determines the size and distribution of the elements over the support.
Axis System
An axis system is required to define the elements' orientation and projection on the support.
This axis system is, by default, inherited from the support definition.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in the
3D view. Refer to the axis in the 3D view.
Rectangular Grid
Elements are distributed according to a rectangular pattern. The following parameters can be set to customize this
distribution:
• X start: Start value for the feature size along X Grid (mm).
• X end: End value for the feature size along X Grid (mm).
• Y start: Start value for the feature size along Y Grid (mm).
• Y end: End value for the feature size along Y Grid (mm).
• X Angle: Angle made by X axis and the horizontal edge of the elements (deg).
• Y Angle: Angle made by Y axis and the vertical edge of the elements (deg).
• X count: Number of elements along X axis.
• Y count: Number of elements along Y axis.
• X size: Size of the elements along X axis (mm).
• Y size: Size of the elements along Y axis (mm).
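As a sketch of how these values relate (an illustrative uniform layout, not the Speos algorithm), element centers along one axis can be derived from the start/end values and the element count:

```python
def element_centers(start: float, end: float, count: int) -> list[float]:
    """Centers of `count` equal-size elements spanning [start, end] (mm)."""
    size = (end - start) / count          # element size along this axis
    return [start + size * (i + 0.5) for i in range(count)]

# Example: X start = -20 mm, X end = 20 mm, X count = 4
print(element_centers(-20.0, 20.0, 4))   # [-15.0, -5.0, 5.0, 15.0]
```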
12.4.6.4.2. Circular
The grid determines the size and distribution of the elements over the support.
Axis System
An axis system is required to define the elements' orientation and projection on the support.
This axis system is, by default, inherited from the support definition.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in the
3D view. Refer to the axis in the 3D view.
Circular Grid
Note: Circular grid distribution is not compatible with Flute or Freeform beam type.
Elements are distributed according to a circular pattern. The following parameters can be set to customize this
distribution:
Shift
• None
• Radial
• Circular
Grid Definition
• Start: Interior radius of the feature (mm).
• End: Exterior radius of the feature (mm).
• Step: Radial length of the elements (mm).
• Sectors: Number of angular subdivisions of the feature.
• Angle: Angle made by the sectors (deg).
• Shift: length (radial shift type) or angle (circular shift type) driving the elements distribution (in mm or deg).
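Similarly, a minimal sketch (illustrative only, not the Speos algorithm) of deriving the rings of a circular grid from Start, End and Step:

```python
def grid_rings(start: float, end: float, step: float) -> list[tuple[float, float]]:
    """Inner/outer radius (mm) of each ring between Start and End, Step thick."""
    rings = []
    inner = start
    while inner + step <= end + 1e-9:     # tolerance for float accumulation
        rings.append((inner, inner + step))
        inner += step
    return rings

# Example: Start = 10 mm, End = 40 mm, Step = 10 mm -> three rings.
print(grid_rings(10.0, 40.0, 10.0))  # [(10.0, 20.0), (20.0, 30.0), (30.0, 40.0)]
```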
12.4.6.4.3. Stripes
The grid determines the size and distribution of the elements over the support.
Tip: Stripes grid is only compatible with the Flute beam type.
Axis System
The grid axis system is used to define the way that the style curves are projected onto the support.
The axis system origin is optional. It is, by default, inherited from the support.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in the
3D view. Refer to the axis in the 3D view.
Note: This axis system has an impact on some parameters of the flutes beams:
• Start angle/End angle.
• Flutes' concavity/convexity.
Support and Style Curves viewed from the plane normal to Projection Direction. Result of the feature when stripes are generated on the support according to the style curves definition.
To ensure a correct construction when using a curved support, make sure the support does not close in on itself
(see figure below). Otherwise, the stripes will not be projected correctly.
Definition
Style curves are selected to demarcate the stripes.
These curves can be lines, splines or sketches. The stripes are oriented along X Grid, which is given by the tangent
to the set of curves.
12.4.6.4.4. Freestyle
The grid allows you to determine the lens' distribution onto the support.
Note: Freestyle grid is not compatible with the Flute beam type, and the Freestyle circular grid is not
compatible with the Sharp Cutoff beam type.
Two sets of curves are used to delimit the elements of the lens. These curves can give either a rectangular-based or
a circular-based pattern.
Reflector with rectangular-based freestyle grid Reflector with circular-based freestyle grid
Axis System
The grid axis system is used to define the way that the sets of curves are projected onto the support.
The axis system origin is optional. It is, by default, inherited from the source.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in the
3D view. Refer to the axis in the 3D view.
Support and the two sets of curves viewed from the plane normal to Projection Direction. Result of the feature generated with the support and the two sets of curves of the picture on the left.
Definition
Two sets of curves (X curves and Y curves) are used to demarcate the elements.
These curves can be lines, splines or sketches.
Each set of curves has to follow some rules for the feature to be built properly:
• A curve of a set cannot intersect any other curve from the same set.
• A curve of a set cannot intersect another curve several times.
• All the curves of a set have to intersect all the curves of the other set.
• All the curves of a set have to be closed if one curve of the set is closed.
X curves are dedicated to vertical curves and are defined along Y grid axis.
Y curves are dedicated to horizontal curves and are defined along X grid axis.
Y curves are the highlighted curves and drive X grid. X curves drive Y grid. Curves projected onto support. Final result.
12.4.6.5. Manufacturing
The Manufacturing section allows you to consider and anticipate mechanical constraints coming from the surface's
manufacture.
Note: The sewing and drafting options are not available for the Stripes grid.
Sewing
Sometimes a gap may appear between the elements.
Activating the Sewing option automatically fills any gaps that might appear during construction.
Optical surface without sewing - Gaps remain between the elements. Optical surface with sewing - Gaps between the elements are filled.
Drafting
Lenses generated without drafting induce a manufacturing issue when they are removed from their mold.
Activating the Drafting allows you to take mechanical constraints into account and ensure accurate manufacturing.
Note: Drafting reduces the facets' size. This operation is automatically taken into account by the algorithm
generating the elements. As a consequence, the photometry is kept during the drafting operation no matter
how high the drafting value is.
Rectangular Optical lens with a drafting of 2 mm. Circular Optical lens with a drafting of 2 mm.
• When selecting Draft angle, the value to define determines the angle to create between the demolding axis and
the facet.
Note: The Draft angle is not supported when pillows/faces have edges in common connected
to vertices that are at different levels, as seen in the following example:
Related tasks
Creating an Optical Surface on page 461
This page shows how to create an optical surface from a parabolic or freeform support.
12.4.6.6. Beams
This section gathers all beam types available when designing an optical surface.
12.4.6.6.1. Radii
The Radii beam type allows you to create spherical facets that ideally diffuse light.
With the radii beam type, a definition of certain parameters is required:
• You need to define the Radius of the facet.
• You need to select an Orientation: concave or convex.
A concave surface is curved inwards.
A convex surface is curved outwards.
Note: For each element, the convexity/concavity is defined with regard to the source.
Rectangular
Start radius: Smallest value of the radius of curvature of the elements along the transverse axis.
End radius: Highest value of the radius of curvature of the elements along the transverse axis.
Radial radius: Radius of the elements along the radial axis.
Circular
12.4.6.6.2. Freeform
With the Freeform beam type, elements are shaped by specifying the rectangular area where the light should be
sent.
• Orientation: concave or convex.
A concave surface is curved inwards.
A convex surface is curved outwards.
Note: If X start > X end or Y start > Y end, a concave freeform element becomes convex and a convex
freeform element becomes concave respectively on X or Y axis.
Note: We recommend setting Custom spread to True. Otherwise, you may face incorrect pillow
construction.
• X spread and Y spread: Ratio driving the beam spread along X and Y Target axes.
Note: These parameters are only available when Custom spread is activated.
• 0% < Ratio < 100%: light distribution inside the specification rectangle tends to concentrate toward the center.
• Ratio = 100%: light distribution inside the specification rectangle tends to be uniform.
• 100% < Ratio < 200%: light distribution inside the specification rectangle tends to concentrate toward the edges.
Standard Mode
The Standard mode allows you to easily design a sharp cutoff beam.
This mode provides a standard set of parameters and the definition of the beam is partly automated.
Advanced Mode
The Advanced mode is a flexible mode that allows you to fully control the light direction according to the reflecting
area of the facet.
This mode is useful to design low beams, fog lamps or cornering lights.
Related information
Sharp Cutoff in Standard Mode on page 483
This section focuses on Sharp Cutoff definition and parameters in Standard mode.
Sharp Cutoff Advanced on page 487
This section focuses on Sharp Cutoff definition and parameters in advanced mode.
Note: You can repeat the following task for each created group. For more details about the beam parameters,
see Standard Sharp Cutoff Beam Parameters.
1. From the Groups panel, click Add to add as many groups as needed.
2. Select a Group and from the Type drop-down list, select Sharp Cutoff.
7. In Tilt, define the cut-off line's orientation by specifying the degree angle between X Target axis and the
specification line.
8. Define the light distribution under the cut-off line:
• In Y size, define the beam spread along Y Target axis.
• In Y spread, specify the ratio (in %) driving the beam spread along Y Target axis.
Note: When the ratio equals 100%, the light distribution inside the specification line tends to be
uniform. The lower the ratio, the more concentrated the beam light under the specification line.
Related tasks
Managing Groups and Elements on page 464
This page shows how to create and manage groups. When designing an optical surface, you can create different
groups of elements to apply specific parameters to these groups.
Related reference
Standard Sharp Cutoff Beam Parameters on page 485
This page describes the parameters used to create a Sharp Cutoff beam in standard mode.
Note: Convex and Concave parameters have no impact on the rays direction on the Target X axis.
Spread Start < Spread End. Spread Start > Spread End.
Note: When the ratio equals 100%, the light distribution inside the specification line tends to be uniform.
The lower the ratio, the less spread and the more concentrated the beam light under the specification
line.
Elements shape
The convexity/concavity of the elements determines the behavior of the rays on the target.
Note: The facets of the reflector will not necessarily appear concave or convex.
• Convex: a ray reflected on the top of the element will come to the top of the target when Y size is non-null. Rays
never cross each other.
• Concave: a ray reflected on the top of the element will come to the bottom of the target when Y size is non-null.
Rays cross each other.
Beam ray tracing of a sharp cutoff element and a non-null Y Size - Convex on the left, Concave on the right
Performance
A progress bar displays the progress of the feature generation algorithm. The displayed information is:
• Element Numbers: Current and total elements of the feature.
• Status: Progress on the feature generation algorithm (percent).
• Estimated time remaining: Remaining time for the feature generation algorithm (seconds).
1. From the Groups panel, click Add to add as many groups as needed.
2. Select a Group and from the Type drop-down list, select Sharp Cutoff.
3. Set Advanced mode to True.
2. In Horizontal control planes and Vertical control planes, define the X and Y position of the current plane.
b) For each plane created, define the reflected rays direction:
1. In Horizontal Spread, define the direction of the reflected rays along the horizontal axis of the target (X).
2. In Vertical Spread, define the direction of the reflected rays along the vertical axis of the target (Y).
c) In Vertical orientation, type a value to adjust the horizontal spread size of the beam on the target.
d) In Tilt, enter an angular value (in degrees) to define the cut-off line's orientation.
Related tasks
Managing Groups and Elements on page 464
This page shows how to create and manage groups. When designing an optical surface, you can create different
groups of elements to apply specific parameters to these groups.
Related reference
Advanced Sharp Cutoff Beam Parameters on page 488
This page describes the parameters used to create a Sharp Cutoff beam in advanced mode.
Control Planes
The control planes determine the intersection points made with the facet. These intersections generate points that
allow you to apply different parameters to different areas of the same facet.
Control planes. Points created on the facet by the intersection curves of the control planes and the facet.
• X corresponds to the direction of the grid orientation axis. The vertical control planes are placed along X.
• Y corresponds to the direction normal to the plane made with the grid's projection and orientation. The horizontal
control planes are placed along Y.
Position
The control planes need to be defined and positioned independently on X (horizontal axis) and Y (vertical axis).
Control plane positioning along the X axis. Control plane positioning along the Y axis.
To position the control planes, you need to define a ratio (X and Y value) between 0 and 1.
0 corresponds to the negative direction of the axis, and 1 to the positive direction of the axis.
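A minimal sketch of this ratio-to-position mapping (an illustrative linear interpolation, not the Speos implementation):

```python
def control_plane_position(ratio: float, axis_min: float, axis_max: float) -> float:
    """Map a ratio in [0, 1] to a coordinate along one facet axis:
    0 is the negative end of the axis, 1 the positive end."""
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("ratio must be between 0 and 1")
    return axis_min + ratio * (axis_max - axis_min)

# Example: ratio 0.25 on a facet spanning -10 mm..10 mm -> -5 mm.
print(control_plane_position(0.25, -10.0, 10.0))  # -5.0
```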
12.4.6.6.4. Flutes
Flutes are shaped by specifying the geometrical angles between the tangents of the support and the tangents of
the optical surface. By adjusting the flutes curvature, you can drive the angular spread of the beam.
• Start angle: Angle between the vector tangent to the support and the vector tangent to the Flute, at the start of the flute.
• End angle: Angle between the vector tangent to the support and the vector tangent to the Flute, at the end of the flute.
• Orientation: the flutes can be defined as concave or convex.
Flutes can also be made of several parts when the support has holes in it or has an irregular shape.
12.4.6.7. Support
This page describes the parameters to set when working with a Parabolic or Freeform support.
Parabolic Surface
• Focal: The focal length of the support is defined differently depending on the beam type selected.
º For Freeform and Radii beam types: the focal length corresponds to the distance between the source barycenter
and the support apex.
º For Sharp Cutoff beam: the focal length is the distance between the barycenter of the source and the corner of
the element closest to this source. If this distance is not unique, the Focal matches the distance related to the
first corner met when going through them in the following order:
Figure 69. Corners are swept in this order: Top Left (TL), Top Right (TR), Bottom Left
(BL), Bottom Right (BR)
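A minimal sketch of this rule (hypothetical corner data, not the Speos implementation), using the sweep order of Figure 69 to break ties:

```python
import math

def sharp_cutoff_focal(source: tuple[float, float, float],
                       corners: dict[str, tuple[float, float, float]]) -> float:
    """Distance from the source barycenter to the closest element corner.

    corners maps 'TL', 'TR', 'BL', 'BR' to (x, y, z) points; ties are
    resolved by taking the first corner met in TL, TR, BL, BR order."""
    best = None
    for key in ("TL", "TR", "BL", "BR"):   # sweep order of Figure 69
        d = math.dist(source, corners[key])
        if best is None or d < best:       # strict '<' keeps the first tie
            best = d
    return best
```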
Note: This parameter is only available with the Freeform beam type.
This parameter horizontally and/or vertically orientates the parabolic support to make the beam pattern best match
the specification rectangle.
The formulas used when the Boolean is true are the following:
• X center = (X start + X end) / 2
• Y center = (Y start + Y end) / 2
Freeform Surface
When working with a freeform surface as support, the only parameter to adjust is the Shift of the facets from the
surface.
• Shift corresponds to a translation of the selected group along the Grid Axis. If Shift > 0, the translation is done in
the Grid Axis direction. If Shift < 0, the translation is done in the opposite of the Grid Axis direction.
Shift = 0 for all groups. Pink group Shift = 5 mm, Red group Shift = 1 mm, Blue group Shift = -3 mm, Yellow group Shift = 2 mm.
• Only one point on support is a Boolean imposing that all the elements have only one point belonging to the
support.
Related tasks
Managing Groups and Elements on page 464
This page shows how to create and manage groups. When designing an optical surface, you can create different
groups of elements to apply specific parameters to these groups.
Note: X and Y target axes used to display values in the viewer are defined in the target.
The viewer gives information about the feature (surface) behavior and characteristics.
In the viewer, different types of information are displayed according to your configuration of the optical surface.
Several types of elements can be displayed.
Grid
A grid is displayed in the viewer and gives information about the size of the beam.
The grid is defined angularly and the step is expressed in degrees.
Source Images
Source Images is only available with the Extended source type.
The source images give information about the surface behavior inside the target. Their color depends on the color
of the group of the selected element.
U Samples indicate the number of source images along the first parametric surface direction.
V Samples indicate the number of source images along the second parametric surface direction.
The external face of the elements is discretized according to U Samples and V Samples, giving particular points.
The image of the extended source is then calculated for each of these particular points using Snell's law.
The source images are approximated because of the meshing and the extended source convex hull considered for
the calculation.
Beam Pattern
A beam pattern is depicted as a grid (a network of lines) and gives information about the beam's shape.
U Samples indicate the number of isoparametrics along the first parametric surface direction.
V Samples indicate the number of isoparametrics along the second parametric surface direction.
The external face of the element is discretized according to U Samples and V Samples, giving a particular network
of lines on this element. The reflection of this network of lines is carried out using Snell's law from the source
point.
The beam is calculated considering a punctual source. If an extended source is used, the barycenter of the extended
source is considered as the punctual source.
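For reference, the specular reflection carried out here follows the standard law of reflection; a minimal sketch (illustrative only, not Speos code) of reflecting a ray direction about a unit surface normal:

```python
def reflect(d: tuple[float, float, float],
            n: tuple[float, float, float]) -> tuple[float, ...]:
    """Reflect direction d about unit normal n: r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# Example: a ray hitting a horizontal mirror straight on bounces back.
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```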
Photometry
Description
The Photometry tool is an integrated simulation that allows you to have a quick preview and result of the selected
element(s) of the feature.
When activating the Photometry, the simulation is automatically launched using the GPU Compute functionality
and generates a photometric preview displayed in the feature viewer, an HTML report and an XMP file in a folder
named FeatureName_InteractivePhotometry located in the Speos Output Files folder.
Note: Each time you modify the selection of the element(s), the simulation is launched again, which overwrites
the previous HTML report and XMP file.
Therefore, the Photometry tool allows you to iterate directly during the design process of the Optical Part Design
feature in order to reach the required regulation standards, without having to run a Speos simulation that can take
time to generate the output results.
Photometry of a group of elements from the feature viewer. XMP result of the group of elements selected.
Simulation Parameters
The Photometry tool simulation needs some information in order to run correctly. Therefore, the simulation considers
the following parameters:
• Source Parameters
º The simulation considers the Extended source type of the feature definition with a Lambertian emission.
º The simulation considers the Flux of the feature definition (100 lm by default).
º Optical Properties of the extended source: the simulation considers by default a Surface Optical Property as
a Mirror set to 0% and a Volume Optical Property set to None.
• Geometry Parameters
The simulation considers the Optical Part Design feature geometry with the Optical Properties you applied manually.
If you have not set any Optical Properties, the simulation considers by default a Surface Optical Property as a
Mirror set to 85%.
• Sensor Parameters
The simulation considers the Intensity Target type of the feature definition.
The size of the sensor used corresponds to the defined Beam Pattern parameters of the feature viewer.
• Regulation Standards
The simulation considers the XML template selected in the Target definition.
The XML template corresponds to the standard that you want the feature to pass.
Note: You can find existing XML templates of regulation standards in the Ansys Optical Library.
Related tasks
Adjusting Display Properties on page 498
The Display Properties allows you to customize the viewer and the information displayed in it.
4. In Source Images and Beam Pattern, adjust the number of samples for U and V axes.
5. Check Photometry - BETA to run an integrated simulation that generates a photometric preview displayed in
the feature viewer, an HTML report and an XMP file of the selected element(s) of the feature.
This will help you verify if your current design passes the regulation standards that you selected in the XML
template parameter of the feature definition.
For more information, refer to Understanding Display Properties.
6. Click Show Regulation if you want to open the HTML report of the simulation.
7. To access grid parameters, right-click in the viewer and click Grid Parameters.
8. Click User and define the step you want for the gridline for U and V axes.
The display properties are set and the modifications are taken into account in the viewer.
Related concepts
Understanding Display Properties on page 495
The viewer and display properties assist the design process by helping to understand the feature behavior.
Source
All source types can be displayed:
• Point Source: point highlighted in the 3D view
• Extended Source: emitting surface highlighted in the 3D view
Example of point source interactive preview. Example of extended source interactive preview.
Grid
Only rectangular and circular grids can be previewed in the 3D view.
Example of rectangular grid interactive preview. Example of circular grid interactive preview.
Support
Only Parabolic support can be previewed in the 3D view.
Created groups are not considered in the preview.
Example of parabolic support interactive preview for a rectangular grid. Example of parabolic support interactive preview for a circular grid.
Projected Grid
All projected grid types can be previewed in the 3D view.
Created groups are not considered in the preview.
Example of rectangular projected grid interactive preview. Example of circular projected grid interactive preview.
Important: The Interactive Preview can be time-consuming, especially for the Projected grid preview in
case of a huge number of grid intersections.
3. In the Options panel, check the parameters' interactive previews you want to display in the 3D view.
• Source
• Support (only parabolic support can be previewed)
• Grid (only rectangular and circular grids can be previewed)
• Projected Grid
The parameters' previews are displayed in the 3D view and change dynamically upon feature modifications.
Optical lenses are key components of lighting systems. They allow you to create elements that transmit, reflect or
diffuse light.
They are often used to design automotive signal lighting.
The distribution, shape and size of optical lenses can be fully customized by the user.
3D view of an optical lens used as pillow lens. 3D view of an optical lens used as prismatic lens. 3D view of an optical lens used as pyramid lens. 3D view of an optical lens used as flute lens. 3D view of an optical lens used as freestyle pillow lens.
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
3. If you select Extended, you can define the Flux of the source.
The Flux will be used to run the Photometry tool simulation that generates a photometric preview displayed in
the feature viewer, an HTML report and an XMP file of the selected element(s) of the feature.
For more information on the Photometry tool, refer to Understanding Display Properties.
4. Define the Refractive index of the medium from which the light comes.
5. To define the support:
b. Define the Orientation type of the support, that is on which side of the surface the elements will be created:
• Select Outer support to consider the outer surface as the support and create the elements on the inside.
• Select Inner support to consider the inner face as the support and create the elements on the outside.
Note: The principle of inner and outer surface is determined by the source position. The source is
always facing the inner surface of the support.
c. If needed, adjust the Thickness of the support. This value is used to define an intermediary surface carrying
the elements.
d. Define the Refractive index of the lenses.
6. In Target, if you want to verify that your current design passes regulation standards, select an XML Template
corresponding to the regulation standard.
The XML template will be used by the Photometry tool simulation to generate the photometric preview displayed
in the feature viewer, the HTML report and the XMP file of the selected element(s) of the feature.
Note: You can find existing XML templates of regulation standards in the Ansys Optical Library.
For more information on the Photometry tool, refer to Understanding Display Properties.
7. From the Type drop-down list, select the type of metrics used for reviewing the beam pattern:
• If the area to light is in a plane placed at a known distance from the source, select Illuminance.
Light is focused on one point/plane. The viewer displays illuminance values.
• If the area to light is defined angularly, select Intensity.
Light is directed in a given direction. The viewer displays intensity values.
• If you want to define specific axes for the sensor, in the 3D view click to select a projection axis and to select an orientation axis, or click and select a coordinate system to autofill the Axis System.
8. In the Style tab, set the axis system of the grid and define the distribution, pattern and size of the elements to be
created on the support.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
9. In Manufacturing, activate the drafting of the elements if you want mechanical constraints coming from unmolding
to be taken into account.
If activated, define a Draft length in mm. The draft length defines the sewing surface created between two
adjacent faces.
Related tasks
Managing Groups and Elements on page 506
This page shows how to create and manage groups. When designing an optical lens, you can create different groups
of elements to apply specific parameters to these groups.
Related information
Style on page 515
This section describes all grid types available to design an optical lens.
Optical Lens Parameters on page 512
This section provides more information about the parameters to set when creating an optical lens.
Note: When no group is created, all the elements (facets) are stored in the Ungrouped Elements section.
1. Reach the feature level, and from the Groups panel, click Add to add as many groups as needed.
Tip: For a quicker way, right-click the feature and select Add new group.
or select a column.
• In case of a Circular or a Freestyle Circular part, you can directly select a radius or select a circle.
• For a free selection of faces, you can choose a SpaceClaim selection mode in the bottom right corner of the
session and use Box, Lasso or Paint.
• You can add Named Selections composed of faces to Optical Part Design groups.
Note: You can only add elements of the same type to a group. For example: a group composed of faces only,
or a group composed of Named Selections only.
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
Beams on page 525
This section gathers all beam types available when designing an optical lens.
Note: Managing the Optical Lens facets parameters from an Excel file is in Beta mode for the current release.
4. In the main node definition, from the Excel file drop-down list, click Create Excel (Beta).
An Excel file is created based on the input parameters set in the Style and Ungrouped Elements definition.
The Excel file opens automatically upon creation.
5. If you have already created the Excel file (the file has already opened automatically), from the Excel file drop-down
list, click Open file.
Important: For Speos to know that you modified the Excel file, you must open the file from here. If you
open the file outside of the interface, Speos cannot detect your modifications.
Tip: To find the match between a cell in the Excel file and its facet in the 3D view, you can hover over
the feature. This will give you the cell coordinates.
8. In Speos, click Compute to generate the Optical Lens and take into account the changes.
The Optical Lens is generated and built in the 3D view.
Description
The Excel file is an input file that allows you to drive, for each facet of the Optical Lens individually, every parameter
that you can drive in a group. Basically, the Excel file represents every facet of the feature as if one face = one group.
It saves you from creating many groups in the Speos interface and allows you to quickly modify each parameter for
each facet, so that you can create a smooth evolution of the parameter values along the feature.
Note: The Excel file replaces the management of the facets by group. Therefore, you cannot create groups when
the Excel definition is activated.
Warning: If you create a new Excel file from the definition in Speos, the modifications you applied in the
previously used Excel file are not carried over. You will have to re-enter them manually.
Note: We recommend limiting the Excel file to a maximum size of 100x100. A bigger file affects
performance.
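Since the Excel file drives parameters facet by facet, a smooth evolution of a parameter can be computed outside Speos and pasted into the file. A minimal sketch in plain Python (the one-cell-per-facet layout is an assumption for illustration, not the documented file format):

```python
def linear_ramp(n_rows, n_cols, start_value, end_value):
    """Per-facet parameter values ramping linearly along the columns.

    Returns a list of rows whose values could be pasted into the facet grid
    (hypothetical layout: one cell per facet).
    """
    span = max(n_cols - 1, 1)
    return [[start_value + (end_value - start_value) * c / span
             for c in range(n_cols)]
            for r in range(n_rows)]

# Example: ramp a radius from 2 mm to 6 mm across a 10 x 20 facet grid
grid = linear_ramp(10, 20, 2.0, 6.0)
```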
With a directional source, source images are not available and cannot be displayed in the feature viewer.
The line indicates the light direction; the light is considered as collimated by the reflectors.
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
12.5.6.2. Support
The support is a construction geometry (usually a surface) used to carry the optical lenses.
Support
The lenses are created on a support geometry. The lenses can be created on any freeform surface.
Outer/Inner Surface
The outer/inner surface parameter defines on which side of the support the elements will be created.
With Outer surface, the outer face of the surface is considered as the support and the elements are created on the
inside. The elements are facing the source.
Outer surface with a punctual source Outer surface with a directional source
With Inner Surface, the inner face of the surface is considered as the support and the elements are created on the
outside.
Note: With this mode, you cannot use the feature viewer.
Inner surface with a punctual source Inner surface with a directional source
The meaning of outer/inner is related to how the source has been defined: the inner surface is always on the
side of the source, unlike the outer surface.
If a punctual source is moved to the other side of the support or if a directional source is reversed, then the roles of
outer and inner surface are switched.
Thickness
The Thickness allows you to adjust the offset of the elements. The thickness corresponds to the distance
between the support and the "intermediary surface" of the support.
Type a value for the length used to define an intermediary surface carrying the elements.
• Radii: The four corners of each optical face of the pillows belong to the intermediary surface.
• Prism: At least one corner of the optical face of the prisms belongs to the intermediary surface.
• Pyramid: The four corners of the pyramid bases belong to the intermediary surface.
• Flute: The four corners of the optical face of the flutes belong to the intermediary surface.
Refractive Index
The Refractive Index of a material is a pure number that describes how light propagates through that medium.
It is defined as n = c/v, where c is the speed of light in vacuum and v is the phase velocity of light in the medium.
Most transparent materials have refractive indices between 1 and 2. The default refractive index used here (1.49) is
the index of plexiglass. This material is commonly used to design lenses.
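As an illustration of this definition (a minimal sketch in plain Python, not a Speos API), the phase velocity in a medium follows directly from n = c/v:

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum (m/s)

def phase_velocity(n: float) -> float:
    """Return the phase velocity of light (m/s) in a medium of refractive index n."""
    return C_VACUUM / n

# Plexiglass (PMMA), the default material here, has n = 1.49:
print(f"{phase_velocity(1.49):.3e} m/s")  # ~2.012e+08 m/s
```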
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
12.5.6.3. Target
The target defines how the results are going to be interpreted in the feature viewer.
The target acts more or less like a sensor. From this section, you can choose how the results are going to be
interpreted and which pieces of information are going to be displayed in the feature viewer.
Target Type
The target type defines what kind of values/metrics will be available in the feature viewer (Optics tab). Two target
types are available:
• Intensity: with this type, intensity values will be displayed on an angular grid in the feature viewer. It is often used
when the area to light is defined angularly.
• Illuminance: with this type, illuminance values will be displayed in the feature viewer. Illuminance target is often
used when the area to light is located at a known distance from the source.
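For background, the two metrics are linked by a general photometric relation (not a Speos-specific setting): for a point source of intensity I (cd), the illuminance E (lx) on a surface at distance d (m) with incidence angle θ is

$$E = \frac{I \cos\theta}{d^2}$$

which is why the Illuminance type requires a plane at a known distance, while the Intensity type is purely angular.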
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
12.5.6.4. Style
This section describes all grid types available to design an optical lens.
Note: In some cases, when modifying the grid of an Optical Lens, group definitions must be updated.
12.5.6.4.1. Rectangular
The grid determines the size and distribution of the lenses over the support.
By default, elements are distributed onto the support according to a rectangular pattern. This distribution can be
customized as well as the shape and size of the lens itself.
Axis System
An axis system is required to define the element's orientation and projection on the support.
This axis system is, by default, inherited from the source definition.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in
the 3D view. The axis displayed in the Definition panel may not correspond to the axis in the 3D view; refer
to the axis in the 3D view.
Rectangular Grid
Elements are distributed according to a rectangular pattern. The following parameters can be set to customize this
distribution:
• X start: Start value for the feature size along X Grid (mm).
• X end: End value for the feature size along X Grid (mm).
• Y start: Start value for the feature size along Y Grid (mm).
• Y end: End value for the feature size along Y Grid (mm).
• X Angle: Angle made by X axis and the horizontal edge of the elements (deg).
• Y Angle: Angle made by Y axis and the vertical edge of the elements (deg).
• X count: Number of elements along X axis.
• Y count: Number of elements along Y axis.
• X size: Size of the elements along X axis (mm).
• Y size: Size of the elements along Y axis (mm).
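To make the interplay of these parameters concrete, the following sketch (a hypothetical helper, not a Speos API) lays out element centers assuming the elements evenly fill the start/end extents:

```python
def rectangular_grid(x_start, x_end, x_count, y_start, y_end, y_count):
    """Yield the (x, y) centers (mm) of elements evenly filling the grid extents."""
    x_size = (x_end - x_start) / x_count  # element size along X (mm)
    y_size = (y_end - y_start) / y_count  # element size along Y (mm)
    for i in range(x_count):
        for j in range(y_count):
            yield (x_start + (i + 0.5) * x_size,
                   y_start + (j + 0.5) * y_size)

# Example: a 4 x 3 grid spanning -20..20 mm in X and -15..15 mm in Y
centers = list(rectangular_grid(-20.0, 20.0, 4, -15.0, 15.0, 3))
```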
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
12.5.6.4.2. Circular
The grid determines the size and distribution of the lenses over the support.
By default, elements are distributed onto the support according to a rectangular pattern. This distribution can be
customized as well as the shape and size of the lens itself.
Axis System
An axis system is required to define the element's orientation and projection on the support.
This axis system is, by default, inherited from the source definition.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in
the 3D view. The axis displayed in the Definition panel may not correspond to the axis in the 3D view; refer
to the axis in the 3D view.
Circular Grid
Note: Circular grid distribution is not compatible with Flute beam type.
Elements are distributed according to a circular pattern. The following parameters can be set to customize this
distribution:
Shift
• None
• Radial
• Circular
Grid Definition
• Start: Interior radius of the feature (mm).
• End: Exterior radius of the feature (mm).
• Step: Radial length of the elements (mm).
• Sectors: Number of angular subdivisions of the feature.
• Angle: Angle made by the sectors (deg).
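As an illustration (a hypothetical helper, not a Speos API), the following sketch enumerates the radial rings and angular sectors that these values describe, assuming Angle = 360°/Sectors:

```python
import math

def circular_grid(start, end, step, sectors):
    """Yield (inner_radius, outer_radius, sector_start_angle_deg) for each cell."""
    sector_angle = 360.0 / sectors             # angle made by each sector (deg)
    n_rings = math.ceil((end - start) / step)  # number of radial rings
    for ring in range(n_rings):
        r_in = start + ring * step
        r_out = min(r_in + step, end)
        for s in range(sectors):
            yield (r_in, r_out, s * sector_angle)

# Example: radii from 5 mm to 25 mm in 4 mm steps, 12 sectors of 30 deg each
cells = list(circular_grid(5.0, 25.0, 4.0, 12))
```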
Circular Edges
The Circular edges option allows you to define Freeform beam elements with circular edges. This option allows
you to create a Fresnel lens.
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
Beams on page 525
This section gathers all beam types available when designing an optical lens.
12.5.6.4.3. Stripes
The grid allows you to determine the lens' distribution onto the support.
Note: Stripes grid is only compatible with the Flute beam type.
Axis System
The grid axis system is used to define the way that the style curves are projected onto the support.
The axis system origin is optional. It is, by default, inherited from the source.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in
the 3D view. The axis displayed in the Definition panel may not correspond to the axis in the 3D view; refer
to the axis in the 3D view.
Note: This axis system has an impact on some parameters of the flutes beams:
• Start angle/End angle.
• Flutes' concavity/convexity.
Left: support and style curves viewed from the plane normal to the Projection direction (Z axis). Right: result of
the feature when stripes are generated on the support according to the style curves definition.
When using a curved support, use a support that does not close in on itself (see figure below). Otherwise, the stripes
will not be projected correctly.
Definition
Style curves are selected to demarcate the stripes.
These curves can be lines, splines or sketches. The style curves are not necessarily continuous in tangency.
Note:
• An incomplete style curve is enough to cause a null result for the whole feature.
• To avoid construction issues, always use support surfaces larger than the style curves.
• Stripes do not work with large curve length variations. To limit the size of the last stripes, we recommend
cutting the support.
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
Beams on page 525
This section gathers all beam types available when designing an optical lens.
12.5.6.4.4. Freestyle
The grid allows you to determine the lens' distribution onto the support.
Note: Freestyle grid is not compatible with the Flute beam type.
Two sets of curves are used to delimit the elements of the lens. These curves can give either a rectangular-based or
a circular-based pattern.
Lens with rectangular-based freestyle grid Lens with circular-based freestyle grid
Axis System
The grid axis system is used to define the way that the sets of curves are projected onto the support.
The axis system origin is optional. It is, by default, inherited from the source.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in
the 3D view. The axis displayed in the Definition panel may not correspond to the axis in the 3D view; refer
to the axis in the 3D view.
Left: support and the two sets of curves viewed from the plane normal to the Projection direction. Right: result
of the feature generated with the support and the two sets of curves shown on the left.
Definition
Two sets of curves (X curves and Y curves) are used to define the freestyle grid.
These curves can be lines, splines or sketches.
Each set of curves has to follow some rules for the feature to be built properly:
• A curve of a set must not intersect any other curve from the same set.
• A curve of a set must not intersect any other curve several times.
• All the curves of a set have to intersect all the curves of the other set.
• All the curves of a set have to be closed if one curve of this set is closed.
X curves are dedicated to vertical curves and are defined along the Y grid axis.
Y curves are dedicated to horizontal curves and are defined along the X grid axis.
X curves in magenta define the curves along Y grid and Y curves in purple define the curves along X Grid
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
Beams on page 525
This section gathers all beam types available when designing an optical lens.
12.5.6.4.5. Honeycomb
The grid determines the size and distribution of the lenses over the support.
By default, elements are distributed onto the support according to a rectangular pattern. This distribution can be
customized as well as the shape and size of the lens itself.
Axis System
An axis system is required to define the element's orientation and projection on the support.
This axis system is, by default, inherited from the source definition.
Note: If you manually define only one axis, Speos automatically (and arbitrarily) calculates the other axis in
the 3D view. The axis displayed in the Definition panel may not correspond to the axis in the 3D view; refer
to the axis in the 3D view.
Definition
From the definition section you can:
• Define how the elements are going to be distributed on the support by setting start and end values for X and Y
axes.
• Apply a rotation to the lenses (the X axis is taken as a reference to apply the rotation to the elements).
• Adjust the dimensions of the lenses by adjusting either their width or side length.
From the specification tree, under the feature node, more parameters are available to customize the pattern
distribution:
• X Start: Start value for the feature size along X Grid (mm).
• X End: End value for the feature size along X Grid (mm).
• Y Start: Start value for the feature size along Y Grid (mm).
• Y End: End value for the feature size along Y Grid (mm).
• X Count: Number of elements along X Grid.
• Y Count: Number of elements along Y Grid.
• Rotation: Rotation of elements along X axis (deg).
• Side Length: Size of the hexagons' side (mm).
• Width: Hexagons' width (mm).
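For reference, Side Length and Width are not independent: assuming regular hexagons with Width measured flat-to-flat, the two values are linked by

$$\text{Width} = \sqrt{3} \times \text{Side Length}$$

so setting one of them determines the other.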
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
12.5.6.5. Manufacturing
The Manufacturing or Drafting option allows you to consider and anticipate mechanical constraints coming from
the manufacturing of the lenses.
Note: This option is not available for the Stripes grid type and is compatible with a circular grid only if the
Shift type is Radial.
Lenses generated without drafting cause a manufacturing issue when they are removed from their mold.
Activating the Drafting allows you to take these mechanical constraints into account and ensure accurate manufacturing.
Note: Drafting reduces the facets' size. This operation is automatically taken into account by the algorithm
generating the elements. As a consequence, the photometry is kept during the drafting operation no matter
how high the drafting value is.
Rectangular Optical lens with a drafting of 2mm Circular Optical lens with a drafting of 2mm
• When selecting Draft angle, the value to define determines the angle to create between the demolding axis and
the facet.
Note: Draft angle is not supported if you try to apply a draft angle on 3 groups of one or more facets that
share the same vertex.
Note: Draft angle is not supported when pillows/faces have edges in common connected to vertices that
are at different levels, as seen in the following example:
Related information
Creating an Optical Lens on page 503
This page shows how to create optical lenses.
12.5.6.6. Beams
This section gathers all beam types available when designing an optical lens.
12.5.6.6.1. Radii
The Radii lens is a spherical lens used to diffuse light.
The Pillow or Radii lens is a spherical-shaped lens that transmits and spreads light. Light goes through the lens and
illuminates objects placed in front of it.
The pillow lens requires the definition of certain parameters:
• You need to define the Radius of the lens.
• You need to select an Orientation: concave or convex.
A concave lens has a surface that is curved inwards.
A convex lens has a surface that is curved outwards.
Note: For each element, the convexity/concavity is defined with regard to the source.
Radii is available for Rectangular, Circular, Freestyle and Honeycomb lens types.
Rectangular
Circular
Freestyle Rectangular: parameters are the same as the ones used for the Rectangular type.
Freestyle Circular: parameters are the same as the ones used for the Circular type.
Honeycomb
12.5.6.6.2. Prism
The Prism or prismatic lens spreads and redirects light in a specific direction.
Prismatic lenses are shaped by specifying target points or directions. Light passes through the prisms and is redirected
to illuminate the defined area/direction.
Influence of the Convex/Concave parameter - Convex elements on the left/Concave elements on the right.
The parameters to set depend on the Target type selected in the General tab.
Illuminance
• X position: Coordinate of the point targeted by the prism along the X target axis.
• Y position: Coordinate of the point targeted by the prism along the Y target axis.
• X radius: Radius of curvature applied on the prism along the X grid axis.
• Y radius: Radius of curvature applied on the prism along the Y grid axis.
Intensity
• X angle: Angle defining the direction targeted by the prism along the X target axis.
• Y angle: Angle defining the direction targeted by the prism along the Y target axis.
• X radius: Radius of curvature applied on the prism along the X grid axis.
• Y radius: Radius of curvature applied on the prism along the Y grid axis.
12.5.6.6.3. Pyramid
This lens type allows you to generate pyramid-shaped lenses.
Concave Convex
12.5.6.6.4. Freeform
The Freeform lens is shaped according to target points or directions.
Note: This beam type is not available if you defined the Inner support option.
The freeform lens is generated to illuminate a desired area. You define this area of interest and the lenses are
generated accordingly.
The elements of this lens basically look like prisms with a pillow on top.
Contrary to the prism beam type, whose curvature is defined with a constant radius, here the curvature is the result
of an optimization minimizing the beam pattern size. With this beam type, all the rays go to the targeted points or
directions, while with the basic prism beam type the match is only ensured for one point of the prism.
Intensity
• X angle: Angle defining the direction targeted by the prism along the X target axis.
• Y angle: Angle defining the direction targeted by the prism along the Y target axis.
Illuminance
• X position: Coordinate of the point targeted by the prism along the X target axis.
• Y position: Coordinate of the point targeted by the prism along the Y target axis.
12.5.6.6.5. Flute
Flute lenses are shaped by specifying the geometrical angles between the tangents of the support and of the optical
surface.
By adjusting the flutes' curvature, you can drive the angular spread of the beam.
You can drive the horizontal spread of the beam by using vertical flutes, and vice versa.
• Start angle: Angle between the vector tangent to the support and the vector tangent to the Flute.
• End angle: Angle between the vector tangent to the support and the vector tangent to the Flute.
• Orientation: the lenses can be defined as concave or convex.
Note:
º Flutes can be made of several parts when the support has holes in it or an irregular shape.
º In some cases, stripes (usually the most external stripes of a group) can present a curvature inversion.
12.5.6.6.6. Reflex Reflector
Note: This beam type is only compatible with the Honeycomb lens.
Retroreflectors are composed of small corner reflectors (truncated cubes) that redirect light back to its source.
Reflex reflector lenses are designed to reflect light in the same direction as the incident light.
Figure 71. Honeycomb lens built with reflex reflector beam type
With this beam type, no further definition is required. But you can:
• adjust the offset of the group from the intermediary surface (thanks to the Shift parameter).
• adjust the angular spacing between the input and reflected ray (thanks to the Angle parameter).
The Angle parameter allows you to control the cubes' corners flattening. This adjustment impacts the spacing/spread
of the reflected points. The 6 reflected points correspond to the 6 retro-reflections on the corners of the cube.
Note: According to the ECE R3 regulation on retro-reflecting devices, available in the Ansys Library, the
measurements are made around the direction of the source within 20 arc minutes (0.33°) and 1°30′
(1.5°).
Note: X and Y target axes used to display values in the viewer are defined in the target.
Note: The viewer does not support the Reflex Reflector beam type.
The viewer gives information about the behavior and characteristics of the feature (lens).
In the viewer, different types of information are displayed according to your configuration of the optical lens.
Several types of elements can be displayed.
Grid
A grid is displayed in the viewer and gives information about the size of the beam.
According to the definition made in the target, the grid is defined differently:
• If you selected the intensity target type, the grid is defined angularly and the step is expressed in degrees.
• If you selected the illuminance target type, the step is expressed in mm.
Source Images
Source Images is only available with the Extended source type.
The source images give information about the lens behavior inside the target. Their color depends on the color of
the group of the selected element.
U Samples indicate the number of source images along the first parametric surface direction.
V Samples indicate the number of source images along the second parametric surface direction.
The external face of the elements is discretized according to U Samples and V Samples, giving particular points.
The image of the extended source is then calculated for each of these particular points using Snell's law.
The source images are approximated because of the meshing and because the convex hull of the extended source
is considered for the calculation.
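For reference, Snell's law used in this calculation relates the incidence and refraction angles at an interface between two media of refractive indices n1 and n2:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$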
Beam Pattern
Note: Beam patterns are not displayed when using the inner surface mode with Radii elements. The property
must be checked in the feature viewer to be able to display beam patterns.
A beam pattern is depicted as a grid (a network of lines) and gives information about the beam's shape.
U Samples indicate the number of isoparametrics along the first parametric surface direction.
V Samples indicate the number of isoparametrics along the second parametric surface direction.
The external face of the element is discretized according to U Samples and V Samples, giving a particular network
of lines on this element. The reflection of this network of lines is carried out using Snell's law from the source
point.
The beam is calculated considering a punctual source. If an extended source is used, the barycenter
of the extended source is considered as the punctual source.
Photometry
Description
The Photometry tool is an integrated simulation that allows you to have a quick preview and result of the selected
element(s) of the feature.
When activating the Photometry, the simulation is automatically launched using the GPU Compute functionality
and generates a photometric preview displayed in the feature viewer, an HTML report and an XMP file in a folder
named FeatureName_InteractivePhotometry located in the Speos Output Files folder.
Note: Each time you modify the selection of the element(s), the simulation is launched again, which overwrites
the previous HTML report and XMP file.
Therefore, the Photometry tool allows you to iterate directly during the design process of the Optical Part Design
feature in order to reach the required regulation standards, without having to run a Speos simulation that can take
time to generate the output results.
Left: photometry of a group of elements from the feature viewer. Right: XMP result of the selected group of
elements.
Simulation Parameters
The Photometry tool simulation needs some information in order to run correctly. Therefore, the simulation considers
the following parameters:
• Source Parameters
º The simulation considers the Extended source type of the feature definition with a Lambertian emission
Note: The Punctual and Directional source types are not compatible.
º The simulation considers the Flux of the feature definition (100 lm by default).
º Optical Properties of the extended source: the simulation considers by default a Surface Optical Property as
a Mirror set to 0% and a Volume Optical Property set to None.
• Geometry Parameters
The simulation considers the Optical Part Design feature geometry with the Optical Properties you applied manually.
If you have not set any Optical Properties, the simulation considers by default the Refractive index set in the
feature definition.
• Sensor Parameters
The simulation considers the Intensity or Illuminance Target type of the feature definition.
The size of the sensor used corresponds to the defined Beam Pattern parameters of the feature viewer.
• Regulation Standards
The simulation considers the XML template selected in the Target definition.
The XML template corresponds to the standard that you want the feature to pass.
Note: You can find existing XML templates of regulation standards in the Ansys Optical Library.
Related tasks
Adjusting Display Properties on page 534
The Display Properties allow you to customize the viewer and the information displayed in it.
4. In Source Images and Beam Pattern, adjust the number of samples for U and V axes.
5. Check Photometry - BETA to run an integrated simulation that generates a photometric preview displayed in
the feature viewer, an HTML report and an XMP file of the selected element(s) of the feature.
This will help you verify whether your current design passes the regulation standards that you selected in the XML
template parameter of the feature definition.
For more information, refer to Understanding Display Properties.
6. Click Show Regulation if you want to open the HTML report of the simulation.
7. To access grid parameters, right-click in the viewer and click Grid Parameters.
8. Click User and define the step you want for the gridline for U and V axes.
Note: According to the target type selected, the step is expressed in degrees (intensity type) or in mm
(illuminance type).
The display properties are set and the modifications are taken into account in the viewer.
Related concepts
Understanding Display Properties on page 531
The viewer assists the design process by helping to understand the feature behavior.
Source
All source types can be displayed:
• Point Source: point highlighted in the 3D view
• Extended Source: emitting surface highlighted in the 3D view
• Directional Source: displays a pattern of axes on the lens that represents the directional source
Example of point source interactive preview | Example of extended source interactive preview | Example of
directional source interactive preview
Grid
Only rectangular and circular grids can be previewed in the 3D view.
Example of rectangular grid interactive preview Example of circular grid interactive preview
Support
The offset support is displayed.
Created groups are not considered in the preview.
Note: When the offset support computation fails, the support surface is highlighted in red in the
3D view.
Projected Grid
All projected grid types can be previewed in the 3D view.
Created groups are not considered in the preview.
Example of rectangular projected grid interactive preview Example of circular projected grid interactive preview
Important: The Interactive Preview can be time-consuming, especially for the Projected grid preview in
the case of a large number of grid intersections.
3. In the Options panel, check the parameters' interactive previews you want to display in the 3D view.
• Source
• Support
• Grid (only rectangular and circular grids can be previewed)
• Projected Grid
The parameters' previews are displayed in the 3D view and change dynamically upon feature modifications.
In Speos, the light guide can be linear or curved and the prisms' position, size and distribution can be customized.
CAUTION: The Intersect tools from the Design tab do not work with a created Light Guide. We therefore
recommend avoiding the creation of a Light Guide tangent to other bodies.
• Select Circular shape to create a light guide with a circular profile and set its profile diameter.
• Select Constant profile to create a light guide using a profile, and click to select a surface.
• Select Prism only to create a light guide without body. This option is useful when wanting to use a custom
light guide body.
Note: If the Add operation is used with this mode, the height of the prisms is defined to reach the end
of the guide curve.
4. If you selected Prism only combined with the Add or Hybrid operation, you can define an Extra body height.
Extra body height corresponds to body height added from the guide curve in the opposite direction of the
prisms.
5. Define the prisms' orientation and construction mode:
a) From the Type drop-down list, select how the prism should be oriented in relation to the light guide body:
• Select Direction if you want all prisms to have the same orientation.
• Select Normal to surface if you want prisms to potentially have different orientations.
b) In the 3D view, click and select a line to define the optical axis.
The optical axis is used to define the average orientation in which the light is going to be extracted from the
light guide.
Note: The optical axis must not be collinear with any tangent to the guide curve.
c) If you selected the Normal to surface type, click and select a face to define the orientation of the prisms.
d) From the Operation drop-down list, select how you want the prisms to be generated on the light guide:
Click and in the 3D view, select a line to define the Projection axis.
The projection plane is defined as normal to the projection axis.
7. In Start and End group boxes, define the size of the prism-free zones at the beginning and at the end of the guide
curve.
Note: The Start value is always respected; however, in some cases the End value might not be respected
because the last prism is not cut by this parameter. Prisms are created to respect the End value as much as
possible.
• Select Constant to exclude parameter variation along the guide. This mode ensures prisms parameters are
constant along the guide curve.
• Select Control points if you want to create parameter variation along the guide curve.
• Select Input file to import a *.csv file defining the prisms repartition along the light guide body.
Note: When using *.csv files, make sure the regional settings and number format of the workstation
are correctly defined. A dot "." must be defined as the decimal symbol.
Tip: From the Design panel, right-click the light guide feature and click Export as CSV file to export
all prisms parameters. The file can then be modified and used as input for light guide definition.
• Select Automatic if you want parameters to be automatically calculated according to other settings.
Note: Automatic mode is only available for Start Angle, End Angle and Length.
a. Click
Note: The variation is measured on the curve minus the Start and End.
• In case of an Input file mode, browse and load the CSV file.
Related reference
Light Guide Body Parameters on page 545
This page provides detailed information on the Light Guide body parameters.
Light Guide Prism Parameters on page 551
This page describes all the parameters to set when defining prisms.
Manufacturing Parameters on page 554
This page describes the parameters to create a light guide that can be manufactured.
1. Before setting the Top prism milling and/or Bottom prism milling parameters value, define its construction
mode:
• Select Constant to exclude parameter variation along the guide. This mode ensures milling parameters are
constant along the guide curve.
• Select Control points if you want to create parameter variation along the guide curve.
• Select Input file to import a *.csv file defining the prism repartition along the light guide body.
Note: When using *.csv files, make sure the regional settings and number format of the workstation
are correctly defined. A dot "." must be defined as the decimal symbol.
Tip: From the Design panel, right-click the light guide feature and click Export as CSV file to export
all prisms parameters. The file can then be modified and used as input for light guide definition.
a. Click
Note: The variation is measured on the curve minus the Start and End.
• In case of an Input file mode, browse and load the CSV file.
3. Set Drafting to By angle if you want to create an angle between the demolding axis and the prism to ensure a
safe removal of the guide from the mold.
Important: The Drafting may sometimes be incorrectly applied. Make sure to check the Drafting after
computing the Light Guide, and increase it if necessary.
Related reference
Light Guide Body Parameters on page 545
This page provides detailed information on the Light Guide body parameters.
Light Guide Prism Parameters on page 551
This page describes all the parameters to set when defining prisms.
Manufacturing Parameters on page 554
This page describes the parameters to create a light guide that can be manufactured.
Guide Curve
The Guide Curve represents the curve along which the profile is swept to create the Light Guide.
This curve is not necessarily continuous in tangency.
Body Type
The body type allows you to determine what to base the body construction on (specific shape, curve or diameter).
Note: Prisms are built with the Guide Curve passing through their middle. The profile you select has no relation
to the prisms' position and may be shifted.
Prisms only: The prisms are created but not the Light Guide body. This mode is useful when you want to
use a custom light guide body.
Add operation: the height of the prisms is defined to reach the guide curve.
Important: Up to 2023 R1, a 0.1mm gap is applied automatically between the prisms
and the Guide curve. From 2023 R2, the 0.1mm gap is no longer applied.
Remove operation: the height of the prisms is not reliable; it is only used for assembling.
Constant profile: The body is created using a profile defined by a planar surface and located at the start of the
guide curve. The profile is then swept along the guide curve to create the light guide body.
Circular shape: The body is created using a circular-shaped profile. The profile diameter allows you to set
the diameter of the profile.
Extra body height: Extra body height corresponds to body height added from the guide curve in the opposite
direction of the prisms.
Prism Orientation
The prism orientation type allows you to determine how the prism should be oriented in relation to the light guide
body.
Direction: All the prisms have the same orientation all along the guide curve. Light extracted through
the prisms has the same direction.
This type is particularly suited for linear light guides.
Normal to Surface: All the prisms potentially have a different orientation. Light extracted through the prisms
has a different direction all along the guide curve.
This type is suited for non-linear light guides.
The position/orientation of the prisms is driven by two elements: a surface and the optical
axis.
• The optical axis provides the average orientation in which the light is going to be extracted
from the light guide. This axis defines the orientation of the first prism and is used as input
for the other prisms.
• The selected surface determines the prisms' orientation. To avoid construction issues, the
whole guide curve needs to lie on the normal surface.
The normal surface width should equal or exceed the light guide and the built prisms.
To create a correct surface, we recommend creating a line at the start of the guide curve
defining the optical axis. Then, use the Pull command to sweep this line all along the guide
curve.
Operation
The Operation corresponds to the prisms generation on the Light Guide.
The Add, Remove and Hybrid operations allow you to add or remove prisms on the light guide body.
Add
Remove
Hybrid: The Hybrid operation is based on the prism parameters to determine whether a prism is added to
or removed from the light guide.
Distances
Mode
Curvilinear: The parameters Start, End, Step and Length (for Add/Remove operations) are curvilinear
distances based on the guide curve.
• In the Direction prism type, the distances are measured directly on the guide curve.
• In the Normal to Surface prism type, the distances are measured on an offset of the guide
curve passing through the middle of the top edge of the prism.
Projection: The parameters Start, End, Step and Length (for Add/Remove operations) are defined as
curvilinear distances based on the guide curve projected on a projection plane.
The Projection Axis corresponds to a line that defines the projection plane (the plane normal
to the projection line).
Note: With this mode, you can obtain style effects such as a constant prism length when
the Light Guide is seen in a specific direction.
Note: The Start value is always respected; however, in some cases the End value might not be respected because
the last prism is not cut by this parameter. Prisms are created to respect the End value as much as possible.
Half of the first prism is included in the Start distance. If Start is null, the first prism starts with its top at the
beginning of the guide curve.
Start / End: Prism-free zones at the beginning and at the end of the guide curve.
Step (Add/Remove Operation): Step corresponds to the spacing between the midpoints of the top edges of two
adjacent prisms.
CAUTION: Make sure to define a Step value different from the Length value.
Otherwise it may generate unwanted prism geometries.
Step (Hybrid Operation): Step corresponds to the spacing between two adjacent prisms projected on the guide
curve.
CAUTION: Make sure to define a Step value different from the Length value.
Otherwise it may generate unwanted prism geometries.
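To illustrate how Start, End and Step interact (a hypothetical helper, not a Speos API; as noted above, the End value may not be exactly respected by the real algorithm):

```python
def prism_positions(curve_length, start, end, step):
    """Curvilinear abscissas (mm) of the prism top-edge midpoints along the guide curve."""
    positions = []
    s = start  # half of the first prism is included in the Start distance
    while s <= curve_length - end + 1e-9:
        positions.append(s)
        s += step
    return positions

# Example: 200 mm guide curve, 10 mm prism-free zones, one prism every 5 mm
print(prism_positions(200.0, 10.0, 10.0, 5.0))
```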
If the Offset is too low relative to the profile, part of the prisms can be created inside the
body. As a consequence, the length of the prisms can appear not to be taken into
account.
CAUTION: Make sure to define a Length value different from the Step value.
Otherwise it may generate unwanted prism geometries.
Trimming ratio (Hybrid Operation): The trimming parameters allow you to cut off the prisms either from their
peak or from their base:
• Bottom trimming controls the prisms from their base.
• Peak trimming controls the prisms from their peak.
Prisms without trimming | Prisms trimmed from the bottom | Prisms trimmed from the peak
Offset (Add/Remove Operation): Offset corresponds to the distance between the guide curve and the middle
point of the top edge of the prism.
Offset (Hybrid Operation): Offset corresponds to the distance between the guide curve and the bottom of the
prism (start angle side).
Start angle / End angle: Start angle and End angle correspond to the angles of the prisms relative to the local
neutral fiber (the tangent line to the guide curve).
Start angle corresponds to the source-side angle of the prisms (84.3° in the image below).
End angle can be seen as the angle used to change the direction of a reflected ray to let
the ray get out of the Light Guide (10.7° in the image below).
In Add or Hybrid mode, if the start angle is set to automatic, a constant angle of 85° is
used to calculate the prisms' start angle. In Remove mode, if the end angle is set to
automatic, a constant angle of 85° is used to calculate the prisms' end angle.
Start radius corresponds to the radius of curvature applied on the source-side face of
the prisms.
End radius corresponds to the radius of curvature applied on the non-source-side face
of the prisms.
Apply a positive value for a convex curvature and a negative value for a concave curvature.
Important: The Start and End Radius is always applied on the edges of the
complete prism. In other words, if you apply a radius on prisms that are partially
added or removed according to your setting, the radius is applied on the complete
prism edge and not on the partial prism edge.
In the following example, you have a prism to remove (yellow). When you apply a radius,
the curvature part is generated from the complete prism edge (green), and not from the
partial prism edge (red):
Top/Bottom Prism Milling: A fillet is applied on the top and/or bottom edge of the prisms when the milling is
activated.
Milling is useful to take into account the size of the tool used to shape the surface during
manufacturing.
If the milling is not computed on the prisms, it might be due to an incorrect design. Verify the
construction of the light guide.
Drafting: The Drafting allows you to define an angle and a demolding axis to ensure a safe removal of
the light guide from the mold while taking into account the manufacturing constraints.
In Speos, the light source is represented by a focus point, placed inside the system.
The output beam is collimated along the optical axis direction after passing through the lens.
Section view of the collimation carried out by a TIR lens 3D view of a TIR lens
The light source defined at the focus point is generally a LED located on a printed circuit board (PCB).
Source at the focus point - Generally a LED located on a printed circuit board (PCB)
Axis System
The TIR lens is built around a central axis called the revolution or optical axis. This axis is normal to the support plane.
• Source position: the source position is set by selecting a point. This point places the light source in the scene and
will be used to build the TIR lens around it. The source set here is considered as punctual, which means that it is
a simplified source that emits light from a single point and position.
• Support plane/Support point: the support defines the TIR lens bottom face.
Dimensions
• Input radius: internal radius of the TIR Lens on the support plane.
• Depth: distance between the support plane and the first intersection with the lens along the revolution axis.
• Draft angle: angle between the internal component of the lens and the revolution axis.
• Support Thickness: The TIR Lens is considered fastened on a support (represented by the support plane). The
support thickness refers to the thickness of the ring at the bottom of the lens.
• Thickness: height of the TIR lens along the revolution axis.
• Output radius: radius of the lens' output face.
TIR Lens side view TIR Lens angled view TIR Lens bottom view
Refractive Index
The Refractive Index of a material is a pure number that describes how light propagates through that medium.
Most transparent materials have refractive indices between 1 and 2. Here the refractive index refers to the index of
the lenses' material.
Focal
The focal length represents the distance between the source and the top of the internal collimating surface.
Spread Parameters
The Spread value controls the maximum angular aperture for the center (dioptric) and the outer (TIR) faces. By
defining a Spread value higher than 0°, the TIR lens spreads light over an intensity target.
The Spread behavior influences the results depending on whether the TIR Lens is Convex or Concave.
• When the TIR Lens is concave: the TIR face spreads from max aperture to 0°, meaning rays are crossing.
• When the TIR Lens is convex: the TIR face spreads from 0° to max aperture, meaning rays are opening.
Spread Control
The Spread Control parameter manages how light is spread between 0° and Spread Max.
• When Spread Control is lower than 50, light accumulates on 0° direction.
• When Spread Control is higher than 50, light accumulates on Spread Max direction.
Spread Control = 0
Spread Control = 50
Spread Control = 60
2. From the Mode drop-down list, select which parameter should drive the lens' dimensions:
4. Click and select a plane to define the position of the lens input face (the support).
5. In Input radius, type a value to define the internal radius of the TIR Lens on the support plane.
6. In Depth, type a value to define the distance between the support plane and the first intersection with the lens
along the revolution axis.
7. In Draft Angle, type a value to define the angle between the internal component of the lens and the revolution axis.
8. In Support thickness, define the thickness of the ring at the bottom of the lens.
9. Define the Refractive index of the lens.
10. If you chose Thickness, set a value to define the lens' height, or use the 3D view manipulators.
11. If you chose Output radius, type a value to define the radius of the lens' output face.
12. Set the Focal of the lens, that is the distance between the source and the top of the internal collimating surface.
13. In Spread, type a value in [0° ; 90°[ to define the spread angle of the rays. 0° corresponds to collimated rays.
14. In Spread behavior, select the spread behavior of the rays:
• Concave: the TIR face spreads from max aperture to 0°, meaning rays are crossing.
• Convex: the TIR face spreads from 0° to max aperture, meaning rays are opening.
15. In Spread control, type a value in [0 ; 100] to control the light distribution in the target in the range [0 ; Spread
Max].
16. Press F4 to leave the feature edition mode.
The TIR lens is created and appears both in the tree and in the 3D view.
Note: The Zernike Face type is in BETA mode for the current release.
Projection lenses are designed to redirect light using the refraction that occurs through the transparent material.
They feature large apertures and short focal lengths, and allow you to convert a divergent light beam into a
collimated one.
In Speos, different types of back and front faces, as well as construction types are available to cover different needs.
As a consequence, projection lenses can be toroidal, aspheric, spherical, cylindrical, plano-concave or convex.
Note: The Zernike Face type is in BETA mode for the current release.
Construction settings
• Focal point: the focal point allows you to position the lens along the optical axis.
• Optical axis: line that corresponds to the light direction after passing through the lens.
• Back Focal Length: the back focal length corresponds to the distance between the focal point and the lens' back
face.
• Refractive index: pure number that describes how light propagates through that medium. Most transparent
materials have refractive indices between 1 and 2. Here the refractive index refers to the index of the lenses'
material.
• Orientation axis (beta): The Orientation axis is used when a Face is defined as Zernike, to consider the polar
coordinates for the Zernike coefficients.
• Thickness
º Lens Thickness: size of the central part of the lens.
º Edge Thickness: size of the lens from the back to the front face of the lens.
Face Types
Compatibility Table
The following table provides you with the compatibility between the Front and Back Face types.
Aspherical
Automatic
Zernike
Plan
Defining a plan face allows you to create a plano-concave/convex lens.
• Aperture Diameter: diameter of the Front Face or Back Face of the lens.
• Aspherics: corresponds to the 29 aspherical coefficients that you can set to adjust the lens or remove the
aberrations.
Note: The index 1 is no longer present in the interface as it is always 0. Only editable coefficients are present.
The aspherical coefficients define a polynomial contribution to the face profile, with αi being the ith aspherical value.
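The formula image is not reproduced here. For orientation, a common convention for such a polynomial contribution to the face profile (not necessarily the exact Speos convention) is

$$z(r) = \sum_{i} \alpha_i \, r^{i}$$

with the linear term α1 fixed to 0, which is why index 1 is not editable. For the Aspheric and Zernike face types, this polynomial is added on top of the base profile defined by the Curvature and Conic Constant.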
Aspheric
• Aperture Diameter: diameter of the Front Face or Back Face of the lens.
• Curvature: radius of curvature for the spherical part of the Front Face or Back Face of the lens.
• Conic Constant: conic constant of the lens.
• Aspherics: corresponds to the 29 aspherical coefficients that you can set to adjust the lens or remove the
aberrations.
Note: The index 1 is no longer present in the interface as it is always 0. Only editable coefficients are present.
Automatic
• Aperture Diameter: diameter of the Front Face or Back Face of the lens.
• Refractive Index: refractive index of the lenses' material. This is a pure number that describes how light propagates
through that medium. Most transparent materials have refractive indices between 1 and 2. The refractive index
impacts how the face of the lens is constructed as rays should be collimated.
• Fresnel mode
º Constant step: creates grooves of the same length.
Note: Only full grooves are built. If the last groove cannot be built, a flat face appears instead.
• Draft Angle: angle to respect to be able to remove the lens from the mold when the face is in Fresnel mode. The
draft angle should be between 0° and 15°.
Zernike
• Aperture Diameter: diameter of the Front Face or Back Face of the lens.
• Curvature: radius of curvature for the spherical part of the Front Face or Back Face of the lens.
• Conic Constant: conic constant of the lens.
• Aspherics: corresponds to the 8 aspherical coefficients that you can set to adjust the lens or remove the aberrations.
• Zernike coefficients: correspond to the 28 first Zernike coefficients that you can set to adjust the lens or remove
the aberrations.
The Zernike coefficients are ordered according to the Noll convention.
The angle φ is measured counterclockwise from the local +x axis.
The radial coordinate is the normalized dimensionless parameter ρ.
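For reference, the deformation added by these terms has the general form below (Noll ordering; a general definition, not necessarily the exact Speos formula):

$$\Delta z(\rho, \varphi) = \sum_{j=1}^{28} c_j \, Z_j(\rho, \varphi)$$

where c_j is the jth Zernike coefficient and Z_j the jth Zernike polynomial; for example, in the Noll ordering, $Z_4 = \sqrt{3}\,(2\rho^2 - 1)$ is the defocus term.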
Construction Type
Revolution
With the Revolution type, the Back Face and Front Face profiles are revolved around the optical axis to create a
spherical lens by default, or around a Custom Revolution axis to create a toroidal lens.
Note: For a Custom Revolution axis, the Start and End angles must be included in the range [-180 ; 180].
Extrusion
With the Extrusion type, the Back Face and Front Face profiles are extruded along the extrusion axis to create a
cylindrical lens.
Note: The Zernike Face type is in BETA mode for the current release.
2. In the 3D view, click and select a point to define the Focal point of the lens.
3. In the 3D view, click and select a line or axis to define the Optical axis of the lens.
The optical axis is used to define the light direction after passing through the lens.
4. Define the Back Focal Length (the distance between the focal point and the lens' back face) either by entering
a value in mm or using the 3D view manipulators.
5. From the Back face type/ Front face type drop-down list, select which shape the back face and front face of
the lens should have:
Note: The Back face and Front face cannot be set as Automatic at the same time. A Zernike face is only
compatible with a Plan face. For more information on the compatibility between the face types, refer to
Understanding Projection Lens Parameters.
6. From the Thickness to set drop-down list, select on which parameter you want to base the thickness of the lens:
• Select Lens thickness to define the size of the central part of the lens.
• Select Edge thickness to define the size of the lens from the back to the front face of the lens.
Note: If the Back face type or Front face type is set to Automatic or Zernike, the Construction mode is
not available.
• To create a spherical lens, from the Construction mode drop-down list, select Revolution.
The back and front face profiles revolve around the optical axis.
• To create a toroidal lens:
Once the general parameters of the lens are defined, define the characteristics of the Back Face and Front Face
according to their selected type.
Note: The Zernike Face type is in BETA mode for the current release.
1. In the Back / Front Face section, define the Aperture diameter of the lens' back / front face in mm.
2. If needed, in the Aspheric table, define the 29 aspherical values corresponding to the aspherical coefficients.
These coefficients are used to adjust the lens or remove any construction aberration.
Note: For more information on the aspherical coefficients, refer to Understanding Projection Lens
Parameters.
1. In the Back / Front Face section, define the Aperture diameter of the lens' back / front face in mm.
2. In Curvature, define the radius of curvature for the spherical part of the lens' back / front face.
3. If needed, define the Conic constant of the lens.
4. If needed, in the Aspheric table, define the 29 aspherical values corresponding to the aspherical coefficients.
These coefficients are used to adjust the lens or remove any construction aberration.
Note: For more information on the aspherical coefficients, refer to Understanding Projection Lens
Parameters.
1. In the Back / Front Face section, define the Aperture diameter of the lens' back / front face in mm.
2. Define the Refractive index of the lens' material.
The refractive index impacts how the face of the lens is constructed as rays should be collimated.
3. If you want to create a Fresnel lens, from the Fresnel mode drop-down list:
• Select Constant step to create grooves of the same length.
b. In Back/Front face draft angle, define a draft angle in the range [0° ; 15°] that takes into account the
manufacturing constraints and ensures a safe removal of the lens from the mold.
Note: The Zernike Face type is in BETA mode for the current release.
1. In the Back / Front Face section, define the Aperture diameter of the lens' back / front face in mm.
2. In Curvature, define the radius of curvature for the spherical part of the lens' back / front face.
3. If needed, define the Conic constant of the lens.
4. If needed, in the Aspheric table, define the 8 aspherical values corresponding to the aspherical coefficients.
These coefficients are used to adjust the lens or remove any construction aberration.
Note: For more information on the aspherical coefficients, refer to Understanding Projection Lens
Parameters.
5. If needed, in the Zernike coefficients table, define the 28 Zernike values corresponding to the Zernike coefficients.
These coefficients are used to adjust the lens or remove any construction aberration.
Note: For more information on the Zernike coefficients, refer to Understanding Projection Lens
Parameters.
The Zernike face has been defined. The other face is automatically set as Plan and uses the same Aperture Diameter
as the Zernike face.
In Speos, reflectors are created in several angular sections around the optical axis.
For each of these profiles (or angular sections), control planes are defined to control the variation of the defocus
from the image point.
The defocus drives the overall beam spread. A poly ellipsoid reflector without defocus produces perfectly collimated
rays with no spread, that is, the beam pattern of a spotlight.
LED reflector - no hole, source emitting only in the upper half space | Light bulb reflector - hole for the source,
source with diffuse emission in the upper and lower half spaces
Related concepts
Understanding Parameters of a Poly Ellipsoidal Reflector on page 574
This page describes parameters to set when creating a Poly Ellipsoidal Reflector.
Related tasks
Creating a Poly Ellipsoidal Reflector on page 577
This page shows how to create a Poly ellipsoidal reflector.
Figure 82. Side view of an angular section with parameters to set to create the reflector.
Axis System
• Source point: this point defines the source position. The source is a simplified source that emits light from a single
point and position. The source is used to calculate the poly ellipsoid profiles and illuminate the reflector.
• Image point: this point corresponds to the reflector's focus point, that is where the light rays converge when no
defocus is applied.
• Orientation axis: the orientation axis is the line fixing the orientation of the reflector around the optical axis. The
optical axis passes through the source point and image point. The 0 degree angular section is located in the plane
defined by the optical axis and the Orientation Axis.
Note: The orientation axis may not be defined on a plane normal to the optical axis.
Hole Diameter
Diameter of the hole at the bottom of the reflector. This hole is used to position the light source.
If you want to create a reflector without a hole (for example a reflector with a LED and not a bulb as light source),
deactivate this option to create a closed reflector.
Focal Length
The Focal Length is the distance between the source and the back of the Poly Ellipsoid Reflector.
Angular Sections
The angular sections define how the reflector is built. You can create as many sections as needed between 0° and
90°.
Figure 83. 0 and 90 degree planes with one section added at 45°
Control Planes
Creating control planes allows you to drive the overall beam spread.
Designing a poly ellipsoidal reflector without creating any control plane and defocus would create a pure ellipsoidal
reflector that would generate a spotlight beam (once passing through the projection lens).
Each control plane is identified by its Position from the back of the reflector and its Defocus value.
• The Position is expressed in mm and is the distance between the back of the reflector and the place where you
want the control plane.
• The Defocus is defined per control plane and corresponds to the distance from the image point, measured on the
image point plane. For example, a control plane with a Defocus of 2 mm makes the rays of its section converge 2 mm
away from the image point, spreading the beam.
Figure 84. Control planes defined between 0 and 50mm with defocus applied.
Related tasks
Creating a Poly Ellipsoidal Reflector on page 577
This page shows how to create a Poly ellipsoidal reflector.
Related information
Poly Ellipsoidal Reflector Overview on page 573
The Poly Ellipsoidal Reflector is a reflector that is mainly used in automotive projector modules to produce spread
driving beams.
• If you need to modify the image point (the origin point), click and select a point in the 3D view.
• If you need to modify the orientation axis (by default the Y axis), click and select a line.
3. Define the Focal length of the reflector, that is the distance between the source point and the back of the reflector.
4. If you want to create a reflector without a hole (for example a reflector with a LED and not a bulb as light source),
set Hole to False.
5. If you activated the creation of a hole, set its diameter to position the light bulb.
6. In Optics, define how you want to build the poly ellipsoidal reflector. From the Symmetry for sections drop-down
list:
• Select Symmetry about the Optical axis to define only the 0° section angle.
• Select Symmetry to 0 deg plane to mirror the definition of the angular sections on the horizontal axis.
• Select Symmetry to 90 deg plane to mirror the definition of the angular sections on the vertical axis.
• Select Symmetry to 0 deg and 90 deg planes to create a full reflector, with mirrored definition on both
horizontal and vertical axes.
7. Create as many angular sections as you need for your reflector and define their angle value.
Note: Only default angular sections (0° and 90°) are visible in the 3D view. But all sections created are
taken into account for simulation.
8. For each angular section created, you can define control planes:
a) From the design panel, select the angular section on which you want to define control planes.
Note: The highest position value defines the size of the reflector. Consequently, the highest value
must be the same for each angular section.
Related concepts
Understanding Parameters of a Poly Ellipsoidal Reflector on page 574
This page describes parameters to set when creating a Poly Ellipsoidal Reflector.
In Speos, the definition of the freeform lens is mainly automatic. The algorithm calculates and designs the back face
of the lens based on the shape of the front face to build a lens with the expected optical behavior.
• For a collimating lens, it only requires the front face of your lens. Once selected, the lens is automatically built to
meet your freeform surface profile.
• For a user target-based lens, you design your own target and control the light distribution on the target.
Related tasks
Creating a Freeform Lens on page 584
This page shows how to create a freeform lens from a freeform surface.
Creating a Freeform Lens Based on an Irradiance Target on page 585
This procedure shows how to create a freeform lens based on an irradiance target. The spread type allows you to
control the light distribution on the target you define.
Creating a Freeform Lens Based on an Intensity Target on page 587
This procedure shows how to create a freeform lens based on an intensity target. The spread type allows you to
control the light distribution on the target you define.
Maximum Threshold
The Maximum Threshold is available when you create a Freeform Lens based on an Intensity Target Type with an
Intensity File.
The Freeform Lens works only in refraction. However, the intensity distribution of the file can display light on a full
half-sphere; in that case, raking rays on the edges are not refracted and the Backface of the Freeform Lens cannot
be designed correctly. The Maximum Threshold has been introduced to overcome this issue.
The goal of the Maximum Threshold is to define a threshold above which rays are considered in the construction
of the Backface, and below which rays are not considered, thereby removing the potential raking rays that
prevent a correct design.
Maximum Threshold can be set between 1 and 100 and is considered along the optical axis.
In the schematic example below, the Maximum Threshold is set to 65% of the intensity distribution peak. This creates
a 43° angle. Then, only rays included in the solid angle created by the Maximum Threshold are used to design the
back face (green part). Rays outside the solid angle (red part), such as raking rays, are not used.
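As an illustration, the following minimal Python sketch applies the same thresholding principle to a toy intensity profile; the function, the cos⁴ profile, and the sampling are assumptions for illustration, not the Speos algorithm:

```python
import numpy as np

# Illustrative sketch only (not the Speos implementation): keep the
# intensity samples whose value exceeds the Maximum Threshold, given
# as a percentage of the distribution peak, and report the half-angle
# of the resulting solid angle used to design the Backface.
def threshold_half_angle(theta_deg, intensity, max_threshold):
    # max_threshold is the Maximum Threshold, in [1 ; 100]
    cutoff = intensity.max() * max_threshold / 100.0
    kept = intensity >= cutoff            # rays used to design the Backface
    return theta_deg[kept].max()          # half-angle of the kept cone

theta = np.linspace(0.0, 90.0, 181)       # polar angle from the optical axis
profile = np.cos(np.radians(theta)) ** 4  # toy intensity distribution
print(threshold_half_angle(theta, profile, 65.0))  # half-angle in degrees
```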
Resolution Factor
The Resolution Factor is available only in two cases:
• When you create a Freeform Lens based on an Intensity Target Type with an Intensity File Type.
• When you create a Freeform Lens based on an Irradiance Target Type with an Image.
The Backface is designed according to the Inverse Simulation principle. Rays are launched from the Target input,
then the Backface is designed according to the result expected. However the Freeform Lens computation may
sometimes lack rays to correctly design the Backface. Thus, the Resolution Factor parameter has been introduced
to overcome the lack of rays.
The goal of the Resolution Factor is to densify the number of rays launched from the target (intensity file or image)
to allow Speos to better design the Backface of the Freeform Lens.
The Resolution Factor is a multiplier of the resolution of the target.
Important: The representations are 2D, but keep in mind that the Resolution Factor multiplies the resolution
on both Theta and Phi.
Warning: The higher the Resolution Factor, the more computation time and resources the machine needs to
design the Freeform Lens.
The Resolution Factor multiplies the number of rays emitted per pixel on the vertical resolution and the horizontal
resolution of the image.
In the schematic example (figure 4), the resolution of the image is 4 * 4, meaning 16 pixels. With a Resolution Factor
= 1, one ray per pixel is emitted, resulting in 16 rays emitted.
In the schematic example (figure 5), the resolution of the image is 4 * 4, meaning 16 pixels. With a Resolution Factor
= 2, four rays per pixel are emitted, resulting in 64 rays emitted.
Warning: We recommend you to set a Resolution Factor that does not exceed 1024 * 1024 pixels emitted: Image
Resolution * Resolution Factor < 1024 * 1024.
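As an illustration of this arithmetic, here is a minimal Python sketch; the function and variable names are assumptions for illustration, not part of Speos:

```python
# Sketch of the Resolution Factor arithmetic described above.
def rays_emitted(horizontal_res, vertical_res, resolution_factor):
    # The factor multiplies the resolution on both axes, so each
    # pixel emits resolution_factor * resolution_factor rays.
    return horizontal_res * vertical_res * resolution_factor ** 2

print(rays_emitted(4, 4, 1))   # 16 rays, as in the figure 4 example
print(rays_emitted(4, 4, 2))   # 64 rays, as in the figure 5 example

# One reading of the recommendation: keep the total number of emitted
# rays below 1024 * 1024 = 1048576.
total = rays_emitted(512, 512, 2)
print(total, total < 1024 * 1024)  # 1048576 False: factor 2 is too high here
```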
• In the 3D view, click and select the lens' focal point and validate, or click and select a line to define the Optical axis.
4. In the 3D view, click to select the front face of your lens (any freeform shape).
Note: Only one surface should be selected for the freeform lens' front face. The selection of multiple
surfaces is not supported.
5. In Minimum thickness, define a thickness threshold to be respected between the lens' front and back face. The
thickness is measured along the optical axis.
Note: The thickness value should not be smaller than 2 mm. This value is a target value. This means the
algorithm tries to generate a thickness that is the closest possible to the one defined but might not reach
the exact value.
Related information
Freeform Lens Overview on page 579
The freeform lens allows you to create a collimating lens, a user irradiance target-based lens or a user intensity
target-based lens with a complex shape.
Note: The Freeform Lens based on an irradiance target definition is in BETA mode for the current release.
3. In the 3D view, click and select a point to define the Source point.
4. Define the Refractive index of the lens.
The Refractive Index of a material is a pure number that describes how light propagates through that medium.
Most transparent materials have refractive indices between 1 and 2. Here the refractive index refers to the index
of the lens' material.
5. In the 3D view, click to select the front face of your lens (any freeform shape).
Note: Only one non-trimmed surface with exactly four sides should be selected for the freeform lens'
front face. The selection of multiple surfaces is not supported.
6. In Minimum thickness, define a thickness threshold to be respected between the lens' front and back face. The
thickness is measured along the optical axis.
Note: This value is a target value. This means the algorithm tries to generate a thickness that is the
closest possible to the one defined but might not reach the exact value.
b) Click and select a line to define the horizontal axis of the sensor.
c) Click and select a line to define the vertical axis of the sensor.
9. In Type, define the uniformity of the beam pattern:
• Select Uniform for a uniform light spread on the Target, and set X half size and Y half size to define the size
of the target.
Each half size is generated from the target origin. An X half size of 100 mm means the X size of the target is 200
mm.
• Select Gaussian for a Gaussian light spread on the Target, and define:
• X half size and Y half size to define the size of the target.
Each half size is generated from the target origin. An X half size of 100 mm means the X size of the target is 200
mm.
• the FWHM for X and Y.
• Select Image to define the light spread based on an image, and define:
• X half size and Y half size to define the size of the target.
Each half size is generated from the target origin. An X half size of 100 mm means the X size of the target is 200
mm.
• the image file (*.png, *.jpg or *.bmp file).
• the Contrast ratio with an integer value greater than or equal to 2.
The Contrast corresponds to the ratio between the minimum illuminance value (associated with RGB = (0,0,0))
and the maximum illuminance value (associated with RGB = (255,255,255)) on the photometric map. For
example, a Contrast ratio of 100 means the maximum illuminance is 100 times the minimum illuminance.
• the Resolution factor with an integer value greater than or equal to 1.
The goal of the Resolution Factor is to densify the number of rays launched from the Image File target to
allow Speos to better design the Backface of the Freeform Lens.
Related information
Freeform Lens Overview on page 579
The freeform lens allows you to create a collimating lens, a user irradiance target-based lens or a user intensity
target-based lens with a complex shape.
Note: The Freeform Lens based on an intensity target is in BETA mode for the current release.
3. In the 3D view, click and select a point to define the Source point.
4. Define the Refractive index of the lens.
The Refractive Index of a material is a pure number that describes how light propagates through that medium.
Most transparent materials have refractive indices between 1 and 2. Here the refractive index refers to the index
of the lens' material.
5. In the 3D view, click to select the front face of your lens (any freeform shape).
Note: Only one non-trimmed surface with exactly four sides should be selected for the freeform lens'
front face. The selection of multiple surfaces is not supported.
6. In Minimum thickness, define a thickness threshold to be respected between the lens' front and back face. The
thickness is measured along the optical axis.
Note: This value is a target value. This means the algorithm tries to generate a thickness that is the
closest possible to the one defined but might not reach the exact value.
a) Click and select an optical axis to define the direction of the light.
• Select Gaussian to define a Gaussian intensity distribution on the target, and define:
• X half spread and Y half spread to define the size of the target.
• the FWHM for X and Y.
• Select Intensity file to define the target intensity distribution based on an intensity file, and:
Note: SAE XMP maps (non-conoscopic) are not supported as intensity files. If you want to use one, make
sure to open the SAE XMP map in the IESNA LM-63 Viewer and convert it to IES.
Related information
Freeform Lens Overview on page 579
The freeform lens allows you to create a collimating lens, a user irradiance target-based lens or a user intensity
target-based lens with a complex shape.
Note: The Micro Optical Stripes feature is in BETA mode for the current release.
Construction Type
The Construction types define how you want the stripes to be processed in the Support surface.
Constant Ratio
The relative position of a stripe along the second curve is equal to the relative position of the stripe along the Guide
Curve according to the length of the curves.
Same Pitch
The distance (pitch) between two stripes on the Second Curve is equal to the distance between these two stripes
on the Guide Curve: d(Guide Curve) = d(Second Curve).
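As an illustration of the two Construction types, here is a minimal Python sketch of how stripe start positions on the Second Curve can follow from the positions on the Guide Curve; names and values are illustrative assumptions, not the Speos algorithm:

```python
# Illustrative sketch (not the Speos algorithm) of the two
# Construction types described above.
def second_curve_positions(guide_positions, guide_length, second_length,
                           construction_type):
    if construction_type == "Constant ratio":
        # same relative position along each curve
        return [p / guide_length * second_length for p in guide_positions]
    if construction_type == "Same pitch":
        # same absolute curvilinear distance (pitch) along each curve
        return list(guide_positions)
    raise ValueError(construction_type)

guide = [0.0, 10.0, 20.0, 30.0]  # stripe positions along the Guide Curve (mm)
print(second_curve_positions(guide, 30.0, 60.0, "Constant ratio"))  # [0.0, 20.0, 40.0, 60.0]
print(second_curve_positions(guide, 30.0, 60.0, "Same pitch"))      # [0.0, 10.0, 20.0, 30.0]
```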
Projection Type
The Projection Type is the projection of the stripes on the support surface according to the Normal to the support
surface or the Optical axis.
Figure 96. Projection of the stripes according to the normals to the support surface
Support Side
The Support side determines on which side of the Light Guide the support surface is placed.
Outer Surface
The support surface is placed on the exit face of the light guide.
Inner Surface
The support surface is placed on the emissive face.
Note: Variation of stripes parameters is considered to be linear between two Control Points.
Preview Stripes
As the Micro Optical Stripes generation can take time, you can preview the stripes to see how the stripes would be
generated along the guide.
When you preview the stripes, only the top curves used by the tool to create the stripes are displayed. To see the
different construction curves used by the tool to create the stripes, refer to Extract Tooling Paths below.
Extract Tooling Paths is useful for design and manufacturing teams, as it helps verify in detail how the machine tool
moves in 3D and check manufacturing feasibility.
Extract Tooling Path generates 3 curves per stripe (top curve, start curve, end curve).
The Guide curve and the Second curve define the starting and ending points of the stripes processed in the
Support surface according to the Construction type and the Projection type.
• Constant ratio: the relative position of a stripe along the Second curve is equal to the relative position of the
stripe along the Guide curve according to the length of the curves.
• Same pitch: the distance (pitch) between two stripes on the Second curve is equal to the distance between
these two stripes on the Guide curve.
• Normal to guide curve: stripes are created from the Guide curve to the Second curve according to the normal
direction to the Guide curve at their specific starting point.
4. In the 3D view, click and select a line to define the Optical axis.
The Optical axis is used to define the orientation in which the light is going to be extracted from the light guide.
Note: The Optical axis must not be collinear with any tangent to the support surface.
5. In the 3D view, click and select the Support surface on which the stripes are projected according to the
Projection type.
6. Select the Projection type to define how the stripes are projected on the support surface:
• Select Along Optical Axis to project the stripes according to the Optical axis.
• Select Normal To Support Surface to project the stripes according to the normal of the support surface.
7. In the Support side drop-down list, select Inner support or Outer support.
The Support side defines on which side of the light guide the support surface is placed.
a) In Side angle, define the trajectory of the tangent to the curvature in [0 ; 90].
b) In Radius, define the radius of curvature of the Tool Bit in [0.005 ; infinity[.
The Tool Bit Shape defines the profile of the tool used to process the mold.
a) In Position, define the position of the control point along the light guide in percentage of the light guide.
b) In Depth, define how deep the Tool Bit goes to create the mold in [0 ; infinity[.
The depth represents the depth of the stripes on the light guide.
c) In Pitch, define the distance between two consecutive stripes in [0 ; infinity[.
d) In Top length, define the offset between the Start and End Tool Bit shapes in [0 ; infinity[.
e) In Bit shape start angle and Bit shape end angle, define the angles of attack of the Tool Bit to process the
mold in [0 ; 90].
12. In the Manufacturing section:
13. If you want to preview how the stripes will be generated according to the parameters you set, in the
Design tree, right-click the Micro Optical Stripes and click Preview Stripes.
The preview shows the top curve of each stripe.
Note: Stripes on edges of the guide that cannot be entirely generated are not built at all.
The Micro Optical Stripes are created and appear both in the tree and in the 3D view.
Extract Tooling Path creates 3 curves per stripe (top curve, start curve, end curve) that are generated as geometry
in the Structure tree in a dedicated component named
"Micro_Optical_Stripes_feature_name"_Tooling_path_"Generation_Date"
Note: The Post Processing feature is in BETA mode for the current release and is deprecated. Instead, we
recommend you use the Block Recording Tool, which provides the same capabilities.
Note: The Post Processing is in BETA mode for the current release.
2. In the 3D view, click Associate Feature and select an OPD feature in the Design tree.
The Block Recording panel opens. It records all design operations that you will perform on the associated OPD
feature until you stop recording.
4. Apply any SpaceClaim design operation on the OPD geometry.
Figure: OPD geometry before the design operation / OPD geometry after the design operation.
5. Once you applied the design operations, open the Post Processing feature and click to stop recording.
Design operations are saved and applied to the OPD geometry.
You can now use the post processed OPD geometry in simulation.
Note: The Post Processing is in BETA mode for the current release.
Note: If you activated the Automatic Compute on the Post Processing feature, the design operations
are automatically reapplied on the OPD geometry without a manual compute.
The design operations are applied on the modified OPD geometry without having to reapply them manually. Links
to Speos features are kept.
You can recompute the simulation using the OPD geometry to regenerate and update the results.
12.12.2.2. Modifying a Post Processed Optical Part Design Geometry with Post
Processing
The following procedure helps you modify a post processed Optical Part Design geometry with the Post Processing
feature.
Note: The Post Processing is in BETA mode for the current release.
To modify a post processed Optical Part Design (OPD) geometry with the Post
Processing:
1. Edit the Post Processing feature related to the OPD feature.
4. Once you applied the design operations, open the Post Processing feature and click to stop recording.
Note: If you activated the Automatic Compute on the Post Processing feature, the design operations
are automatically reapplied on the OPD geometry without a manual compute.
The design operations are applied on the modified OPD geometry without having to reapply them manually. Links
to Speos features are kept.
You can recompute the simulation using the OPD geometry to regenerate and update the results.
13: Head Up Display
This section describes the Head-Up Display design and analysis features.
Important: This feature requires Speos HUD Design and Analysis add-on and Premium or Enterprise license.
A Head-Up Display (HUD) is a system that displays an image in the field of view of the driver. Usually this image is
used to give information to the driver (the car speed for example).
From design to experience, Speos HUD feature encompasses several sub features dedicated to every step of the
HUD system analysis:
• Design: HUD Optical Design (HOD) allows you to model the HUD system in the CAD interface.
• Analyze: HUD Optical Analysis (HOA) allows you to analyze the HUD system by providing a detailed report and
metrics describing the overall optical system's performance.
• Experience: Visualize and experiment your system to anticipate errors and control the quality of the HUD system.
Use Speos features to visualize the virtual image as perceived by the driver, appraise polarization effects, observe
stray light or simulate PGUs using TFT or DLP technologies.
HOD takes advantage of the standard capabilities of the CAD platform such as direct modeling and feature associativity,
enabling rapid propagation of design changes and quick investigation of ‘what if’ scenarios.
Thanks to a dedicated user interface, HOD is not restricted to optical specialists and enables automotive engineers
to perform feasibility studies at concept phase thanks to iteration loops with packaging, human factors and glazing
departments. HOD also supports the engineering phase for optical design refinement by experts.
13.2. Design
HUD Optical Design (HOD) is a tool to create the optical system for a HUD system. The optical shapes of mirrors and
combiner are optimized to provide the best virtual image optical quality.
1. The General settings allow you to define the axis system used by the HUD system and the degree of polynomial
equation used to design the mirrors.
2. The Eyebox is a uniform grid representing the driver's eye position. A multieyebox mode allows you to create
several eyeboxes and simulate the perception of the system by drivers of different heights.
3. The Target Image represents the image the HUD system needs to produce. You need to define its position and
size. The Virtual Image is the image the HUD system has produced.
4. The Windshield corresponds to the inner surface of the CAD geometry that must be selected to be considered
by the HUD system.
5. The Projector comprises mirrors and the PGU. Mirrors are numbered following the light propagation (so they
are numbered from the Picture Generation Unit (PGU) to the Eyebox).
6. The Picture Generation Unit (PGU) is a module that emits the light perceived by the driver after reflection on
the mirrors and the windshield.
13.2.2.1. General
The General settings allow you to define the axis system used by the HUD system.
You can define a custom axis system for your HUD Optical Design (HOD) system. If no axis system is set, the following
default axis system is used:
• Vehicle Direction = -X
• Top Direction = +Z
13.2.2.2. Eyebox
The Eyebox is a uniform grid representing the driver's eye position. A multieyebox mode allows you to create several
eyeboxes and simulate the perception of the system by drivers of different heights.
You can create various drivers' heights by specifying several eyeboxes, each one corresponding to a driver's height,
and you can manage their importance in the system by adjusting their Weight.
Orientation
The Orientation corresponds to the vertical direction of the Eyebox.
• Normal to Optical Axis sets the vertical direction of the Eyebox as normal to the optical axis defined thanks to
the Target Image section.
• Normal to Vehicle Axis sets the vertical direction of the Eyebox as normal to the vehicle axis defined in the General
section.
Position Direction
The Position Direction corresponds to the direction used to apply the offsets.
• Normal to Central Eyebox Optical Axis sets the position direction as normal to the nominal driver optical axis
defined thanks to the Target Image section.
• Normal to Vehicle Axis sets the position direction as normal to the vehicle axis defined in the General section.
• The Virtual Image Distance corresponds to the distance between the center of the nominal Eyebox and the center
of the Target Image.
• The Look Over Angle corresponds to the horizontal angle between the vehicle axis and the optical axis.
• The Look Down Angle corresponds to the vertical angle between the vehicle axis and the optical axis.
• The Mode defines how the Target Image is defined, either according to the Size (millimeters) or according to the
Field Of View (degrees).
13.2.2.4. Projector
The Projector corresponds to the mirrors and the PGU. Mirrors are numbered following the light propagation (so
they are numbered from the Picture Generation Unit (PGU) to the Eyebox).
Projector Parameters
Mirror Type
• Freeform: mirror type created for the optimization.
• Fold: mirror type used to reduce the volume of the projector.
Distance
The distance applied to a mirror or PGU corresponds to the distance separating it from the previous elements.
Tip: In case of a multieyebox analysis, only one of the mirrors can rotate.
13.2.2.5. Manufacturing
The Manufacturing section allows you to define the degree of polynomial equation used to design the mirrors.
The degrees influence the shape that the mirror takes. At the end of each computation, a surface export.txt file is
generated.
The file contains the coefficients of the extended polynomial equation of the freeform mirror(s) shape created by
HUD Optical Design. The file can then be shared with the manufacturers to produce the mirrors.
Note: The manufacturing degree can be set between 1 and 20. The default value is 6. The HOD system
computation time depends on the order set here and might take a while, especially when setting high
manufacturing degrees.
Note: As the file exports the equation's coefficient, the exported surfaces are unbounded.
Note: The parameters have initial values that tend to evolve and be modified after the HUD
optimization/computation.
Before the optimization, mirrors are considered as planar on an infinite plane. The infinite plane is orthogonal to
the bisector of the optical axes defined for the two mirrors. These planar mirrors are based on the projection on the
windshield (Windshield impact zone) of the corner rays of the Target Image / Eyebox optical volume. Planar
mirrors have a ratio equal to 1 and all rays intersect the mirrors.
During the optimization, the mirror shapes are modified and can be warped, so some rays may no longer intersect
the mirrors. In this context, the Mirror size ratio helps you anticipate a potential set of rays that do not intersect
the mirrors because of the optimization of the warped mirror shapes.
Thus, in the Speos interface, the default value of the Mirror size ratio is set to 1.3 to anticipate these non-intersecting
rays in most cases. However, in some cases the Mirror size ratio can be increased to extend the mirrors if rays still do
not intersect them.
Note: If the mirrors are too large, some parts of the surface will not be used by the HUD system but will still
be optimized.
PGU Usage
PGU usage adjusts the ratio between the warping and the Picture Generation Unit to optimize the surface used by
the warping.
By default, the PGU usage is set to 0.95, which means 95% of the PGU is used for the image warping.
Note: PGU usage does not impact mono freeform design. It only impacts multi freeform design.
Stopping Criterion
The Stopping criterion defines a threshold value representing the degree of precision to be reached for the
optimization to end.
The optimization works on a principle of cycles. At each cycle, the result is optimized and the optical quality increases.
After a certain number of cycles, the system becomes more and more defined and the improvement gained per
cycle is greatly reduced. When the improvement achieved between two cycles falls below the criterion, the
optimization stops. The value is expressed in percentage and is set to 0.005 by default.
This criterion is useful to balance computation time and result quality according to your needs at different stages of the
development process of a HUD system. In the early stages of the design process, you can set this criterion relatively
high (like 8%) to save time during optimization. On the contrary, in the final stages of the design process, you can
set this criterion low (< 0.005) so that the system is computed with precision.
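As an illustration of this cycle principle, here is a minimal Python sketch of such a stopping rule; it is illustrative only, not the HOD optimizer:

```python
# Illustrative sketch of the cycle-based stopping rule described above:
# the optimization stops when the relative improvement between two
# cycles falls below the Stopping criterion.
def optimize(run_cycle, initial_quality, stopping_criterion=0.005):
    quality = initial_quality
    while True:
        new_quality = run_cycle(quality)
        improvement = (new_quality - quality) / quality * 100.0  # in %
        quality = new_quality
        if improvement < stopping_criterion:
            return quality

# Toy quality curve converging towards 100 (purely illustrative).
print(optimize(lambda q: q + (100.0 - q) * 0.5, initial_quality=10.0))
```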
Curvature Balance
Curvature Balance pilots the curvature of the first freeform mirror to get the best image quality. If the curvature
balance is left unedited (= 0), it is automatically calculated by the algorithm based on the PGU usage.
In this figure, the 3 states of the Mirror B (B, B', B'') are considered to be at the same position.
The percentage represents the percentage of the distance between Freeform Mirror A and the PGU: for the Mirror B,
d(Mirror A/PGU) = d(Mirror A/PGU image B), while for the Mirror B', 90% * d(Mirror A/PGU) = d(Mirror A/PGU image B').
8. If you want to create a multi-driver configuration and create several eyeboxes to analyze the system, activate
the Multieyebox mode in the Eyebox section.
10. Set the Look Over Angle (horizontal angle) between the vehicle axis and the optical axis.
Note: To define the optical axis, HUD Optical Design applies first the Look Over Angle, then the Look
Down Angle.
11. Set the Look Down Angle (vertical angle) between the vehicle axis and the optical axis.
12. In Mode specify the mode used to define the Target Image:
• Select Size and define the Horizontal size and Vertical size of the Target Image in millimeters.
• Select Field Of View and define the Horizontal Field of view and Vertical field of view of the Target Image
in degrees.
13. In the Windshield section, click and select the inner surface of the windshield in the 3D view.
Note: Faceted geometries (such as imported *.stl files or other similar file formats) can be used as input
geometry.
14. In the Projectors table, click to create new mirrors. For each element of the table:
a) In the Projector type column, select Freeform mirror to create a mirror used for the optimization, or Fold
mirror to create a mirror used to reduce the volume of the projector.
b) In the Distance column, define the distance separating the mirror from the previous element.
c) In Horizontal angle and Vertical angle, define the angles corresponding to the orientation of the element
in the global axis system.
Note: The order of the elements in the table considers the optical path from the windshield towards the
Picture Generation Unit (PGU). The bottom line of the table is always the PGU.
When designing the system, a preview indicates whether the construction is correct. A green display means the
elements can be computed without error. A red display means one or more elements cannot be computed.
A green display does not guarantee relevant results. It only means that the system is considered as correctly defined.
For more information on the Projector, refer to Projector.
15. In the PGU section, define the characteristics of the PGU:
16. In the Manufacturing section, define the X degree and Y degree of polynomial equation used to design the
mirrors.
17. If needed, define the Advanced parameters:
• the Mirror Size Ratio to optimize the mirror's size according to how the HUD system has been designed.
• the PGU Usage to adjust the ratio between the warping and the Picture Generation Unit to optimize the surface
used by the warping.
• the Stopping Criterion to define a threshold value representing the degree of precision to be reached for the
optimization to end.
• the Curvature Balance to pilot the curvature of the first freeform mirror to get the best image quality.
For more information on the Advanced parameters, refer to Advanced Parameters.
18. If you want, click Precompute Head Up Display to help determine the best optical path to choose. The
optical path (position and orientation of mirrors) is optimized to reach the best optical quality for the virtual
image while using the maximum surface of the PGU.
Note: Precompute Head Up Display only supports one Freeform mirror. For multi-freeform mirrors,
use Compute.
By default, the base (conic) term = 0. It is reflected in the text file as:
• Radius = 0
• Conic = 0
In the polynomial term, m and n run from 0 to the degree defined by the user. It is reflected in the text file as:
• Coefficient Xm Yn = Cm,n
• Normalization = 1
Note: The Normalization factor is used in the Polynomial term in order to make the Polynomial term
dimensionless.
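Based on the file content described above, one plausible reading of the exported mirror surface equation is the following; this is an assumption, not the documented Speos formula (with Radius = 0 and Conic = 0, only the polynomial term remains):

$$ z(x, y) = \sum_{m=0}^{\text{degree}} \sum_{n=0}^{\text{degree}} C_{m,n} \left(\frac{x}{N}\right)^{m} \left(\frac{y}{N}\right)^{n}, \qquad N = \text{Normalization} = 1 $$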
13.3. Analysis
This section introduces the HUD Optical Analysis (HOA) feature that allows you to perform a complete analysis of a
HUD system with one or several configurations.
13.3.1.1.1. General
The general settings allow you to specify the axis system used by HOA and the calculation type to run according to the
3D visualization you want to display.
If no axis system is set, a default axis system is used: Vehicle direction = -X Axis and Top direction = +Z Axis.
• Click and select the line that defines the vehicle direction.
• Click and select the line that defines the top direction.
2. In Properties, define what parts of the results you want to visualize after computation:
• Activate Visualization of Optical Volume if you want the optical volume to be displayed in the 3D view at the
end of the computation. This option is useful to check if something (for example, a dashboard that is too high)
obstructs the driver's field of view of the Target Image.
• Activate Visualization per Eyebox Sample if you want to access the Eyebox sample option after computation.
Note: To access the Eyebox sample option, activate the Visualization per Eyebox Sample and run
the HOA computation.
• Set the Eyebox sample option to True to visualize the Target Image result from different positions on the
Eyebox according to the human vision.
• From the Vision mode drop-down list, select which view to display.
• Set the horizontal and vertical sampling of the eyebox. The position (1,1) corresponds to the bottom left
sample of the Eyebox.
3. Activate the Zoom factor option to zoom the size of the Best Focus Spot, Tangential Spot, Sagittal Spot, Static
Distortion and Pixel Image in the 3D view.
13.3.1.1.2. Eyebox
The Eyebox section allows you to configure the eyebox(es) used by HOA.
1. In the 3D view, click and select a point to place the eyebox center.
2. In Orientation, define the vertical direction of the Eyebox:
• Select Normal to optical axis to set the vertical direction of the Eyebox as normal to the optical axis defined
in the Target Image section.
• Select Normal to vehicle axis to set the vertical direction of the Eyebox as normal to the vehicle axis defined
in the General section.
3. In Sampling mode, select which sampling mode to use for the eyebox:
• Select Adaptive if you want to define the sampling of the eyebox with a file that describes the coordinates of
each sample.
• Select Uniform if you want the sampling of the Eyebox to be a uniform grid. With this mode, three grids that
can share samples represent the Eyebox: one binocular grid representing the position of the eyes and two
monocular grids, each representing the position of one eye. The binocular Eyebox is the union of the two
monocular Eyeboxes.
• In Interpupillary distance, specify the distance between the eyes.
4. In Eye pupil diameter, define the diameter of the pupil of the driver's eyes.
5. In Eye pupil sampling, define the number of samples on the pupil of the driver's eyes (samplings are on the
circle of the eye).
6. In Size, define the sampling and size of the eyebox:
• If you selected the Adaptive sampling mode, browse and load a file.
The file contains the coordinates of each sample in millimeters, where Center is the origin point, Vehicle direction
* Top direction is the X axis, and Orientation is the Y axis.
Note: The file must end with an empty line. Download an example file.
• If you selected the Uniform sampling mode, define the sampling and size of the eyebox manually:
• Define the binocular horizontal size and monocular horizontal sampling of the eyebox.
• Define the vertical size and sampling of the eyebox.
Note: Number of shared samples gives the number of sampling shared between the monocular
eyeboxes. The binocular and monocular Eyeboxes share the vertical parameters.
7. If you want to analyze your system according to different eyeboxes, activate the Multieyebox mode:
The default eyebox, nominal Eyebox, is the Central Eyebox with a 0mm-offset. Each added Eyebox is created by
moving in translation from the nominal Eyebox to a distance corresponding to the Offset.
• From the Position Direction drop-down list, specify the direction used to apply the offsets:
• Select Normal to Central Eyebox Optical Axis, to set the position direction as normal to the nominal driver
optical axis defined in the Target Image section.
• Select Normal to Vehicle Axis to set the position direction as normal to the Vehicle Axis defined in the
General section.
1. In Virtual image distance, specify the distance between the center of the nominal Eyebox and the center of the
Target Image.
2. Set the Look over angle (the horizontal angle) between the vehicle axis and the optical axis.
3. Set the Look down angle (the vertical angle) between the vehicle axis and the optical axis.
Note: To define the optical axis, HOA applies first the "Look Over Angle", then the "Look Down Angle".
4. From the Mode drop-down list, specify the mode you want to use to define the target image:
• Select Size to define the size of the target image in millimeters.
• Select Field of view to define the field of view of the target image in degrees.
13.3.1.1.4. Windshield
The Windshield section allows you to specify the windshield used by HOA.
Note: Faceted geometries (such as imported *.stl files or other similar file formats) can be used as input
geometry.
2. In the 3D view, click and select the outer surface of the windshield. This surface is optional.
Note: This parameter must be set when using the Ghost or the Overview tests.
4. If you want to select a Cover lens, click and select a surface in the 3D view.
The Cover lens (optional) is a geometry that works like a diaphragm to limit the light beam. It is located between
the Windshield and the first mirror.
All rays passing through the Cover Lens are considered in the results. The rays that do not pass through the Cover
Lens are displayed in dotted lines in the 3D view and are not considered in the results.
13.3.1.1.5. Mirrors
The Mirrors section allows you to specify the mirrors used by HOA.
1. In the 3D view, click and select the mirrors to consider in the projector.
The selection appears in the List of mirrors. The order is defined from the Windshield towards the PGU.
2. In Multieyebox mirror, activate the multieyebox for the selected mirror. You can activate the multieyebox for
only one mirror.
Note: To use this option, activate the multieyebox mode from the Eyebox section and create several
eyeboxes.
• In Eyeboxes, you can apply a rotation to a mirror according to the Tilt rotation axis for each eyebox defined.
• In the 3D view, click and select an axis to define the rotation axis.
• In Tilt Angle, define the rotation angle to apply to the selected eyebox.
13.3.1.1.6. PGU
The PGU section allows you to specify the characteristics of the PGU used by HOA.
º In the 3D view, click and select the point to define the center of the PGU.
º Click and select the line to define the horizontal direction of the PGU.
º Click and select the line to define the vertical direction of the PGU.
Note: If you manually define only one axis, the other axis is automatically (and randomly) calculated by
Speos in the 3D view. However, the other axis in the Definition panel may not correspond to the axis in
the 3D view. Refer to the axis in the 3D view.
• In Horizontal sampling and Vertical sampling, define the number of horizontal and vertical samples to use.
• From the Characteristics drop-down list, select which PGU characteristics are used by HOA:
º Select a predefined PGU amongst the available models.
The name corresponds to the characteristics of the PGU. Example: 1.8" = size (in inches), 2:1 = horizontal to
vertical size ratio of the screen, 480x240 = PGU pixel resolution.
º Select User Defined to manually define the PGU and its Dimensions:
13.3.1.1.7. Warping
Warping is a post processing step used to correct the optical aberrations of the system to provide the optimal target
image for the driver.
Working Principle
The Picture Generation Unit (PGU) displays a deformed image to the driver. This deformation comes from the
propagation of the image through the HUD system.
To solve this issue and let the driver see a rectangular image, the image displayed by the PGU is not rectangular: it is
"pre-deformed" (bent beforehand).
This "pre-deformation" is called Warping.
This also corrects poor image orientation as shown in the following table.
Without Warping
With Warping
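As an illustration of this principle, the following minimal Python sketch pre-deforms an image by sampling it through an assumed distortion mapping, using a nearest-neighbor lookup comparable to the Nearest algorithm option described later. Everything here is an illustrative assumption, not the HOA warping implementation:

```python
import numpy as np

# Illustrative sketch of the pre-deformation principle. `distortion(u, v)`
# is an assumed callable that returns, for normalized PGU coordinates,
# the normalized target-image point the HUD system maps them to;
# sampling it fills each PGU pixel so the propagated image looks
# rectangular to the driver.
def build_warped_image(target, distortion, pgu_shape):
    height, width = pgu_shape
    warped = np.zeros((height, width), dtype=target.dtype)
    for v in range(height):
        for u in range(width):
            x, y = distortion(u / (width - 1), v / (height - 1))
            # nearest-neighbor lookup, as in the "Nearest" algorithm option
            ty = int(np.clip(round(y * (target.shape[0] - 1)), 0, target.shape[0] - 1))
            tx = int(np.clip(round(x * (target.shape[1] - 1)), 0, target.shape[1] - 1))
            warped[v, u] = target[ty, tx]
    return warped

# Toy usage: a slight horizontal stretch as the "distortion".
image = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(build_warped_image(image, lambda u, v: (min(u * 1.2, 1.0), v), (8, 8)))
```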
Warping Parameters
Tilt
The Tilt allows you to generate a Warping file containing the samples coordinates of each driver warping. Three Tilt
modes are available:
• Predefined Mirror Tilt: generates the Warping file containing the Warping of each driver defined in the
Configurations of the Eyebox section.
• Step Mirror Tilt: generates the Warping file containing the Warping of each driver defined thanks to an angular
range from the initial position of the rotating mirror activated to calculate each Eyebox position.
• Free Mirror Tilt: generates the Warping file containing the Warping of each driver defined thanks to an offset
range from the center of the Central Eyebox to calculate each Mirror rotation.
Warping Algorithms
Ansys provides you with the following Warping Algorithms:
Bottom Left sample of the Target Image ... Bottom Right sample of the Target Image
... ... ...
Top Left sample of the Target Image ... Top Right sample of the Target Image
Tip: The origin of the pixel coordinate is defined by the PGU axis system.
Download example files.
X, M and N define the dimensions of the Warping file, where:
• M is the horizontal resolution of the Warpings:
º defined by the Horizontal Sampling in the PGU tab when using the Predefined Mirror Tilt mode, or
º defined by the Horizontal Sampling in the Mirrors tab when using the Step or Free Mirror Tilt mode.
• N is the vertical resolution of the Warpings:
º defined by the Vertical Sampling in the PGU tab when using the Predefined Mirror Tilt mode, or
º defined by the Vertical Sampling in the Mirrors tab when using the Step or Free Mirror Tilt mode.
To configure a warping:
1. In the Generation section, select the Warping mode:
• Disable if you do not want HOA to compute any warping. In this case, HOA analyzes the system with a rectangular
grid on the PGU defined in the PGU tab.
Note: Do not use the Disable mode when using the Magnification or the Overview tests.
Note: For more information on the Warping file, refer to Warping File.
• Build if you want HOA to compute the warping without exporting it, and define the Horizontal sampling and
Vertical sampling of the warping used by HOA and sent to the plugins.
• Build & Export if you want HOA to compute the warping and export it into a warping file, and define the
Horizontal sampling and Vertical sampling of the warping used by HOA and the warping file.
2. If you set the Multieyebox to True in the Eyebox definition, in the Tilt section select the Tilt mode:
• Predefined Mirror Tilt to generate a Warping file containing the Warping of each driver defined in the
Configurations of the Eyebox section.
• Step Mirror Tilt to generate a Warping file containing the Warping of each driver defined by rotation of the
mirror.
a. Define the angular range [Shorter Driver Tilt ; Taller Driver Tilt] of the rotating mirror to calculate each
Eyebox position.
b. Define the Tilt Sampling.
Note: The rotating mirror is the mirror for which the multiconfiguration is activated in the Mirrors tab.
The 0° corresponds to the initial position of the mirror. The rotation axis used is the Tilt Rotation Axis
defined in the Mirrors tab.
• Free Mirror Tilt to generate a Warping file containing the Warping of each driver defined by their Eyebox
position.
a. Define the offset range [Shorter Driver Tilt ; Taller Driver Tilt] of the Eyebox positions to calculate each
Mirror rotation.
b. Define the Tilt Sampling.
Note: The 0mm corresponds to the center of the Central Eyebox. The Translation axis used is the
Position Direction axis defined in the Eyebox.
a) Select Nearest for a low quality warping, or Bilinear for a high quality warping.
b) Select the Algorithm file.
Note: The Algorithm file is an input image file to be pre-distorted by a provided warping algorithm
or your own created warping algorithm thanks to the Image Warping APIs.
Note: HOA generates one image per warping configuration in the tree.
13.3.1.1.8. Report
The Report section allows you to select the tests you want to compute into the report.
The tests come from the plugins (your own and/or the standard).
For a description of the different available standard tests, see Plugin.
For a description of the API for your own plugin, see Application Programming Interface.
In the Available Tests, select the tests to compute in the report and add them into the Tests To Run.
Note: If some tests are not available, check the plugins you use.
2. Click Export.
3. Browse and select a path for your *.speos folder.
4. Name the file and click Save.
The HOA simulation has been exported in the folder of the same name as the *.speos file along with the input files.
The *.speos file corresponds to the HUD model and contains all inputs from the Axis System, Eyebox, Target Image,
Windshield, Mirrors, PGU, Warping.
Now you can import the *.speos file in a dedicated external software to visualize and experiment your HUD system
to anticipate errors and control its quality.
In detail:
For each sample EBij of the binocular eyebox, the Virtual Image Distance gives the distance from the sample EBij of
the binocular eyebox to the center of the virtual image seen from the sample EBij of the binocular eyebox.
The Virtual Image Distance is expressed in millimeter.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Virtual Image Distance computed for each
sample EBij of the binocular eyebox.
In detail:
For each sample EBij of the binocular eyebox, the Look Over Angle gives the angle, expressed in degrees, between
the Vehicle Direction (you specified in the General tab) and the projection of the optical axis on the horizontal plane.
Optical axis: For each sample EBij of the binocular eyebox, the optical axis is the line between the sample EBij of
the binocular eyebox and the center of the virtual image seen from the sample EBij of the binocular eyebox.
Horizontal plane: The horizontal plane is the XY plane of the direct axis system defined by -X = Vehicle Direction and
Z = Top Direction. Vehicle Direction and Top Direction are the vectors you specified in the General tab.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Look Over Angle computed for each sample
EBij of the binocular eyebox.
Note: The Minimum (°), Maximum (°), Look Over Angle (°), Look Down Angle (°) values are absolute
values.
In detail:
For each sample EBij of the binocular eyebox, the Look Down Angle gives the angle, expressed in degrees, between
the optical axis and the projection of the optical axis on the horizontal plane.
Optical axis : For each sample EBij of the binocular eyebox, the optical axis is the line between the sample EBij of
the binocular eyebox and the center of the virtual image seen from the sample EBij of the binocular eyebox.
Horizontal plane: The horizontal plane is the XY plane of the direct axis system defined by -X = Vehicle Direction and
Z = Top Direction. Vehicle Direction and Top Direction are the vectors you specified in the General tab.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Look Down Angle computed for each sample
EBij of the binocular eyebox.
In detail:
For each sample EBij of the binocular eyebox, the Horizontal Field Of View gives the angle, expressed in degrees,
between the left border line and the right border line.
Left border line : For each sample EBij of the binocular eyebox, the left border line is defined as the line between
the sample EBij of the binocular eyebox and the middle left sample of the virtual image seen from the sample EBij
of the binocular eyebox.
Right border line : Similarly, for each sample EBij of the binocular eyebox, the right border line is defined as the line
between the sample EBij of the binocular eyebox and the middle right sample of the virtual image seen from the
sample EBij of the binocular eyebox.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Horizontal Field Of View computed for each
sample EBij of the binocular eyebox.
In detail:
For each sample EBij of the binocular eyebox, the "Vertical Field Of View" gives the angle, expressed in degrees,
between the bottom border line and the top border line.
Bottom border line : For each sample EBij of the binocular eyebox, the bottom border line is defined as the line
between the sample EBij of the binocular eyebox and the bottom middle sample of the virtual image seen from the
sample EBij of the binocular eyebox.
Top border line : Similarly, for each sample EBij of the binocular eyebox, the top border line is defined as the line
between the sample EBij of the binocular eyebox and the top middle sample of the virtual image seen from the
sample EBij of the binocular eyebox.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Vertical Field Of View computed for each
sample EBij of the binocular eyebox.
13.3.1.3.6. Ghost
The Ghost test gives the distance between the virtual image and the ghost image.
In detail:
For each sample EBij of the binocular eyebox, the "Ghost" gives the maximum, over the sample VIij of the virtual
image seen from the sample EBij of the binocular eyebox, of the distance from the sample VIij of the virtual image
seen from the sample EBij of the binocular eyebox to the sample VIij of the ghost image seen from the sample EBij
of the binocular eyebox.
Note: The "Ghost" is expressed in arcminute in the report. In Workbench interface, the "Ghost" unit is not
displayed as Workbench is not able to retrieve the HOA plugin's units. However, usually the units between
the report and Workbench match.
The result of the computation is displayed as a table that contains the "Ghost" computed for each sample EBij of
the binocular eyebox.
Note: The Outer Surface parameter of the Windshield tab must be set when using the Ghost test.
In detail:
For each sample VIij of the virtual image, the Static Distortion gives the maximum, over the sample EBij of the
binocular eyebox, of the distance from the sample VIij of the virtual image seen from the sample EBij of the binocular
eyebox to the sample VIij of the target image. The Static Distortion is expressed in arcminute.
The result of the computation is displayed as a table that contains the "Static Distortion" computed for each sample
VIij of the virtual image.
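Restated compactly (notation ours): with VIij|EBij the sample VIij of the virtual image seen from the sample EBij of the binocular eyebox, and TIij the corresponding sample of the target image,

$$ \text{Static Distortion}(VI_{ij}) = \max_{EB_{ij}} \; d\left( VI_{ij}\big|_{EB_{ij}},\; TI_{ij} \right) \quad [\text{arcmin}] $$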
In detail:
For each sample VIij of the virtual image, the "Dynamic Distortion" gives the maximum, over the sample EBij of the
binocular eyebox, of the distance from the sample VIij of the virtual image seen from the center of the binocular
eyebox to the sample VIij of the virtual image seen from the sample EBij of the binocular eyebox. The "Dynamic
Distortion" is expressed in arcminute.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the "Dynamic Distortion" computed for each
sample VIij of the virtual image.
In detail:
EBcentereye is the reference used to calculate, for each sample EBij, the distance "d" between VIEBij and VIEBcentereye.
The Focus Variation then corresponds to the average of all these distances between VIEBij and VIEBcentereye.
The "Focus Variation" is expressed in millimeters.
The result of the computation is displayed as a table that contains the "Focus Variation" computed for each sample
VIij of the virtual image.
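In formula form, one reading of the description above (notation ours, with N the number of eyebox samples and VI_EB the virtual image sample seen from the eyebox sample EB):

$$ \text{Focus Variation} = \frac{1}{N} \sum_{EB_{ij}} d\left( VI_{EB_{ij}},\; VI_{EB_{\text{center eye}}} \right) \quad [\text{mm}] $$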
In detail:
For each sample VIij of the virtual image, the "Field Curvature" gives the mean of the Field Curvature for VIij over the
EBij samples of the binocular eyebox, expressed in millimeter.
Field Curvature for VIij: For each sample EBij of the binocular eyebox, the field curvature for VIij is the distance from
the sample VIij of the virtual image seen from the sample EBij of the binocular eyebox to the normal projection of
this sample on the plane normal to the optical axis at the center of the virtual image seen from the sample EBij of
the binocular eyebox.
Optical axis: For each sample EBij of the binocular eyebox, the optical axis is the line between the sample EBij of
the binocular eyebox and the center of the virtual image seen from the sample EBij of the binocular eyebox.
The result of the computation is displayed as a table that contains the "Field Curvature" computed for each sample
VIij of the virtual image.
In detail:
For each sample VIij of the virtual image, the Spot Size gives the mean, over the sample EBij of the binocular eyebox,
of the diameter of the spot on the sample VIij of the virtual image seen by the sample EBij of the binocular eyebox.
The result of the computation is displayed as a table that contains the Spot Size computed for each sample VIij of
the virtual image.
Pixel Size (mm) = (Pixel Top Length + Pixel Right Length + Pixel Left Length + Pixel Bottom Length) / 4
Pixel Size (angle) = 2 * atan(Pixel Size (mm) / (2 * Distance))
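For example, with illustrative values of a 0.5 mm mean pixel edge length seen at a Distance of 2000 mm: Pixel Size = 2 * atan(0.5 / (2 * 2000)) ≈ 0.00025 rad ≈ 0.86 arcmin.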
In detail:
For each sample VIij of the virtual image, the Pixel Size gives the mean in arcmin, over the sample EBij of the binocular
eyebox, of the size of the pixel on the sample VIij of the virtual image seen by the sample EBij of the binocular eyebox.
The result of the computation is displayed as a table that contains the Pixel Size computed for each sample VIij of
the virtual image.
The test also displays a sum up, for each configuration, of the Average, the Standard deviation, the Minimum and
the Maximum of the value contained in the table.
13.3.1.3.13. Astigmatism
The Astigmatism test gives the astigmatism of your system as defined on the picture.
In detail:
For each sample VIij of the virtual image, the "Astigmatism" gives the mean of the astigmatism for VIij over the EBij
samples of the binocular eyebox, expressed in dioptre.
Astigmatism for VIij: For each sample EBij of the binocular eyebox, the astigmatism for VIij is equal to Abs(1/Tangential
Image Distance for VIij - 1/Sagittal Image Distance for VIij).
Tangential Image Distance for VIij : For each sample EBij of the binocular eyebox, the tangential image distance
for VIij is the distance from the sample EBij of the binocular eyebox to the sample VIij of the tangential image seen
from the sample EBij of the binocular eyebox. This distance is expressed in meter.
Sagittal Image Distance for VIij : For each sample EBij of the binocular eyebox, the sagittal image distance for VIij
is the distance from the sample EBij of the binocular eyebox to the sample VIij of the sagittal image seen from the
sample EBij of the binocular eyebox. This distance is expressed in meter.
Note: In the plugin, the dynamic distortion of the sagittal and tangential images is computed in three
dimensions.
The result of the computation is displayed as a table that contains the "Astigmatism" computed for each sample
VIij of the virtual image.
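For example, with an illustrative Tangential Image Distance of 2.0 m and Sagittal Image Distance of 2.5 m for a given sample, the astigmatism is Abs(1/2.0 - 1/2.5) = 0.1 dioptre.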
13.3.1.3.14. Convergence
The Convergence test gives the horizontal position difference between a point of the virtual image seen by the left eye
and the same point seen by the right eye.
Convergence = β + α
In detail:
For each sample EBij of the monocular eyebox, the "Convergence" gives, in milliradian, the maximum, over the
sample VIij of the virtual image, of the sum of the delta left and the delta right.
Delta right: Similar to delta left but for the right eye.
Delta left: For each sample EBij of the left monocular eyebox, the delta left is the angle between the left optical axis
and left horizontal virtual image axis.
left optical axis: For each sample EBij of the left monocular eyebox, the left optical axis is the axis between the
sample VIij of the target image and the sample EBij of the left monocular eyebox.
left horizontal virtual image axis: For each sample EBij of the left monocular eyebox, the left horizontal virtual
image axis is the axis between the projection of the sample VIij of the virtual image (seen from the sample EBij of the
left monocular eyebox) onto the target image plane, then projected on the horizontal axis of the target image, and the
sample EBij of the left monocular eyebox.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the "Convergence" computed for each sample
EBij of the monocular eyebox.
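A minimal sketch of the computation described above (names are illustrative): for one eyebox sample, the convergence is the maximum, over the virtual image samples, of the sum of the left and right deltas, in milliradians:
#include <algorithm>
#include <vector>
// deltaLeft[k] and deltaRight[k]: left/right angles (mrad) for the k-th sample VIij.
double ComputeConvergenceMrad(const std::vector<double>& deltaLeft, const std::vector<double>& deltaRight)
{
    double convergence = 0.0;
    for (size_t k = 0; k < deltaLeft.size(); ++k)
        convergence = std::max(convergence, deltaLeft[k] + deltaRight[k]);
    return convergence;
}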
13.3.1.3.15. Dipvergence
The Dipvergence test gives the vertical position difference between a point of the virtual image seen by the left eye
and the same point seen by the right eye.
Dipvergence = β + α
In detail:
For each sample EBij of the monocular eyebox, the Dipvergence gives, in milliradian, the maximum, over the sample
VIij of the virtual image, of the sum of the delta left and the delta right.
Delta right: Similar to delta left but for the right eye.
Delta left: For each sample EBij of the left monocular eyebox, the delta left is the angle between the left optical axis
and left vertical virtual image axis.
left optical axis: For each sample EBij of the left monocular eyebox, the left optical axis is the axis between the
sample VIij of the target image and the sample EBij of the left monocular eyebox.
left vertical virtual image axis: For each sample EBij of the left monocular eyebox, the left vertical virtual image
axis is the axis between the projection of the sample VIij of the virtual image (seen from the sample EBij of the left
monocular eyebox) onto the target image plane, then projected on the vertical axis of the target image, and the
sample EBij of the left monocular eyebox.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Dipvergence computed for each sample EBij
of the monocular eyebox.
13.3.1.3.16. Rotation
The Rotation test gives the angle between the horizontal line of the virtual image and the horizontal line of the target
image.
In detail:
For each sample EBij of the binocular eyebox, the "Rotation" gives the angle, expressed in degrees, between the
horizontal line of the virtual image and the horizontal line of the target image.
Horizontal line of the virtual image: For each sample EBij of the binocular eyebox, the horizontal line of the virtual
image is the result of a linear regression over the samples of the horizontal center line of the virtual image seen from
the sample EBij and projected, with a normal projection, on the target image.
Horizontal line of the target image: The horizontal line of the target image is the line between the center of the
target image and the middle right sample of the target image.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the "Rotation" computed for each sample EBij
of the binocular eyebox.
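A minimal sketch of the rotation measurement (illustrative, not the Speos implementation): fit a least-squares line y = a*x + b through the projected center-line samples and return the angle of that line with the horizontal axis:
#include <cmath>
#include <vector>
// x[i], y[i]: coordinates of the projected center-line samples on the target image.
double ComputeRotationDeg(const std::vector<double>& x, const std::vector<double>& y)
{
    const size_t n = x.size();
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (size_t i = 0; i < n; ++i)
    {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx); // least-squares slope
    return std::atan(slope) * 180.0 / 3.14159265358979323846;      // angle in degrees
}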
13.3.1.3.17. Horizontal Trapezoid
In detail:
For each sample EBij of the binocular eyebox, the Horizontal Trapezoid gives the ratio, left border size / right border
size.
Left border size: For each sample EBij of the binocular eyebox, the left border size is the distance, in millimeter,
from the top left sample of the virtual image seen from the sample EBij of the binocular eyebox and projected on
the target image to the bottom left sample of the virtual image seen from the sample EBij of the binocular eyebox
and projected on the target image.
Right border size: For each sample EBij of the binocular eyebox, the right border size is the distance, in millimeter,
from the top right sample of the virtual image seen from the sample EBij of the binocular eyebox and projected on
the target image to the bottom right sample of the virtual image seen from the sample EBij of the binocular eyebox
and projected on the target image.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Horizontal Trapezoid computed for each
sample EBij of the binocular eyebox.
13.3.1.3.18. Vertical Trapezoid
In detail:
For each sample EBij of the binocular eyebox, the Vertical Trapezoid gives the ratio, top border size / bottom border
size.
Top border size: For each sample EBij of the binocular eyebox, the top border size is the distance, in millimeter,
from the top left sample of the virtual image seen from the sample EBij of the binocular eyebox and projected on
the target image to the top right sample of the virtual image seen from the sample EBij of the binocular eyebox and
projected on the target image.
Bottom border size: For each sample EBij of the binocular eyebox, the bottom border size is the distance, in millimeter,
from the bottom left sample of the virtual image seen from the sample EBij of the binocular eyebox and projected
on the target image to the bottom right sample of the virtual image seen from the sample EBij of the binocular
eyebox and projected on the target image.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Vertical Trapezoid computed for each sample
EBij of the binocular eyebox.
13.3.1.3.19. Horizontal Magnification
In detail:
For each sample EBij of the binocular eyebox, the Horizontal Magnification gives the ratio, virtual image horizontal
size / warping horizontal size.
Virtual image horizontal size: For each sample EBij of the binocular eyebox, the virtual image horizontal size is the
distance, in millimeter, from the middle left sample of the virtual image seen from the sample EBij of the binocular
eyebox and projected on the target image to the middle right sample of the virtual image seen from the sample EBij
of the binocular eyebox and projected on the target image.
Warping horizontal size: The warping horizontal size is the distance, in millimeter, from the middle left sample of
the warping to the middle right sample of the warping.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Horizontal Magnification computed for each
sample EBij of the binocular eyebox.
13.3.1.3.20. Vertical Magnification
In detail:
For each sample EBij of the binocular eyebox, the Vertical Magnification gives the ratio, virtual image vertical size /
warping vertical size.
Virtual image vertical size: For each sample EBij of the binocular eyebox, the virtual image vertical size is the
distance, in millimeter, from the top middle sample of the virtual image seen from the sample EBij of the binocular
eyebox and projected on the target image to the bottom middle sample of the virtual image seen from the sample
EBij of the binocular eyebox and projected on the target image.
Warping vertical size: The warping vertical size is the distance, in millimeter, from the top middle sample of the
warping to the bottom middle sample of the warping.
Note: In the plugin, the dynamic distortion (the virtual image seen from the sample EBij) is computed in
three dimensions.
The result of the computation is displayed as a table that contains the Vertical Magnification computed for each
sample EBij of the binocular eyebox.
13.3.1.3.21. Overview
The Overview test gives an overview of all the tests of the OPTIS plugin.
In detail:
For each test of the plugin, the Overview gives the average value of the test.
The Ghost and the Magnification tests require some settings to be set. Refer to these chapters for more details.
13.3.1.3.22. Detailed Overview
In detail:
For each test of the plugin, the Detailed Overview gives the average, the standard deviation, the minimum and the
maximum value of the test.
Important: All plugin *.dll files are loaded when starting a Speos session. That means the files are locked
and no action can be performed on them while Speos is open.
Description
This plugin contains the default metrics delivered by Ansys with the HUD Optical Analysis (HOA) product.
It provides methods to qualify HUD’s virtual image metrics:
• Virtual Image Distance
• Look Over Angle
• Look Down Angle
• Horizontal Field Of View
• Vertical Field Of View
• Ghost
• Static Distortion
• Dynamic Distortion
• Focus Variation
• Field Curvature
• Spot Size
• Pixel Size
• Astigmatism
• Convergence
• Dipvergence
• Rotation
• Horizontal Trapezoid
• Vertical Trapezoid
• Horizontal Magnification
• Vertical Magnification
• PGU Usage
• Overview
• Detailed Overview
PGU Usage
• Horizontal Definition
• Vertical Definition
• Report
Description
This plugin is used to minimize the distance between the virtual image and the target image, and to optimize the field
curvature of the virtual image by adjusting the PGU's location in a HUD system with the HUD Optical Analysis (HOA).
It provides methods to qualify HUD’s virtual image metrics:
• Field Curvature
• Virtual Image Distance from Target Image
You can find a use case of the Projector Image here.
Wedge Angle
Wedge Angle optimizes the windshield wedge angle to minimize the ghost.
Description
This plugin is used to optimize the wedge angle of the windshield to minimize the ghost image in a HUD system with
the HUD Optical Analysis (HOA) product.
It provides methods to qualify the HUD's virtual image Ghost metric.
You can find a use case of the Projector Image here. The use case optimizes the windshield wedge angle to minimize
the ghost.
Ghost Image
When the image hits the inner surface of the windshield, it is split into two images:
• The main image that is reflected by the inner surface of the windshield
• The ghost image that goes through the inner surface of the windshield and is reflected by the outer surface of the
windshield
As a result, the driver sees two images that give an impression of blur.
Wedge Angle
Image produced with a windshield without wedge angle
Image produced with a windshield with a wedge angle
Note: The "Ghost" is expressed in arcminute by default in the report. In Speos and Workbench, the "Ghost"
is always expressed in degrees.
13.3.2. Results
13.3.2.1. Eyebox
The Eyebox result displays the eyebox you specified in the Eyebox tab.
If you specified a Uniform eyebox, it displays a rectangle with the size and the sampling of your binocular eyebox.
If you specified an Adaptive eyebox, it displays the sample described in the .OPTEyeBox file you specified.
Description
The location of a focus point on HOA relies on a set of rays conjugating a point of the PGU and a set of support rays
passing through the contour of the eye pupil (let's call it a research cone). The HOA best focus location is then
defined as the location along the research cone where the section of the cone is the smallest and most circular.
From that definition, the geometry of the research cone, which depends on the imaging mirror and windshield, can
lead to three different results.
Note: Cases 2 and 3 may lead to the following warning message: "Image point not found".
Conclusion
In case 1, the virtual point is correctly displayed at the end of the HOA simulation, while in cases 2 and 3 no best
focus is displayed since no result, or an inconsistent result, is found with respect to the HOA best focus search criterion.
If you encounter case 2 or 3, perform a luminance analysis of your system to see what human eyes would see. Then,
modify your HUD system with HOD and analyze it with HOA, iterating until you get a correct system.
13.3.2.10. Astigmatism
The Astigmatism result displays the difference between the tangential surface and the sagittal surface.
13.3.2.17. PGU
The Picture Generation Unit (PGU) of your system is also displayed.
You can hide/show the PGU using the Speos Navigator.
You can define the size, the position and the orientation of the PGU from the PGU settings.
13.3.2.18. Warping
The warping is also displayed.
You can hide/show the warping using the Speos Navigator.
The warping is a grid on your PGU representing how the image must be deformed before being displayed by the
PGU.
You can define the sampling of the grid that represents the warping from the PGU settings.
From the specification tree, double-click the .speos360 file to open it using the Virtual Reality Lab.
Speos360 result
13.3.3.1.1. GetPluginType
Description
GetPluginType specifies the type of your plugin. For a HOA Plugin, the type is "HOA".
Syntax
int GetPluginType(wchar_t*& owszType);
Parameters
Output
owszType: type of plugin (HOA for a HOA Plugin).
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int GetPluginType(wchar_t*& owszType)
{
owszType = L"HOA";
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.2. GetPluginGUID
Description
GetPluginGUID returns a valid and unique "Globally Unique IDentifier" (GUID).
Each plugin must have a different GUID.
Note: In Microsoft Visual Studio, you can use "Tools / Create GUID" to create a valid and unique GUID.
Syntax
int GetPluginGUID(wchar_t*& owszGUID);
Parameters
Output
owszGUID: the valid and unique GUID.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int GetPluginGUID(wchar_t*& owszGUID)
{
owszGUID = L"{EA9CC8A1-DD98-4AFE-8739-8448EFB51C24}";
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.3. GetPluginDescription
Description
GetPluginDescription returns information about the plugin. This description appears in the Graphical User Interface
(GUI) and in the report.
Syntax
int GetPluginDescription(wchar_t*& owszDescription);
Parameters
Output
owszDescription: Description of the plugin.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int GetPluginDescription(wchar_t*& owszDescription)
{
owszDescription = L"OPTIS - HOA Plugin";
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.4. GetErrorDescription
Description
GetErrorDescription returns the description of an error identified by its identification number.
Syntax
int GetErrorDescription(const int inError, const wchar_t*& owszDescription);
Parameters
Input
inError: the identification number of the error returned in the description. The identification number is different
from 0 and is returned when an error occurs in a function.
Output
owszDescription: the description of the error identified by the identification number inError.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_UNKNOWN_ERROR 1
#define OPT_PLUGIN_ERROR_NO_TEST 2
#define OPT_PLUGIN_ERROR_UNKNOWN_TEST 3
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
#define OPT_PLUGIN_ERROR_UNKNOWN_SECTION 5
std::map<int, std::wstring> gmErrorDescription;
gmErrorDescription[OPT_PLUGIN_ERROR_UNKNOWN_ERROR] = L"PLUGIN ERROR : Unknown error";
gmErrorDescription[OPT_PLUGIN_ERROR_NO_TEST] = L"PLUGIN ERROR : No test defined";
gmErrorDescription[OPT_PLUGIN_ERROR_UNKNOWN_TEST] = L"PLUGIN ERROR : Test index out of bounds";
gmErrorDescription[OPT_PLUGIN_ERROR_INVALID_DATA] = L"PLUGIN ERROR : Empty or null data";
gmErrorDescription[OPT_PLUGIN_ERROR_UNKNOWN_SECTION] = L"PLUGIN ERROR : Section index out of bounds";
int GetErrorDescription(const int inError, const wchar_t*& owszDescription)
{
    // Check that a description exists for the requested error code
    if (gmErrorDescription.count(inError) != 0)
    {
        owszDescription = gmErrorDescription[inError].c_str();
    }
    else
    {
        owszDescription = gmErrorDescription[OPT_PLUGIN_ERROR_UNKNOWN_ERROR].c_str();
        return OPT_PLUGIN_ERROR_UNKNOWN_ERROR;
    }
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.5. GetTestNumber
Description
GetTestNumber returns the number of tests the plugin contains.
Syntax
int GetTestNumber(unsigned int& ounTestNb);
Parameters
Output
ounTestNb: number of tests the plugin contains.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int GetTestNumber(unsigned int& ounTestNb)
{
ounTestNb = 4;
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.6. GetTestDescription
Description
GetTestDescription returns the description or the name of the test. This description appears in the Graphical User
Interface (GUI) and in the report.
Syntax
int GetTestDescription(const unsigned int iunTestIndex, wchar_t*&
owszDescription);
Parameters
Input
iunTestIndex: the identification number of the test for which the description is returned.
Output
owszDescription: the description of the test identified by the number "iunTestIndex".
Note: To write the owszDescription in Japanese on an English operating system, convert the Japanese string
into Unicode escape sequences. For example: gvTests[0].Description = L"日本語"; becomes gvTests[0].Description
= L"\u65E5\u672C\u8A9E";
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_UNKNOWN_TEST 3
std::map<int, std::wstring> gmTestDescription;
gmTestDescription[0] = L"Virtual Image Distance";
gmTestDescription[1] = L"Look Down Angle";
int GetTestDescription(const unsigned int iunTestIndex, wchar_t*& owszDescription)
{
    // Check if the test index is not outside of the test map
    if (iunTestIndex < gmTestDescription.size())
    {
        owszDescription = const_cast<wchar_t*>(gmTestDescription[iunTestIndex].c_str());
    }
    else
    {
        return OPT_PLUGIN_ERROR_UNKNOWN_TEST;
    }
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.7. GetNeededData
Description
GetNeededData asks the plugin which data each test needs. The plugin answers true if the test number "iunTestIndex"
needs the data "iwszParameterName", or false if it does not.
HOA automatically sends the result of the computation to the plugin, but some specific data take more time to be
computed. By default, this kind of data is not computed. If you need it, you have to specify it using the
"GetNeededData" function.
Syntax
int GetNeededData(const wchar_t* iwszParameterName, const unsigned int
iunTestIndex, bool& obNeeded);
Parameters
Input
iwszParameterName: Data name HOA asks the plugin about. See the list of possible values below.
iunTestIndex: Number of the test HOA asks about.
Output
obNeeded:
• True if your test number "iunTestIndex" needs the data "iwszParameterName".
• False if not.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_UNKNOWN_TEST 3
std::map<int, std::wstring> gmTestDescription;
gmTestDescription[0] = L"Virtual Image Distance";
gmTestDescription[1] = L"Look Down Angle";
gmTestDescription[2] = L"Look Over Angle";
gmTestDescription[3] = L"My own test";
std::map<int, std::vector<std::wstring> > gmNeededData;
gmNeededData[3].push_back(L"KeyWord1");
gmNeededData[3].push_back(L"KeyWord2");
int GetNeededData(const wchar_t* iwszParameterName, const unsigned int
iunTestIndex, bool& obNeeded)
{
if (iunTestIndex < gmTestDescription.size())
{
obNeeded = std::find(gmNeededData[iunTestIndex].begin(),
gmNeededData[iunTestIndex].end(), std::wstring(iwszParameterName)) !=
gmNeededData[iunTestIndex].end();
}
else
{
return OPT_PLUGIN_ERROR_UNKNOWN_TEST;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.8. SetDataDouble
Description
SetDataDouble gives the data to the plugin.
Syntax
int SetDataDouble(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const double* ipTable);
Parameters
Input
• iwszParameterName: Data name the "ipTable" contains. See the list of possible values below.
• iunTableSize: Size of the table "ipTable".
• ipTable: A table that contains the data.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
struct DataDouble
{
    unsigned int Size;
    const double* Table; // const to match the ipTable parameter
};
std::map<std::wstring, DataDouble> gmDatas;
int SetDataDouble(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const double* ipTable)
{
if (iunTableSize != 0 && ipTable != NULL)
{
DataDouble theData;
theData.Size = iunTableSize;
theData.Table = ipTable;
std::wstring parameterName(iwszParameterName);
gmDatas[parameterName] = theData;
}
else
{
return OPT_PLUGIN_ERROR_INVALID_DATA;
}
return OPT_PLUGIN_NO_ERROR;
}
Pixel Size
• PixelSizeMonocular3DX: Size of the pixel of the virtual image for one eye. Indices: iEB, jEB, iPGU, jPGU, kMEB, LREye, kCFG, n (n = 0, 1, 2, 3 for the corners, according to the PGU axes: 0: up left, 1: up right, 2: bottom right, 3: bottom left).
• PixelSizeMonocular3DY: Size of the pixel of the virtual image for one eye. Indices: iEB, jEB, iPGU, jPGU, kMEB, LREye, n, kCFG.
• PixelSizeMonocular3DZ: Size of the pixel of the virtual image for one eye. Indices: iEB, jEB, iPGU, jPGU, kMEB, LREye, n, kCFG.
• PixelSizeBinocular3DX: Size of the pixel of the virtual image for both eyes. Indices: iEB, jEB, iPGU, jPGU, kMEB, n, kCFG.
• PixelSizeBinocular3DY: Size of the pixel of the virtual image for both eyes. Indices: iEB, jEB, iPGU, jPGU, kMEB, n, kCFG.
• PixelSizeBinocular3DZ: Size of the pixel of the virtual image for both eyes. Indices: iEB, jEB, iPGU, jPGU, kMEB, n, kCFG.
• PixelSizeAdaptive3DX: Size of the pixel of the virtual image for the adaptive eyebox. Indices: iEB, iPGU, jPGU, kMEB, n, kCFG.
• PixelSizeAdaptive3DY: Size of the pixel of the virtual image for the adaptive eyebox. Indices: iEB, iPGU, jPGU, kMEB, n, kCFG.
• PixelSizeAdaptive3DZ: Size of the pixel of the virtual image for the adaptive eyebox. Indices: iEB, iPGU, jPGU, kMEB, n, kCFG.
Sharpness
• SpotDiameterMonocular (mm): Diameter of the best focus spot for one eye. Indices: iEB, jEB, iPGU, jPGU, kMEB, LREye, kCFG.
• SpotDiameterBinocular (mm): Diameter of the best focus spot for both eyes. Indices: iEB, jEB, iPGU, jPGU, kMEB, kCFG.
• SpotDiameterAdaptive (mm): Diameter of the best focus spot for the adaptive eyebox. Indices: iEB, iPGU, jPGU, kMEB, kCFG.
Configuration
• ConfigurationValidation: Validation of each configuration of the analysis. Index: kCFG.
• NumberOfConfigurations: Number of configurations of the analysis. Index: kCFG.
13.3.3.1.9. SetDataString
Description
SetDataString gives the data to the plugin.
Syntax
int SetDataString(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const wchar_t** ipTable);
Parameters
Input
• iwszParameterName: data name the "ipTable" contains. See the list of possible values below.
• iunTableSize: Size of the table "ipTable".
• ipTable: A table that contains the data.
Return
(int) : returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
struct DataString
{
    unsigned int Size;
    const wchar_t** Table; // const to match the ipTable parameter
};
std::map<std::wstring, DataString> gmDatas;
int SetDataString(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const wchar_t** ipTable)
{
if (iunTableSize != 0 && ipTable != NULL)
{
DataString theData;
theData.Size = iunTableSize;
theData.Table = ipTable;
std::wstring parameterName(iwszParameterName);
gmDatas[parameterName] = theData;
}
else
{
return OPT_PLUGIN_ERROR_INVALID_DATA;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.10. RunTest
Description
RunTest is where you compute your test.
Syntax
int RunTest(const unsigned int iunTestIndex, int (*pProgress)(const double&));
Parameters
Input
iunTestIndex: the identification number of the test to run.
pProgress: you can specify the progression of your test using this function to send a double between 0 and 1 (0 for
0% and 1 for 100%).
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int RunTest(const unsigned int iunTestIndex, int (*pProgress)(const double&))
{
switch (iunTestIndex)
{
case 0:
pProgress(0);
[...]
pProgress(1);
break;
case 1:
[...]
break;
case 2:
[...]
break;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.11. GetReportSectionNumber
Description
GetReportSectionNumber returns the number of sections in the report a test needs.
Syntax
int GetReportSectionNumber(const unsigned int iunTestIndex, unsigned int&
ounReportSectionNb);
Parameters
Input
iunTestIndex: the identification number of the test for which HOA asks the number of sections needed.
Output
ounReportSectionNb: the number of sections in the report the test "iunTestIndex" needs.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int GetReportSectionNumber(const unsigned int iunTestIndex, unsigned int&
ounReportSectionNb)
{
switch (iunTestIndex)
{
case 0:
ounReportSectionNb = 1;
break;
case 1:
ounReportSectionNb = 3;
break;
case 2:
ounReportSectionNb = 1;
break;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.12. GetReportSection
Description
GetReportSection returns the information to create a section in the report. A section is composed of a title and a
table.
Syntax
int GetReportSection(const unsigned int iunTestIndex, const unsigned int
iunSectionIndex, wchar_t*& owszSectionTitle, unsigned int& ounSectionLineNb,
unsigned int& ounSectionRowNb, wchar_t**& owszDescription);
Parameters
Input
• iunTestIndex: the identification number of the test for which HOA asks you information about the section.
• iunSectionIndex: the index of the section for which HOA asks information.
Output
• owszSectionTitle: the title of the section.
• ounSectionLineNb: the number of lines the table must have in this section.
• ounSectionRowNb: the number of rows the table must have in this section.
• owszDescription: the content of the table.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
int GetReportSection(const unsigned int iunTestIndex, const unsigned int
iunSectionIndex, wchar_t*& owszSectionTitle, unsigned int& ounSectionLineNb,
unsigned int& ounSectionRowNb, wchar_t**& owszDescription)
{
switch (iunTestIndex)
{
case 0:
switch (iunSectionIndex)
{
case 0:
{
    owszSectionTitle = L"The title of my first section";
    ounSectionLineNb = 2;
    ounSectionRowNb = 1;
    static wchar_t V1[100] = L"First value";
    static wchar_t V2[100] = L"Second value";
    wchar_t** description = new wchar_t*[2];
    description[0] = V1;
    description[1] = V2;
    owszDescription = description;
    break;
}
case 1:
[...]
break;
}
break;
case 1:
[...]
break;
case 2:
[...]
break;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.13. GetDataDouble
Description
GetDataDouble returns the parameters and their values and displays them in the tree.
Syntax
int GetDataDouble(const unsigned int iunIndex, COptString& owszParameterName,
COptString& owszParameterType, double& odValue) const;
Parameters
Input
iunIndex: index of the parameter to return; unique for each parameter.
Output
owszParameterName: name of the parameter to be displayed in the tree.
owszParameterType: type of the parameter to be displayed in the tree:
• Length (meter)
• Plane angle (radian)
odValue: value of the parameter to be displayed in the tree.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_UNKNOWN_TEST 3
struct DisplayData { std::wstring Name; std::wstring Type; double Value; }; // assumed layout for this example
std::vector<DisplayData> gvDisplayData;
int GetDataDouble(const unsigned int iunIndex, wchar_t*& owszParameterName,
wchar_t*& owszParameterType, double& odValue)
{
    // Check if the parameter index is not outside of the display data vector
    if (iunIndex < gvDisplayData.size())
    {
        owszParameterName = const_cast<wchar_t*>(gvDisplayData[iunIndex].Name.c_str());
        owszParameterType = const_cast<wchar_t*>(gvDisplayData[iunIndex].Type.c_str());
        odValue = gvDisplayData[iunIndex].Value;
    }
    else
    {
        return OPT_PLUGIN_ERROR_UNKNOWN_TEST;
    }
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.1.14. GetDataDoubleNumber
Description
GetDataDoubleNumber returns the number of parameters to be displayed in the tree.
Syntax
int GetDataDoubleNumber(unsigned int& ounDataDoubleNb);
Parameters
Output
ounDataDoubleNb: number of parameters returned.
Example
To get a full example of the API use, download the .Plugin
#define OPT_PLUGIN_NO_ERROR 0
std::vector<DisplayData> gvDisplayData; // same display data vector as in GetDataDouble
OPT_HOA_API int GetDataDoubleNumber(unsigned int& ounDataDoubleNb)
{
    // Return the number of parameters to display in the tree
    ounDataDoubleNb = (unsigned int)gvDisplayData.size();
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.1. GetPluginType
Description
GetPluginType specifies the type of plugin. For a HIW Plugin, the type is "HIW".
Syntax
int GetPluginType(wchar_t*& owszType);
Parameters
Output
owszType: type of plugin (HIW for a HIW Plugin)
Return
(int): returns the identification number of the error if an error occurs, or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
int GetPluginType(wchar_t*& owszType)
{
owszType = L"HIW";
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.2. GetPluginGUID
Description
GetPluginGUID returns a valid and unique "Globally Unique IDentifier" (GUID).
Each plugin must have a different GUID.
Note: In Microsoft Visual Studio, you can use "Tools / Create GUID" to create a valid and unique GUID.
Syntax
int GetPluginGUID(wchar_t*& owszGUID);
Parameters
Output
owszGUID: the valid and unique GUID.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
int GetPluginGUID(wchar_t*& owszGUID)
{
owszGUID = L"{12E5E507-6E81-4E58-BCFA-01D283C22506}";
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.3. GetPluginDescription
Description
GetPluginDescription returns information about the plugin. This description appears in the Graphical User Interface
(GUI) and in the report.
Syntax
int GetPluginDescription(wchar_t*& owszDescription);
Parameters
Output
owszDescription: Description of the plugin.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
int GetPluginDescription(wchar_t*& owszDescription)
{
owszDescription = L"HIW Warping Plugin";
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.4. GetErrorDescription
Description
GetErrorDescription returns the description of an error identified by its identification number.
Syntax
int GetErrorDescription(const int inError, const wchar_t*& owszDescription);
Parameters
Input
inError: the identification number of the error returned in the description. The identification number is different
from 0 and is returned when an error occurs in a function.
Note: Negative error numbers refer to OpenCL error codes. Refer to the OpenCL documentation for further
details.
Output
owszDescription: the description of the error identified by the identification number inError.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_OPENCL_ERROR 1
#define OPT_PLUGIN_ERROR_NO_ALGO 2
#define OPT_PLUGIN_ERROR_UNKNOWN_ALGO 3
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
#define OPT_PLUGIN_ERROR_UNKNOWN_SECTION 5
#define OPT_PLUGIN_ERROR_UNKNOWN_PARAMETER 6
#define OPT_PLUGIN_ERROR_KERNEL_LOADING 7
#define OPT_PLUGIN_ERROR_UNKNOWN_ENV_VAR 8
#define OPT_PLUGIN_ERROR_WARPING_FILE_LOADING 9
#define OPT_PLUGIN_ERROR_MISSING_INPUT_PARAMETERS 10
#define OPT_PLUGIN_ERROR_ROTATION_DRIVER_HEIGHT 11
#define OPT_PLUGIN_ERROR_CPU_ALLOC 12
#define OPT_PLUGIN_ERROR_PLUGIN_NOT_INITIALIZED 13
#define OPT_PLUGIN_ERROR_UNKNOWN_ERROR 14
int GetErrorDescription(const int inError, const wchar_t*& owszDescription)
{
    if (inError == 0)
        return OPT_PLUGIN_NO_ERROR;
    [...]
}
13.3.3.2.5. GetAlgoNumber
Description
GetAlgoNumber returns the number of algorithms the plugin contains.
Syntax
int GetAlgoNumber(unsigned int& ounAlgoNb);
Parameters
Output
ounAlgoNb: number of algorithms the plugin contains.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_NO_ALGO 2
std::vector<std::wstring> gvAlgos; // assumed container of the algorithm names
OPT_HIW_API int GetAlgoNumber(unsigned int& ounAlgoNb)
{
    // Check that the algorithm vector is not empty
    if (!gvAlgos.empty())
    {
        ounAlgoNb = (unsigned int)gvAlgos.size();
    }
    else
    {
        return OPT_PLUGIN_ERROR_NO_ALGO;
    }
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.6. GetAlgoDescription
Description
GetAlgoDescription returns the description or the name of the algorithm. This description appears in the Graphical
User Interface (GUI) and in the report.
Syntax
int GetAlgoDescription(const unsigned int iunAlgoIndex, wchar_t*&
owszDescription);
Parameters
Input
iunAlgoIndex: the identification number of the algorithm whose description is returned:
• 0: nearest algorithm, low quality warping.
• 1: bilinear algorithm, high quality warping.
Output
owszDescription: the description of the algorithm identified by the number "iunAlgoIndex".
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_UNKNOWN_ALGO 3
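The example above stops after the error definitions; here is a minimal completion sketch, following the pattern of the other HIW functions (the gvAlgos vector is an assumption, not part of the delivered plugin):
std::vector<std::wstring> gvAlgos; // assumed: filled by Init() with the algorithm names
int GetAlgoDescription(const unsigned int iunAlgoIndex, wchar_t*& owszDescription)
{
    // Check that the algorithm index is not outside of the algorithm vector
    if (iunAlgoIndex < gvAlgos.size())
    {
        owszDescription = const_cast<wchar_t*>(gvAlgos[iunAlgoIndex].c_str());
    }
    else
    {
        return OPT_PLUGIN_ERROR_UNKNOWN_ALGO;
    }
    return OPT_PLUGIN_NO_ERROR;
}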
13.3.3.2.7. SetDataDouble
Description
SetDataDouble gives the data to the plugin.
Syntax
int SetDataDouble(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const double* ipTable);
Parameters
Input
• iwszParameterName: Data name the "ipTable" contains. See the list of possible values below.
• iunTableSize: Size of the table "ipTable".
• ipTable: A table that contains the data.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
struct DataDouble
{
    unsigned int Size;
    const double* Table; // const to match the ipTable parameter
};
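The example stops at the struct declaration; the function body follows the same pattern as SetDataUnsignedInt below (a sketch, not the delivered code):
std::map<std::wstring, DataDouble> gmDatas;
int SetDataDouble(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const double* ipTable)
{
    if (iunTableSize != 0 && ipTable != NULL)
    {
        DataDouble theData;
        theData.Size = iunTableSize;
        theData.Table = ipTable;
        gmDatas[std::wstring(iwszParameterName)] = theData;
    }
    else
    {
        return OPT_PLUGIN_ERROR_INVALID_DATA;
    }
    return OPT_PLUGIN_NO_ERROR;
}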
13.3.3.2.8. SetDataUnsignedInt
Description
SetDataUnsignedInt gives the data to the plugin.
Syntax
int SetDataUnsignedInt(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const unsigned int* ipTable);
Parameters
Input
• iwszParameterName: Data name the "ipTable" contains. See the list of possible values below.
• iunTableSize: Size of the table "ipTable".
• ipTable: A table that contains the data.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
struct DataUnsignedInt
{
    unsigned int Size;
    const unsigned int* Table; // const to match the ipTable parameter
};
std::map<std::wstring, DataUnsignedInt> gmDatas;
int SetDataUnsignedInt(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const unsigned int* ipTable)
{
if (iunTableSize != 0 && ipTable != NULL)
{
DataUnsignedInt theData;
theData.Size = iunTableSize;
theData.Table = ipTable;
std::wstring parameterName(iwszParameterName);
gmDatas[parameterName] = theData;
}
else
{
return OPT_PLUGIN_ERROR_INVALID_DATA;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.9. SetDataUnsignedChar
Description
SetDataUnsignedChar gives the data to the plugin.
Syntax
int SetDataUnsignedChar(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const unsigned char* ipTable);
Parameters
Input
• iwszParameterName: Data name the "ipTable" contains. See the list of possible values below.
• iunTableSize: Size of the table "ipTable".
• ipTable: A table that contains the data.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
struct DataUnsignedChar
{
    unsigned int Size;
    const unsigned char* Table; // const to match the ipTable parameter
};
std::map<std::wstring, DataUnsignedChar> gmDatas;
int SetDataUnsignedChar(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const unsigned char* ipTable)
{
if (iunTableSize != 0 && ipTable != NULL)
{
DataUnsignedChar theData;
theData.Size = iunTableSize;
theData.Table = ipTable;
std::wstring parameterName(iwszParameterName);
gmDatas[parameterName] = theData;
}
else
{
return OPT_PLUGIN_ERROR_INVALID_DATA;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.10. SetDataString
Description
SetDataString gives the data to the plugin.
Syntax
int SetDataString(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const wchar_t** ipTable);
Parameters
Input
• iwszParameterName: data name the "ipTable" contains. See the list of possible values below.
• iunTableSize: Size of the table "ipTable".
• ipTable: A table that contains the data.
Return
(int) : returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_ERROR_INVALID_DATA 4
struct DataString
{
    unsigned int Size;
    const wchar_t** Table; // const to match the ipTable parameter
};
std::map<std::wstring, DataString> gmDatas;
int SetDataString(const wchar_t* iwszParameterName, const unsigned int
iunTableSize, const wchar_t** ipTable)
{
if (iunTableSize != 0 && ipTable != NULL)
{
DataString theData;
theData.Size = iunTableSize;
theData.Table = ipTable;
std::wstring parameterName(iwszParameterName);
gmDatas[parameterName] = theData;
}
else
{
return OPT_PLUGIN_ERROR_INVALID_DATA;
}
return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.11. Init
Description
Init initializes the HIW plugin and should be called once before any other HIW API call.
Syntax
OPT_HIW_API int Init();
Parameters
Return
(int) : returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_INIT_ERROR 1
int Init()
{
    if (!InitErrorDescription(...)) return OPT_PLUGIN_INIT_ERROR;
    if (!InitAlgorithms(...)) return OPT_PLUGIN_INIT_ERROR;
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.12. Run
Description
Run executes the requested algorithm computation with the given inputs and writes the result into the output
array parameter "OutputImage".
Syntax
OPT_HIW_API int Run(const unsigned int iunAlgoIndex);
Parameters
Input
iunAlgoIndex: Algorithm to be used for the warping computation.
• 0: nearest algorithm, low quality warping.
• 1: bilinear algorithm, high quality warping.
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_INIT_ERROR 1
struct DataUnsignedInt
{
    unsigned int ValuesSize;     // size of uint table
    const unsigned int* Values;  // pointer to uint table
};
OPT_HIW_API int Run(const unsigned int iunAlgoIndex)
{
    [...]
    return OPT_PLUGIN_NO_ERROR;
}
13.3.3.2.13. Release
Description
Release deallocates all resources allocated by the HIW plugin and should be called once before freeing the library.
Syntax
OPT_HIW_API int Release();
Parameters
Return
(int): returns the identification number of the error if an error occurs or 0 if no error occurs.
Example
#define OPT_PLUGIN_NO_ERROR 0
#define OPT_PLUGIN_RELEASE_ERROR 2
int Release()
{
    if (!ClearAlgorithms(...)) return OPT_PLUGIN_RELEASE_ERROR;
    return OPT_PLUGIN_NO_ERROR;
}
To debug a plugin:
1. Open Visual Studio.
2. Load the source code of your plugin.
3. Set Visual Studio to Debug x64.
4. In the Build tab, click Build Solution to create the *.dll file.
5. Copy the *.dll file and its *.pdb file to C:\ProgramData\Ansys\v2XX\Optical Products\Plugins.
Note: The *.pdb file is used by Visual Studio to link the debugger to the source code and enable debugging.
6. Start Speos.
7. Load the project containing the HUD Optical Analysis (HOA).
8. In Visual Studio, click Debug, Attach to Process.
The Attach to Process window opens.
9. In Attach to, click Select.
Optimization is an experimental process that helps find the best solution for an optical system. It mostly consists of
trying to achieve an expected result through parameter variation.
Note: If you need more information on Ansys Workbench functioning and behavior, refer to Ansys Workbench
User's Guide.
Ansys Workbench
Ansys Workbench is the central software environment for performing mechanical, thermal, electromagnetic and
optical analyses with Ansys engineering simulation solutions.
Ansys Workbench uses building blocks called systems. These systems make up a flowchart-like diagram that represents
the data flow through your project.
Each system is a block of one or more components called cells, which represent the sequential steps necessary for
the specific type of analysis.
One or several systems can be added to your analysis. Then, you can decide to work with a standalone system or
make systems interact with each other.
To create a project, a system must be dragged and dropped in the Project Schematic view.
Speos in Workbench
Speos building block or system is directly integrated into Ansys Workbench and can be used to automate simulation
processes, understand and optimize a design or create multi-physics analyses.
This system creates a bridge between Speos and Ansys Workbench allowing the software to exchange input and
output data.
To perform a Speos analysis in Ansys Workbench, you must work through the cells of the Speos system to define
inputs, specify project parameters, run simulations, and investigate the results.
Related tasks
Creating a Speos system in Ansys Workbench on page 725
This page shows the main steps to create a Speos system in Ansys Workbench. The Speos system in Ansys Workbench
allows you to create a link between the two applications. This link/bridge allows the applications to exchange input
and output data.
Related information
Optimization Tools on page 730
The following section introduces how you can optimize Speos results with Ansys Workbench.
Note: If you need more information on Ansys Workbench functioning and behavior, refer to Ansys Workbench
User's Guide.
Note: As a project cannot mix ACIS data and Parasolid data, a Workbench project created in ACIS should
be recreated after the *.scdoc file conversion to Parasolid.
For more information, refer to Parasolid Must-Know.
CAUTION: If you launch Workbench (not Workbench with Speos), the Speos environment will not be
loaded when you open a SpaceClaim session from the geometry system or cell.
2. Before working, make sure Workbench with Speos runs in foreground, and the Geometry Editor is set to SpaceClaim
Direct Modeler:
a) Select the Tools tab and click Options.
b) Select the Solution Process section.
c) In the Default Update Option drop-down list, if not defined, select Run in Foreground.
d) Now, select the Geometry Import section.
e) In the Preferred Geometry Editor drop-down list, if not defined, select SpaceClaim Direct Modeler.
f) Click OK.
3. If the Speos extension is not displayed in the Toolbox, click Extensions > Manage Extensions...
4. From the list, select Speos and click Close to import the extension in Workbench.
5. Drag and drop Speos extension from the Toolbox to the Project Schematic pane to create a standalone project.
Note: Project dependencies are not preserved when duplicating a Speos block.
Note: Publish Parameters are not compatible with the control points positions (%) of the Light Guide
feature.
11. In Speos, define the Speos output parameters to import in Ansys Workbench:
a) If not already done, create a simulation and Compute it.
b) Open the result and define Measures.
c) Select File > Export template.
d) In Speos, use the template as XML template in the sensor that generated the result.
This will define the measures as output parameters.
e) Save the current project.
All output data/parameters (measure performed on the simulation results) are automatically imported in
Ansys Workbench.
Note: Once the Publish Parameters are defined, you can close Speos or leave it open and running
in the background.
12. In Ansys Workbench, right-click Simulation Task and click Generate Parameters to import Speos input
parameters.
The input parameters are imported into Ansys Workbench. The loop is created between the two applications.
The project is created; you can now interact and adjust parameters directly from Workbench to observe result
variation.
Tip: If you need to refresh or modify input data, repeat the procedure from step 6. Modify input publish
parameters and regenerate the import.
Related information
Speos Parameters' Variation on page 730
This section introduces the Table of Design Points optimization tool and describes how to manually modify input
parameters to observe result variation.
Speos Direct Optimization on page 734
Ansys DesignXplorer uses optimization methods such as Direct Optimization to study and optimize an Speos design.
Note: When defining the project, make sure to create named selections on nominal project (Block A).
These named selections will be replaced by deformed geometries in the block (Block C) after mechanical
block.
Note: To have detailed and extensive information on Workbench optimization processes and tools, refer
to Ansys Workbench User's Guide.
Note: If you need more information on Ansys Workbench table of design points, refer to Ansys Workbench
User's Guide.
The table of design points is directly accessible from the Parameters Set tab.
The Table of Design Points allows you to manually modify input parameters to automatically regenerate output
results.
A design point represents a configuration with its set of values.
You can create various design points and launch simulations for each one of these configurations through Workbench
to observe result variation.
Ansys Workbench triggers Speos simulation when you update a design point and displays the corresponding results
in the current session.
Inputs and outputs are displayed side by side in the table of design points.
Related tasks
Varying Speos parameters in Ansys Workbench on page 732
This page describes how to manually drive and vary input parameters thanks to Ansys Workbench Table of Design
Points.
Related information
Speos in Ansys Workbench on page 723
This page quickly introduces Ansys Workbench, the tool used by Speos to perform result and system optimization.
Note: If you need more information on Ansys Workbench table of design points, refer to Ansys Workbench
User's Guide.
2. In the Table of Design Points, in case of a large number of Design Points, make sure to deactivate the Retain
option. Otherwise, you may face increasing update times for the Design Points.
3. In the Project tab, click Update Project to trigger the import of Speos output values.
Note: By default, output parameters are not updated. A lightning bolt indicates when a design point
needs an update.
4. Change or create new input parameters by editing the values to generate new output values:
a) If you only want to modify values, double-click the design point cell and modify an input (increase the source
flux for example).
b) If you want to add a new design point, type a value in a cell of the table's last row.
A lightning bolt is displayed to indicate that output values need to be updated for the newly created point
or for modified design points.
Tip: Right-click a row to be able to duplicate/delete a design point or to show and edit the Update
Order column. The Update Order column allows you to number the design points to prioritize their
update by creating a running order.
CAUTION: We recommend you not to change the units in the Table of Design Points. Units are not
converted alongside in Speos.
• To update a single design point, right-click the point and click Update Selected Design Point.
• To update designated design points, hold the CTRL key and select the row you want to update, then right-click
and click Update Selected Design Points.
• To update all design points at once, click Update All Design Points.
Ansys Workbench triggers Speos simulations to compute the design points and displays the corresponding output
parameters values in the Table of Design points.
Note: Units displayed in the Output Parameters may sometimes appear inconsistent with the unit used
for simulation (for example, "radians" instead of "degrees").
Related information
Table of Design Points on page 730
This page introduces the Table of Design Points optimization tool.
Speos in Ansys Workbench on page 723
This page quickly introduces Ansys Workbench, the tool used by Speos to perform result and system optimization.
Note: If you need more information about Direct Optimization in Ansys DesignXplorer, you can refer to
Ansys DesignXplorer User's Guide.
Ansys DesignXplorer is a powerful application in Ansys Workbench for studying and optimizing a Speos design
starting from Speos input and output parameters.
The application uses a deterministic method based on Design of Experiments (DOE) and several optimization methods
including Direct Optimization.
Direct Optimization is a goal-driven optimization system that generates arrays of design points and finds out solutions
candidates. Responses can be studied, quantified, and graphed in Ansys Workbench.
Presentation
Direct Optimization allows you to automate the search of the best candidate values to reach a desired result.
To create a Direct Optimization system, drag the system from the Toolbox and drop it under the Speos system so
that the parameters are automatically linked.
Optimization Process
A target or an action to be performed on the results is defined. You can ask the optimizer to seek a target (define a
target value to reach), maximize or minimize a result.
To reduce the optimizer field of action, you can also define constraints like an upper and/or a lower bound to limit
the field of measure.
Note: Defining constraints can help gain time as it reduces the possible candidate values to test.
The system will then basically compute and test out a large number of configurations and parameters set to find
the best candidate values to reach the defined target.
To compute these configurations, the optimizer creates design points with specific parameters set.
At the end of the computation, the best candidate values to reach the expected result are selected.
Related tasks
Creating a Direct Optimization in Ansys Workbench on page 736
This page describes how to use Direct Optimization that allows you to automate the search of the best candidate
values to reach a desired result.
Related information
Speos in Ansys Workbench on page 723
This page quickly introduces Ansys Workbench, the tool used by Speos to perform result and system optimization.
Speos Parameters' Variation on page 730
This section introduces the Table of Design Points optimization tool and describes how to manually modify input
parameters to observe result variation.
Note: If you need more information about Direct Optimization in Ansys DesignXplorer, you can refer to
Ansys DesignXplorer User's Guide.
4. To define the objective and constraints to apply to the created targets, click a target.
Objectives and constraints appear in the Table of Schematic Optimization.
5. From the Objective Type drop-down list, define the action to perform on an output parameter:
• Select No objective if you do not want to specify any action to be performed on the current target.
• Select Minimize to achieve the lowest possible value for the output parameter.
• Select Maximize to achieve the highest possible value for the output parameter.
• Select Seek Target to achieve an output parameter value that is close to the objective target.
6. From the Constraints Type drop-down list, define the constraint to be applied to the output parameter:
• Select No Constraint if you do not want to specify any constraint.
• Select Values=Bound to specify a lower bound and obtain an output parameter value that is close to that
bound.
• Select Values >= Lower Bound to specify a lower bound and obtain an output parameter that is above that
bound.
• Select Values <= Upper Bound to specify an upper bound and obtain an output parameter below that bound.
• Select Lower Bound <= Values <= Upper Bound to specify lower and upper bounds and obtain an output
parameter within the defined range.
Related information
Speos in Ansys Workbench on page 723
This page quickly introduces Ansys Workbench, the tool used by Speos to perform result and system optimization.
Speos Direct Optimization with Ansys DesignXplorer on page 734
Direct Optimization allows you to automate the optimization of a system.
Note: The Optimization feature is in BETA mode for the current release.
Types of Optimization
The Optimization feature provides you with three optimization modes:
• Random Search algorithm is a global optimization method based on random sampling.
• Design of Experiment allows you to strictly define the values of the variables you define through the use of an
Excel File based on the selected variables.
• Plugin allows you to use an optimization algorithm you created yourself to add more flexibility in your analysis.
Types of Variables
The Optimization feature provides you with three variable types according to where they come from.
• Simulation Variable
The Simulation Variables correspond to the Speos Light Simulation parameters, that is, the numerical parameters
of the Speos features from the Light Simulation tab used in the Speos simulation you select for the optimization.
• Design Variable
The Design Variables correspond to the Optical Part Design parameters, that is, the numerical parameters of the
Optical Part Design feature geometries from the Design tab used in the Speos simulation you select for the
optimization.
• Document Variable
Document Variables correspond to the input parameters that you can create into the SpaceClaim Groups panel
(Driving Dimension, Script Parameter).
Target
Targets correspond to the output elements on which you want to measure/evaluate/assess the impact of the defined
variables.
Basic Workflow
1. Create a Direct or an Inverse Simulation to analyze your optical system.
2. Run the simulation to generate the results.
3. In the XMP result, define measures that you want to use as targets.
4. Create the Optimization in Speos.
5. Add variables.
Merit Function
The Merit function allows you to define the convergence process of the optimization.
• Minimize allows you to get the simulation measurement as close as possible to the target value.
• Maximize allows you to get the simulation measurement as far as possible from the target value.
The Merit function formula is based on the following quantities:
• Target: Optimization target
• Measure: Measured value of the target
• Weight: level of importance of the target according to the other targets
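As an illustration of how Target, Measure and Weight combine, here is a minimal sketch assuming a common weighted squared-error merit formulation; the actual Speos formula may differ:
#include <vector>
// Illustrative only: weighted squared error between measures and targets.
struct OptimTarget { double Target; double Measure; double Weight; };
double MeritFunction(const std::vector<OptimTarget>& targets)
{
    double merit = 0.0;
    for (const OptimTarget& t : targets)
        merit += t.Weight * (t.Measure - t.Target) * (t.Measure - t.Target);
    return merit; // Minimize drives each Measure toward its Target
}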
Mode
Design of Experiment
Design of Experiment allows you to strictly define the values of the variables you define through an Excel File
generated after the variables selection.
Unlike the Random Search and the Plugin modes, the Design of Experiment is not an optimization algorithm: you
do not have to define a Merit function or Stop Conditions, because the run only depends on the number of variable
values you defined in the Excel file.
Plugin
The Plugin mode allows you to use an optimization algorithm you created yourself.
The Plugin mode is dedicated to advanced users who know how to create an optimization algorithm.
For more information on the Plugin mode, refer to Optimization Plugin. The Optimization Plugin chapter will help
you understand how to create a plugin algorithm with a complete optimization plugin example.
Variables
Simulation Variables
The Simulation Variables correspond to the Speos Light Simulation parameters, that is, the numerical parameters
of the Speos features from the Light Simulation tab used in the Speos simulation you select for the optimization.
Design Variables
The Design Variables correspond to the Optical Part Design parameters, that is, the numerical parameters of the
Optical Part Design features from the Design tab used in the Speos simulation you select for the optimization.
Document Variables
Document Variables correspond to the input parameters that you can create into the SpaceClaim Groups panel.
They can be:
• A Driving Dimension which is a parameter that has effect on the size or the position of the element.
• A Script Parameter that you created and which is used into a SpaceClaim script.
The classical workflow to create Script Parameters would be:
1. Create a script group in the Groups panel.
2. Write down the script.
You can use the script Record tool, which allows you to record parameters as a script in the script editor window.
3. Create and name Script Parameters in the Groups panel.
4. Edit the script to replace the parameters with the script parameters.
For a complete understanding of how to create Driving Dimensions and Script Parameters, refer to the SpaceClaim
User Guide:
• Refer to Using Driving Dimensions in Ansys to understand the different ways of creating driving dimensions.
• Refer to Creating a Parameter Group to understand how to add and define a script parameter to a script.
Note: Note that unlike Simulation Variables and Design Variables, the Document Variables appearing in the
Document Variables list do not necessarily impact the simulation. Therefore, make sure to add Document
Variables that are related to your optical system or your simulation.
Example: a Driving Dimension modifies the size of a geometry that is not included in the simulation. Setting
this Driving Dimension as a Document Variable will have no impact on the optimization.
Targets
Targets correspond to the output elements on which you want to evaluate the impact of the defined variables.
These targets correspond to measures you created in the XMP results generated from the simulation that you want to include in the optimization. These measures then correspond to your initial measures, generated with the configuration of the simulation before the optimization.
Weight: the weight corresponds to the level of importance of the target relative to the other targets. The higher the number, the greater the weight of the target.
To create an optimization:
1. Define the Merit function:
   • Select Minimize to get the simulation measurement as close as possible to the target value.
   • Select Maximize to get the simulation measurement as far as possible from the target value.
2. Define whether you want to Keep intermediate results, meaning that the result of each iteration is saved into the SPEOS output files folder.
3. In the Stop Conditions section, define how to stop the optimization:
• If you want the optimization to stop after a certain time spent, set Use maximum time to True and define the
Maximum time in seconds.
Important: If the maximum time is reached while the last simulation run has not yet finished, the optimization process finishes that simulation before stopping.
• If you want the optimization to stop after a certain number of simulations run, set Use maximum number of
simulations to True and define the Maximum number of simulations.
• You can set both stop conditions to True. In this case, the first stop condition to be reached will end the
optimization.
Now you can add and define the different Simulation, Design, Document variables and Targets.
Note: For more information on how to create a XML plugin configuration file, refer to Configuring the XML
Optimizer Plugin Configuration File.
Now you can add and define the different Simulation, Design, Document variables and Targets.
To add variables according to their type, refer to Adding and Defining Simulation Variables, Design Variables and
Document Variables.
2. Once variables are defined, in the Excel file path drop-down list, click Create Excel.
Important: You must first add the variables in the Variables panel. Otherwise, the generated Excel file will not contain the table of variable values.
a) In the Iteration number column, insert a line for each iteration to run and number them.
b) In the following variable columns, define for each iteration the value of the variables.
Note: The Min value and Max value of the variables in the Variables panel do not need to be set, as the optimization process takes the values defined in the Excel file.
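For illustration, with three iterations and two hypothetical variables, the filled table could look as follows:

Iteration number    Source flux (lm)    Lens radius (mm)
1                   900                 20
2                   1000                22
3                   1100                24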
The list shows the available numerical parameters that you can add as variables; they come from the Speos Light Simulation features included in the simulation.
3. Check the parameters to vary.
Tip: In case of a simulation containing a large number of numerical parameters, you can use the filter
tool to help you quickly find the parameters.
Now you can add other variable types, such as Design variables or Document variables, or directly add the optimization Targets.
The list shows the available numerical parameters that you can add as variables; they come from the Optical Part Design features included in the simulation.
3. Check the parameters to vary.
Tip: In case of a simulation containing a large number of numerical parameters, you can use the filter
tool to help you quickly find the parameters.
Now you can add other variable types, such as Simulation variables or Document variables, or directly add the optimization Targets.
Note: We recommend that you add a simulation to the optimization. You can however define document variables even if no simulation is selected yet.
1. In the Optimization definition, in the Variables panel, select the Document variables tab.
Note: Unlike Simulation Variables and Design Variables, the Document Variables appearing in the
Document Variables list do not necessarily impact the simulation. Therefore, make sure to add Document
Variables that are related to your optical system or your simulation.
The list shows the available numerical parameters that you can add as variables; they come from the Driving Dimension parameters and/or Script parameters created.
3. Check the parameters to vary.
Tip: In case of a large number of numerical parameters, you can use the filter tool to help you quickly
find the parameters.
Now you can add other variable types, such as Design variables or Simulation variables, or directly add the optimization Targets.
2. In the Optimization definition, in the Variables panel, select the Targets tab.
3. Click to open the Targets list.
The list shows the available measures that you can add as targets; they come from the results of the simulation.
4. Check the measures to target.
Tip: In case of a large number of measures, you can use the filter tool to help you quickly find them.
The weight corresponds to the level of importance of the target relative to the other targets. The higher the number, the greater the weight of the target.
Now you can add the variable types, if not yet done: Design variables, Simulation variables or Document variables.
1. In the Simulation tree, right-click the optimization feature and select Compute or GPU Compute to
run the optimization.
2. At the end of the optimization process, Speos asks you whether you want to replace the initial variable values with the best solution found.
• Click Yes to replace.
• Click No to keep the initial values.
An HTML report is generated at the end of the optimization process under the Optimization feature node.
In case of a Random search optimization, if you set Keep intermediate results to True, then a folder is created in
the SPEOS output files folder with the name of the Optimization feature containing all iteration results.
Note: The following HTML report corresponds to the report for a Random Search optimization.
Time Analysis
The Time Analysis section provides the date and time of initialization/termination and the duration of the optimization.
Variables
The Variables section sums up all the variables used in the simulation as well as their Min value and Max value.
Targets
The Targets section sums up all the selected targets, their target values, and their weights.
Parameters
The Parameters section sums up the parameters defined according to the selected Optimization mode.
In case of a Random Search optimization, the Merit function is detailed.
Evaluations
Evaluations lists for each iteration all the variable values used, the values found for each target and the Merit function
value.
The best solution is highlighted in green.
In case of a Random Search optimization, if Keep intermediate results is set to True, the HTML report provides a direct link to download the XMP results.
Results
The Results section repeats the best solution found (highlighted in green in the Evaluations section).
Note: The Plugin mode is dedicated to advanced users who know how to create an optimization algorithm.
Note: You can use another development environment. If you do so, make sure to use the .NET Framework 4.7.2 tools.
3. In the Create a new project window, select Class Library (.NET Framework).
The Class Library (.NET Framework) lets you create a *.dll file using the .NET framework.
4. Click Next.
5. In the Configure your new project, fill in the different fields:
Note: For the plugin example, the project and solution will have the same name: OptimPluginSample.
6. Click Create.
The project is created with a default class in it.
namespace OptimPluginSample
{
    public class PluginSample
    {
        private readonly string _arguments;

        public PluginSample(string arguments)
        {
            _arguments = arguments;
        }

        /// <summary>
        /// Callback called upon starting the optimization loop (optional)
        /// </summary>
        public void StartRun()
        {
        }

        /// <summary>
        /// Callback called upon starting a new iteration in the optimization loop
        /// </summary>
        /// <param name="iteration">the iteration that is starting</param>
        /// <returns>whether we should continue on this iteration or not</returns>
        public bool StartIteration(int iteration)
        {
            return false;
        }

        /// <summary>
        /// Callback called upon ending an iteration (optional)
        /// </summary>
        /// <param name="iteration">the iteration that is ending</param>
        public void EndIteration(int iteration)
        {
        }

        /// <summary>
        /// Called to return the new value to set for this parameter
        /// </summary>
        public double GetNewValue(string parameterId)
        {
            return 0;
        }

        /// <summary>
        /// Called to register a new variable as input
        /// </summary>
        public void AddVariable(string parameterId, string parameterUserName, double startingValue, double min, double max)
        {
        }

        /// <summary>
        /// Called to register a new target as output (optional)
        /// </summary>
        public void AddTarget(string parameterId, string parameterUserName, double startingValue, double targetValue, double weight)
        {
        }

        /// <summary>
        /// Callback called after the simulation's update to update the values of the targets
        /// </summary>
        /// <param name="parameterId"/>
        /// <param name="value"/>
        public void SetMeasures(string parameterId, double value)
        {
        }

        /// <summary>
        /// Callback called to inform the simulation details
        /// </summary>
        /// <param name="simulationName"/>
        /// <param name="reportPath"/>
        public void SetSimulationInfos(string simulationName, string reportPath)
        {
        }
    }
}
14.2.8.3.2. Reading Input Arguments for the Incrementation Value and Iteration Number
1. Reading input arguments for the increment value and the number of iterations
2. Registering the variables to increment
3. Registering the targets to have in the report
4. Implementing the incrementation of the variables
5. Implementing the reception of the new measures
// check that the argument list represents the number of arguments that we need
if (argumentsList.Length != 2)
{
    throw new ArgumentException("We should only have two arguments");
}
Note: The parameterId is a unique string that identifies the parameter; it has no other purpose.
Note: The min and max parameters are not used in the sample, but they represent the values displayed in
the definition panel, if you need them in your implementation.
// dictionary containing a map from a parameter unique id to its display name, for readability in a report
private IDictionary<string, string> _variableNames = new Dictionary<string, string>();
// dictionary containing a map from a parameter unique id to its current value, to send to Speos
private IDictionary<string, double> _variableValues = new Dictionary<string, double>();

/// <summary>
/// Called to register a new variable as input
/// </summary>
public void AddVariable(string parameterId, string parameterUserName, double startingValue, double min, double max)
{
    // add the parameter id to the two dictionaries
    _variableNames.Add(parameterId, parameterUserName);
    _variableValues.Add(parameterId, startingValue);
}
Note: weight is not used in the sample, but it is the same value that appears in the definition panel.
/// <summary>
/// Called to register a new target as output (optional)
/// </summary>
public void AddTarget(string parameterId, string parameterUserName, double startingValue, double targetValue, double weight)
{
    _targetNames.Add(parameterId, parameterUserName);
    _targetValues.Add(parameterId, startingValue);
}

/// <summary>
/// Callback called upon starting a new iteration in the optimization loop
/// </summary>
/// <param name="iteration">the iteration that is starting</param>
/// <returns>whether we should continue on this iteration or not</returns>
public bool StartIteration(int iteration)
{
    // copy the keys first: the dictionary cannot be modified while it is enumerated
    var keys = _variableValues.Keys.ToArray();
    foreach (var key in keys)
    {
        _variableValues[key] = _variableValues[key] + _incrementValue;
    }
    // ....
/// <summary>
/// Called to return the new value to set for this parameter
/// </summary>
public double GetNewValue(string parameterId)
{
    return _variableValues[parameterId];
}
/// ....
/// <summary>
/// Callback called upon ending an iteration (optional)
/// </summary>
/// <param name="iteration">the iteration that is ending</param>
public void EndIteration(int iteration)
{
    // copying the list of values for this iteration
    _iterationResults.Add(_targetValues.ToDictionary(entry => entry.Key, entry => entry.Value));
}
/// ....
/// <summary>
/// Callback called after the simulation's update to update the values of the targets
/// </summary>
/// <param name="parameterId"></param>
/// <param name="value"></param>
public void SetMeasures(string parameterId, double value)
{
    _targetValues[parameterId] = value;
}
Note: The report is an HTML file. You can write other files as well, but they will not be displayed in the Speos tree.
// ...
/// <summary>
/// Callback called to inform the simulation details
/// </summary>
/// <param name="simulationName"></param>
/// <param name="reportPath"></param>
public void SetSimulationInfos(string simulationName, string reportPath)
{
    _reportPath = reportPath ?? string.Empty;
    _simulationName = simulationName ?? string.Empty;
}
The WriteReport method creates an HTML table with the values of the targets for each iteration.
private void WriteReport()
{
    var stringBuilder = new StringBuilder();
    var parameterIdOrder = new string[_targetNames.Count];
    // .... (the opening <html><body><table> tags are written before this excerpt)
    stringBuilder.AppendLine("<tr><th>Iteration Number</th>");
    var idx = 0;
    foreach (var names in _targetNames)
    {
        stringBuilder.AppendLine($"<th>{names.Value}</th>");
        parameterIdOrder[idx] = names.Key;
        idx++;
    }
    stringBuilder.AppendLine("</tr>");
    idx = 0;
    foreach (var iterationResult in _iterationResults)
    {
        stringBuilder.AppendLine("<tr>");
        stringBuilder.AppendLine($"<td>{idx}</td>");
        foreach (var id in parameterIdOrder)
        {
            stringBuilder.AppendLine($"<td>{iterationResult[id]}</td>");
        }
        stringBuilder.AppendLine("</tr>");
        idx++;
    }
    stringBuilder.AppendLine("</table></body></html>");
    File.WriteAllText(_reportPath, stringBuilder.ToString());
}
/// <summary>
/// Callback called upon starting a new iteration in the optimization loop
/// </summary>
/// <param name="iteration">the iteration that is starting</param>
/// <returns>whether we should continue on this iteration or not</returns>
public bool StartIteration(int iteration)
{
    if (iteration >= _iterationNumber)
    {
        // Start : new line to write the report
        WriteReport();
        // End : new line to write the report
        return false;
    }
    var keys = _variableValues.Keys.ToArray();
    foreach (var key in keys)
    {
        _variableValues[key] = _variableValues[key] + _incrementValue;
    }
    return true;
}
The complete plugin sample is reproduced below.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
namespace OptimPluginSample
{
public class PluginSample
{
        private double _incrementValue = 0;
        private int _iterationNumber = 0;
        // .... (other fields and the constructor are elided; argument parsing throws on failure:
        // throw new ArgumentException("first argument cannot be read as a double for increment value"); )

        /// <summary>
        /// Callback called upon starting the optimization loop (optional)
        /// </summary>
        public void StartRun()
        {
            // ....
        }
        /// <summary>
        /// Callback called upon starting a new iteration in the optimization loop
        /// </summary>
        /// <param name="iteration">the iteration that is starting</param>
        /// <returns>whether we should continue on this iteration or not</returns>
        public bool StartIteration(int iteration)
        {
            if (iteration >= _iterationNumber)
            {
                WriteReport();
                return false;
            }
            var keys = _variableValues.Keys.ToArray();
            foreach (var key in keys)
            {
                _variableValues[key] = _variableValues[key] + _incrementValue;
            }
            return true;
        }
        /// <summary>
        /// Called to return the new value to set for this parameter
        /// </summary>
        public double GetNewValue(string parameterId)
        {
            return _variableValues[parameterId];
        }

        /// <summary>
        /// Called to register a new variable as input
        /// </summary>
        public void AddVariable(string parameterId, string parameterUserName, double startingValue, double min, double max)
        {
            _variableNames.Add(parameterId, parameterUserName);
            _variableValues.Add(parameterId, startingValue);
        }

        /// <summary>
        /// Called to register a new target as output (optional)
        /// </summary>
        public void AddTarget(string parameterId, string parameterUserName, double startingValue, double targetValue, double weight)
        {
            _targetNames.Add(parameterId, parameterUserName);
            _targetValues.Add(parameterId, startingValue);
        }

        /// <summary>
        /// Callback called after the simulation's update to update the values of the targets
        /// </summary>
        /// <param name="parameterId"></param>
        /// <param name="value"></param>
        public void SetMeasures(string parameterId, double value)
        {
            _targetValues[parameterId] = value;
        }

        /// <summary>
        /// Callback called to inform the simulation details
        /// </summary>
        /// <param name="simulationName"></param>
        /// <param name="reportPath"></param>
        public void SetSimulationInfos(string simulationName, string reportPath)
        {
            _reportPath = reportPath ?? string.Empty;
            _simulationName = simulationName ?? string.Empty;
        }
stringBuilder.AppendLine("<tr><th>Iteration Number</th>");
var idx = 0;
foreach(var names in _targetNames)
{
stringBuilder.AppendLine($"<th>{names.Value}</th>");
parameterIdOrder[idx] = names.Key;
idx++;
}
stringBuilder.AppendLine("<tr/>");
idx = 0;
foreach (var iterationResult in _iterationResults)
{
stringBuilder.AppendLine("<tr>");
stringBuilder.AppendLine($"<td>{idx}</td>");
foreach(var id in parameterIdOrder)
{
stringBuilder.AppendLine($"<td>{iterationResult[id]}</td>");
}
stringBuilder.AppendLine("<tr/>");
idx++;
}
stringBuilder.AppendLine("</table></body></html>");
File.WriteAllText(_reportPath, stringBuilder.ToString());
}
}
}
The *.dll is created in the folder you specified when creating the project.
Arguments
CSV Data
As in the plugin example, you can use CSV line data:
<OptimizerPluginConfig>
<PluginPath>...</PluginPath>
<PluginClass>...</PluginClass>
<Arguments>value1,value2,value3</Arguments>
</OptimizerPluginConfig>
XML String
If you want to have an XML string as input, you must integrate it inside a CDATA tag:
<OptimizerPluginConfig>
<PluginPath>...</PluginPath>
<PluginClass>...</PluginClass>
<Arguments><![CDATA[<test><test2>value</test2><test3>other value</test3></test>]]></Arguments>
</OptimizerPluginConfig>
JSON
You can apply JSON:
<OptimizerPluginConfig>
<PluginPath>...</PluginPath>
<PluginClass>...</PluginClass>
<Arguments>{"test2": "value", "test3": "other value"}</Arguments>
</OptimizerPluginConfig>
Note: If you need more information on optiSLang's functioning and behavior, refer to the optiSLang User's Guide.
optiSLang
optiSLang is a software platform for Computer-Aided Engineering-based (CAE) optimization in virtual prototyping.
Based on design variations or measurement and observation points, you can perform efficient variation analyses
with minimal user input and few solver calls.
It supports you with:
• Calibration of virtual models to physical tests
• Analysis of parameter sensitivity and importance
• Metamodeling
• Optimization of product performance
• Quantification of product robustness and reliability, also referred to as Uncertainty Quantification (UQ)
• Robust Design Optimization (RDO), also referred to as Design for Six Sigma (DFSS)
optiSLang also includes a powerful simulation workflow environment. The software is the perfect tool for simulation-driven workflow generation using parametric models for sensitivity analysis, optimization, and robustness evaluation.
• Sensitivity analysis helps you understand the design, focus on key parameters, check your response variation’s
forecast quality and automatically generate your optimum metamodel.
• Optimization helps improve your design performance.
• Robustness evaluation helps you verify the design robustness regarding scattering material parameters, production
tolerances and varying environmental conditions.
Presentation
From version 2023 R2, a wizard-based Speos Integration has been integrated into Ansys optiSLang.
The Solver wizard helps you easily:
• connect to your Speos project
• define the parameters and parameter ranges for the variation analysis
• define the Speos simulations to be exported and solved
• create the workflow for an automated Speos simulation.
The automated Speos workflow consists of 3 nodes:
• the Ansys Speos node (input node), which updates the geometry based on the parameter values and exports the Speos simulation file
• the Ansys Speos Core node (solver node), which launches and processes the simulation
• the Ansys Speos Report Reader node (output node), which extracts response values from the Speos simulation report
Once you have finished setting up the Speos workflow, you can define the criteria in the parametric system and follow up to set up the variation analysis (for example: sensitivity analysis, optimization, etc.) using the available wizards.
Generic Workflow
1. In Speos, define the parameters to use in optiSLang using Publish Parameters.
2. In optiSLang, drag the Solver Wizard and drop it onto the Scenery.
3. Select the Ansys Speos solver paradigm and open a *.scdocx project.
4. Define the Ansys Speos solver node.
5. Define the Ansys Speos Core solver node.
6. Define the Ansys Speos Report Reader solver node.
General Description
Automation refers to the process of generating tools meant to automate the execution of tasks.
You can use Automation as follows:
• to generate and define any Speos object.
• to post-process simulation results.
Technical Description
The interface is described using the IDL (Interface Definition Language).
The exposed data are called Properties.
The exposed functions are called Methods.
Development Tools
If you want to use Automation, you need to use the IronPython scripting language.
You can use the Speos built-in Script Editor to create or edit scripts.
15.2. Methodology
This page describes the Speos API methodology to help you generate automation scripts.
Presentation
Speos APIs are based on the Speos user interface. This means that for any feature/item currently available in the
GUI, an associated automation function is available. As the automation functions are derived from the GUI, they are
completely aligned with the actions that you would have to perform in the software.
Three interfaces are available to declare the API functions:
• SpeosSim allows you to access all the Light Simulation features
• SpeosDes allows you to access all the Optical Part Design features
• SpeosAsm allows you to access the geometry update features
Interfaces are systematically called on feature creation.
# Interface declaration
texture3D = SpeosSim.Component3DTexture.Create()
OpticalSurface = SpeosDes.TIRLens.Create()
In addition to the HTML resource file, some common cross-functional and more specific APIs are provided in the Methods section.
Related reference
Methods on page 774
The following section gathers cross-functional methods and specific methods that are not covered in the Speos API
Docs.
Related information
Creating a Script on page 772
This page shows how to create a script group in Speos. A script group has to be created to use Speos APIs.
Note: Scripts Groups can be created on the root part or on the active part level.
3. Right-click the Group and click Edit Script to open the built-in script editor.
4. From the Script Editor, if you want to use the new SpaceClaim methods available, select the latest SpaceClaim
API version available.
Note: Do not confuse the SpaceClaim API version and the Speos API version. Unlike the SpaceClaim API
version, the Speos API version is always the latest version available.
Note: If your script version is not the latest version available, and you want to use geometry objects retrieved from a Speos object selection attribute (i.e. Items, LinkedObjects), you need to convert the retrieved geometry objects using the ConvertToScriptVersion method.
The Script Group is created and ready to be used. You can now load an existing script or create a script from scratch
using Speos APIs.
Note: When you run a script on SpaceClaim in headless mode (without User Interface), the rendering
calculations are performed in all cases.
If you do not want to perform the rendering calculations in headless mode, create the environment variable
SPEOS_DISABLE_RENDERING_WHEN_HEADLESS, and set it to 1.
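For example, from a Windows command prompt before launching the headless session:

set SPEOS_DISABLE_RENDERING_WHEN_HEADLESS=1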
Related reference
Methodology on page 771
This page describes the Speos API methodology to help you generate automation scripts.
Methods on page 774
The following section gathers cross-functional methods and specific methods that are not covered in the Speos API
Docs.
Example
material = SpeosSim.Material.Create()   # reflectance = 100 by default
material.SOPReflectance = 50    # ok
material.SOPReflectance = 150   # keeps the previous value since it should be lower than 100
Decimal Values
If a value set in the script uses decimals whereas the parameter only allows integers, the value is rounded up or down to the nearest whole number.
Example
material = SpeosSim.Material.Create()
material.SOPReflectance = 15.4   # sets the value to 15 since it should be an integer
15.5. Methods
The following section gathers cross-functional methods and specific methods that are not covered in the Speos API
Docs.
Note: The Speos Simulation and Design objects currently do not correctly take into account dimension
functions (such as "MM()"). Specify the dimension, not the unit. (e.g. use irradiance_x=5 instead of
irradiance_x=MM(5)).
Note: Computation is not automatic on features during the script process, as Compute events are executed at the end of the "Script" command. So if you want your features to be up to date, you need to explicitly call the Compute() method on objects that usually update automatically in the interactive session.
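For example, assuming lightGuide is an Optical Part Design feature created earlier in the script:

lightGuide.Compute()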
print irradianceSensor.XMPTemplateFile
irradianceSensor.XMPTemplateFile = ".\\SPEOS input file\\xmpTemplate.xml"
For lists that are not predefined, for example the possible locations of a Natural Light Ambient Source, the complete list of enums can be obtained through a Get method and parsed to find the useful value.
naturalLight = SpeosSim.SourceAmbientNaturalLight.Create()
locations = naturalLight.GetLocationPossibleValues()
locationLabel = ""
for locationLabel in locations:
print locationLabel
if locationLabel.Contains("Boston"):
break
naturalLight.Location = locationLabel
Selection.Create(GetRootPart().Bodies[3]).SetActive()
Selection.Create(GetRootPart().Bodies[8]).AddToActive()
Selection.Create(GetRootPart().Bodies[10]).AddToActive()
newMaterial.VolumeGeometries.Set(Selection.GetActive())
displaySource = SpeosSim.SourceDisplay.Create()
displayOrigin = Selection.Create(GetRootPart().CoordinateSystems[0])
displaySource.OriginPoint.Set(displayOrigin)
displayXDirection = Selection.Create(GetRootPart().CoordinateSystems[0].Axes[0])
displaySource.XDirection.Set(displayXDirection)
displayYDirection = Selection.Create(GetRootPart().CoordinateSystems[0].Axes[1])
displaySource.YDirection.Set(displayYDirection)
• Oriented faces selection (used in surface sources or FOPs to orientate the normal to the selected faces)
orientedFace = SpeosSim.OrientedFace.Create()
emissiveFace = Selection.Create(GetRootPart().Bodies[7].Faces[0])
orientedFace.Face.Set(emissiveFace)
orientedFace.ReverseDirection = True
surfaceSource = SpeosSim.SourceSurface.Create()
surfaceSource.EmissiveFaces.Add(orientedFace)
15.5.1.5. ConvertToScriptVersion
Description
This function allows you to convert retrieved geometry objects, from a Speos object selection attribute (i.e. Items,
LinkedObjects), from the latest Speos API version to the anterior script version that you want to edit.
Python Definition
import SpaceClaim.Api.V20 as scriptNameSpace
def ConvertToScriptVersion(obj):
    doc = Window.ActiveWindow.Document
    res = scriptNameSpace.Moniker[scriptNameSpace.IDocObject].FromString(obj.Moniker.ToString()).Resolve(doc)
    return res
Example
docObjInScriptVersion = ConvertToScriptVersion( docObjInSpeosVersion )
print docObjInScriptVersion # => V20 => OK
inverseSimulation = SpeosSim.SimulationInverse.Create()
lxpSensor = SpeosSim.LXPEnabledSensor.Create()
lxpSensor.LXPSensor.Set(radianceSensor)
lxpSensor.IsLXP = True
inverseSimulation.Sensors.Add(lxpSensor)
results = directSimulation.GetResultFilePaths()
for resultFile in results:
    print resultFile
copiedMaterial.Delete()
• GetSimulationSettings returns a SimulationSettings object that allows you to modify geometry and simulation
settings.
• SetSimulationSettings applies the changes made with the GetSimulationSettings function.
simulationSettings = interactiveSimulation.GetSimulationSettings()
simulationSettings.MeshingStepValue = 10
interactiveSimulation.SetSimulationSettings(simulationSettings)
• SelectAll selects all the related features of the same category (source, sensor or geometry).
directSimulation.Sources.SelectAll()
directSimulation.Geometries.SelectAll()
directSimulation.Sensors.SelectAll()
simulationSettings = interactiveSimulation.GetInteractiveSimulationSettings()
for simulationSetting in simulationSettings:
    print simulationSetting
interactiveSimulation.SetInteractiveSimulationSettings(True, True, False)
simulationSettings = directSimulation.GetDirectSimulationSettings()
for simulationSetting in simulationSettings:
    print simulationSetting
directSimulation.SetDirectSimulationSettings(False, True, 1800)
settings = inverseSimulation2.GetInverseSimulationSettings()
settings.SetDeterminist(SpeosSim.InverseSimulationSettings.EnumPhotonMapMode.Load, 100, 10, False, 0)
settings.SetDeterministPhotonMapBuild(10000, 100)
settings.SetDeterministPhotonMap(100, 10000000, True, False)
inverseSimulation2.SetInverseSimulationSettings(settings)
selection = Selection.Create([interactiveSimulation.Subject, directSimulation.Subject])
SpeosSim.Command.SetActiveSelection(selection)
SpeosSim.Command.ComputeOnActiveSelection()
• GetInputFolder returns the path to the SPEOS input files directory: print SpeosSim.Command.GetInputFolder()
• GetOutputFolder returns the path to the SPEOS output files directory: print SpeosSim.Command.GetOutputFolder()
• HpcCompute launches the Speos HPC compute command on the specified objects:
SpeosSim.Command.HpcCompute(directSimulation)
• HpcComputeOnActiveSelection launches the Speos HPC compute command on the active selection:
selection = Selection.Create([interactiveSimulation.Subject, directSimulation.Subject])
SpeosSim.Command.SetActiveSelection(selection)
SpeosSim.Command.HpcComputeOnActiveSelection()
selection = Selection.Create([inverseSimulation.Subject, directSimulation.Subject])
SpeosSim.Command.SetActiveSelection(selection)
SpeosSim.Command.PreviewComputeOnActiveSelection()
rayFileSource = SpeosSim.SourceRayFile.Find("Ray-file.1")
# the source's ray file is set to ".\\SPEOS input files\\rayfile_LT_QH9G_100k_270114_Speos.RAY"
canRayFileBeOptimized = rayFileSource.OptimizeRayFile()
# Prism geometries
LightGuide1.StepType = "Control points"
stepConfig = LightGuide1.StepConfigurations
print "Number of configurations: " + str(stepConfig.Count)
config = 0
while config < stepConfig.Count:
    print "Position: " + str(stepConfig[config].Position) + ", Value: " + str(stepConfig[config].Value)
    config += 1
controlPoint = LightGuide1.StepConfigurations.AddNew(0)
controlPoint.Position = 50
controlPoint.Value = 3
child = LightGuide1.StepConfigurations.AddNew(0)
child.Position = 30
child.Value = 4
LightGuide1.OffsetType = "Constant"
LightGuide1.OffsetValue = 4
LightGuide1.CSVFile = ".\\LightGuide_export.csv"
LightGuide1.Compute()
# Light Guide
LightGuide1 = SpeosDes.LightGuide.Create()
# Guide curve
Curve_LightGuide = GetRootPart().Curves[4]
LightGuide1.GuideCurve.Set(Curve_LightGuide)
# Body
LightGuide1.BodyProfileDiameter = 5
# Prisms orientation
X_Axis = GetRootPart().Curves[1]
LightGuide1.OpticalAxis.Set(X_Axis)
LightGuide1.PrismsOperationType = SpeosDes.LightGuide.EnumPrismsOperationType.Hybrid
LightGuide1.ReverseOpticalAxisDirection = False
# Distances
LightGuide1.DistancesType = SpeosDes.LightGuide.EnumDistancesType.Curvilinear
LightGuide1.DistanceStart = 2
LightGuide1.DistanceEnd = 2
# Prism geometries
LightGuide1.StepType = "Control points"
stepConfig = LightGuide1.StepConfigurations
print "Number of configurations: " + str(stepConfig.Count)
controlPoint = LightGuide1.StepConfigurations.AddNew(0)
controlPoint.Position = 50
controlPoint.Value = 3
LightGuide1.OffsetType = "Constant"
LightGuide1.OffsetValue = 4
LightGuide1.CSVFile = ".\\LightGuide_export.csv"
LightGuide1.Compute()
pl = SpeosDes.ProjectionLens.Find("TIR Lens.1")
pl.BackFaceAspherics[2] = 2.3
print pl.BackFaceAspherics[2]
perSurface.SourcePoint.Set(GetRootPart().Curves[0])
perSurface.ImagePoint.Set(GetRootPart().Curves[6])
perSurface.OrientationAxis.Set(GetRootPart().Curves[3])
perSurface.Symmetry = SpeosDes.PER.EnumSymmetry.SymmetryTo0Plane
angularSectionConfig = perSurface.AngularSections
ParsePER(angularSectionConfig)
perSurface.Compute()
perSurface2 = perSurface.Clone()
perSurface2.Compute()
perAngularSection = perSurface2.AngularSections[0]
fittingWorked = perAngularSection.FittingControlPlane()
print "After fitting control planes"
print "Fitting did work? " + str(fittingWorked)
perSurface2.Compute()
multiEyeBoxMirrors = hoaSimulation.Mirrors.GetMultiEyeBoxMirrorPossibleValues()
# .... parse multiEyeBoxMirrors to find the wanted configuration, then break
hoaSimulation.Mirrors.TiltRotationAxis.Set(GetRootPart().Curves[1])
ebConfigurations = hoaSimulation.Mirrors.EBMirrorConfigurations
currentFilePath = GetRootPart().Document.Path
currentPath = Path.GetDirectoryName(currentFilePath)
test1 = SpeosAsm.CADUpdate.Import(speFile1)
test2 = SpeosAsm.CADUpdate.Import(speFile2)
def updateImportedPart(importedPart):
    lastUpdate = SpeosAsm.CADUpdate.GetLastImportedFileDateTime(importedPart)
    bUpdate = SpeosAsm.CADUpdate.Update(importedPart, True, True)
    print "Update did work (unmodified parts skipped)? " + str(bUpdate)
    # Update
    previousUpdate = lastUpdate
    lastUpdate = SpeosAsm.CADUpdate.GetLastImportedFileDateTime(importedPart)
    if lastUpdate != previousUpdate:
        print "Last update: " + str(SpeosAsm.CADUpdate.GetLastImportedFileDateTime(importedPart))
15.5.3.1. OpenFile
Description
This function allows you to open a *.sv5 or *.speos file.
Syntax
object.OpenFile(BSTR strFileName) As Int
• object: SPEOSCore
• BSTR strFileName: This variable is composed of the path, the filename and the extension
• Int return: returns 0 if succeeded
Example
from System import Type, Activator
15.5.3.2. RunSimulation
Description
This function allows you to run a simulation.
Syntax
object.RunSimulation(Int nSimulationIndex, BSTR strCommandLine) As Int
• object: SPEOSCore
• Int nSimulationIndex: Simulation index, 0 by default
• BSTR strCommandLine: This variable corresponds to the command line
• Int return: returns 0 if succeeded
Example
from System import Type, Activator
#Runs simulation
retval = SPEOSCore.RunSimulation(0, commandline)
15.5.3.3. ShowWindow
Description
This function allows you to display the Speos Core window.
Syntax
object.ShowWindow(Int nShowWindow) As Int
• object: SPEOSCore
• Int nShowWindow: 1 to show the window, 0 to hide it
• Int return: returns 0 if succeeded
Example
from System import Type, Activator
Description
The Speos Core command lines allow you to create scripts that automate multiple simulation launches without using the Speos Core interface.
Command Lines
Command Line    Description
FILENAME        Speos Core system file to open
-C              Command line mode (no GUI)
-G              Launch Speos GPU Solver
-S (SSS)        Launch simulation number SSS on FILENAME
-t (ttt)        Specify the simulation thread number (ttt)
-p (ppp)        Specify the process priority (ppp), from 0 to 5:
                • 0: idle
                • 2: normal
                • 5: realtime
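For example, a *.bat script might contain a line such as the following to launch simulation 2 of a system file in command line mode, with 8 threads and normal priority (the executable name SPEOSCore.exe, the file name, and the exact argument spacing are assumptions):

SPEOSCore.exe MySystem.speos -C -S 2 -t 8 -p 2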
The Block Recording tool enables you to record operations you perform in Speos, and play back these recorded operations
while keeping Speos feature links to the imported external CAD geometries.
Note: The Block Recording tool is in BETA mode for the current release.
Description
Recording operations (SpaceClaim Design operations and/or Speos features) on imported external CAD geometries allows you to play these operations again after you update or modify the geometries in the external CAD software. This saves you from manually reapplying the different operations in the Speos environment.
For more information on how to use the Block Recording tool, refer to the SpaceClaim Recording documentation.
Generic Workflow
Important: The Block Recording tool can be opened for only one document in a session. It does not work
for multiple Parasolid documents opened in the same session.
Warning: Once the Block Recording tool is activated, do not deactivate it or you will lose all the added blocks.
3. If you want, you can apply SpaceClaim Design Modeling operations.
4. Create a Speos feature (apply a material, create a source, a sensor, a simulation, or an Optical Part Design feature).
To add a parameter to the Block Recording you have two possibilities:
• Modify the parameter's value in the feature definition.
• Check the check box that appears in front of the parameters that can be exported to the Block Recording or Workbench.
The parameter is exported to the block recording with its current value, or with its default value if you have not modified it.
For more information on non-compatible Speos features, refer to Non-Compatible Speos Features and Actions with the Recording Tool on page 793.
5. Exit the definition of the feature when you are done with it to create the feature block in the Block Recording
tree: press [Esc] or uncheck Edit in the Design tab.
The feature appears with the different modified parameters gathered in one single block.
Note: When you run a simulation, only CPU compute and GPU compute are recorded. HPC compute
is not recorded.
Note: If you modified the filename of the CAD part, make sure to modify it in the Start Block.
Operations are played back on the modified geometries, and you can see that Speos links are kept.
b. In the CAD Selection tab, check the CADs you want and select Workbench Associate Interface (in case of
Catia V5, select CADNexus/CAPRI CAE Gateway).
c. Click Next.
d. In the CAD Configuration tab, click Configure Selected CAD Interfaces.
e. Click Exit.
The option Always use SpaceClaim's reader when possible is activated by default.
The Block Recording tool is ready to be used with Speos and imported CAD files.
Important: The following list may not be exhaustive. Some features or actions might be recorded, however
they are not considered as officially supported/compatible.
Note: You can record the Show Grid action if you use the Options from the sensor definition.
Known Issue
In case of a selection (Smart Variable), sometimes SpaceClaim cannot find which geometry to select, and may select
the wrong one.
Solution
In the script editor, add a custom block that defines the correct selection by using a filter function provided in the
SpaceClaim API scripting.
Example
The green part needs to be kept, and the orange part needs to be removed.
The code to write would be:
Selection = B1.CreatedBodies.ConvertToBodies().FilterMaxSurfaceArea()
Delete.Execute(Selection)
17: Speos Sensor System Exporter
The Speos Sensor System Exporter is a standalone tool to post-process Exposure Maps calculated by Speos.
Note: The Speos Sensor System Exporter is in BETA mode for the current release.
How It Works
Speos Sensor System Exporter comes with a minimal GUI that provides feedback about the calculation performed.
Inputs and main parameters are saved in a file coded with the YAML standard. This file is easily editable with a text editor. It contains instructions that Speos Sensor System Exporter can execute.
Another YAML code file contains sensor properties.
Speos Sensor System Exporter can be started using a *.bat file with the following command lines inside:
The following chapters present how to define the YAML inputs file and the YAML sensor properties file.
How to Start
From version 2023 R2, Speos Sensor System Exporter is integrated into the Speos installation. The *.exe file is in the Speos viewer folder.
To download the latest updated version of the Speos Sensor System Exporter, click the following AWS link.
AWS Link information:
• Expiration date: 2024/01/16
• MD5 origin: e132ffe227278381acf676dbb52c5fac
• MD5 AWS: e132ffe227278381acf676dbb52c5fac
You can generate a template version of both YAML input files directly from the Speos Sensor System Exporter, using
the following command:
Description
YAML is a human-readable data serialization standard that can be used in conjunction with all programming languages
and is often used to write configuration files.
Note: For more information, you can refer to the YAML documentation on Fileformat or Circleci websites.
Example
The following example is based on a YAML file corresponding to the input parameters to give to the Speos Sensor System Exporter using the Given files mode.
The Given files and All in folder modes are the two modes used to define the input parameters. They are explained in the Working Modes chapter.
Logging Level:
  File: 'DEBUG'
  Console: 'INFO'
Default Working Folder: /Inputs # '' if working folder is the same as this file. Else set path to folder that will be used by default to find inputs.
Mode: # 'All in folder' or 'Given files'
All in folder: # needs to be filled if 'All in folder' mode is chosen
  Input folder: # path to input folder (format: 'c:\folder')
Note: By default, Console is set to the INFO level whereas File is set to DEBUG.
You can change both options with the first main key of the input file, Logging Level:
Logging Level:
  File: 'DEBUG'
  Console: 'INFO'
Principle
Speos Sensor System Exporter recurrently scans a given folder. Exposure Maps are processed as soon as they are stored in the folder. All maps are processed with the same sensor and conditions.
To run the Speos Sensor System Exporter using this mode, refer to the section below that lists the keys to fill in (even if some are optional).
Keys
Input folder
The Input folder key represents the scanned folder. Processing starts as soon as an Exposure Map is detected in this folder. If several maps are detected at the same time, they are processed one by one.
Output folder
The Output folder corresponds to the folder where results are saved.
Xmp backup folder: # path to the folder where processed xmp files can be backed up
Sensor
The Sensor corresponds to the path to the YAML Sensor Properties file.
Export
5 different types of export are available. For each export, a tuple can be set to indicate the output file format required:
• Electron export
Note: Electron export corresponds to the Noise export and the Signal export combined.
• Noise export
• Raw export
• Processed export
• Signal export
xmp format corresponds to files that can be opened using Virtual Photometric Lab.
bin format corresponds to binary files that can be loaded using a Python script:
• using 'fromfile' function from the 'NumPy' library
• data type is np.uint16 for all outputs except Noise export which is np.int16
Important: Signal export and bin format are available only from the updated version of Speos Sensor
System Exporter that you can download from this link.
Example: You want to get Raw export as dng and xmp, plus Processed export as png, and neither Electron export
nor Noise export
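As an illustration only, assuming each export key takes the list of required output formats (the exact value syntax is an assumption), such an Export section might look as follows:

Export:
  Raw export: ['dng', 'xmp']
  Processed export: ['png']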
exe Termination
To end the exe, add a file named terminate (without extension) to the scanned folder.
Principle
With the Given files mode:
• Exposure maps to process are explicitly given to the Speos Sensor System Exporter.
• Each process is referenced with the key Set and a number.
Set 0:
Set 1:
...
Set n:
• Each Set can be processed with a specific sensor, conditions and export options.
• A Set key contains a list of sub keys.
• The first Set index must be 0; subsequent Sets are numbered consecutively.
Exposure maps
The Exposure maps key is a link to one Speos Exposure Map, or a tuple of links to Exposure Maps. Tuples are used in case of HDRI processing (several Exposure Maps used to generate a single high dynamic range image).
Single map example
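Exposure maps: 'c:/temp/map.xmp'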
Sensor
The Sensor key is the link to one sensor file, or a tuple of links to sensors. Tuples are used in case of HDRI processing (in this case, you must specify the same number of sensors as maps).
Single sensor example
Sensor: 'c:/temp/sensor.yaml'
Output folder
The Output folder corresponds to the folder where results are saved.
Rename
Rename: by default, results have the same name basis as the input exposure map, to which a suffix is added:
• _electron_ for Electron export
• _noise_ for Noise export
• _raw_ for Raw export
Export
5 different types of export are available. For each export, a tuple can be set to indicate the output file format required:
• Electron export
Note: Electron export corresponds to the Noise export and the Signal export combined.
• Noise export
• Raw export
• Processed export
• Signal export
xmp format corresponds to files that can be opened using Virtual Photometric Lab.
bin format corresponds to binary files that can be loaded using a Python script:
• using 'fromfile' function from the 'NumPy' library
• data type is np.uint16 for all outputs except Noise export which is np.int16
Important: Signal export and bin format are available only from the updated version of Speos Sensor
System Exporter that you can download from this link.
Example: You want to get Raw export as dng and xmp, plus Processed export as hdr, and neither Electron export
nor Noise export
Save previous
If Save previous is set to True and a result with the same filename already exists, the previous result is renamed with an additional suffix that contains the original date/time of the file creation.
If Save previous is set to False or not set at all, the previous result is replaced by the new one.
HDR
The HDR key is used only in case of HDRI processing. The sub-keys define the tone mapping to compress HDR data into a standard image:
• Min: positive number between 0 and 1. 0 has no effect.
• Ratio: positive value. 1 has no effect.
• Gamma: positive value. 1 has no effect.
The corresponding code is, with illustrative values:
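HDR:
  Min: 0.005 # illustrative value; positive number between 0 and 1
  Ratio: 1 # illustrative value; 1 has no effect
  Gamma: 2.2 # illustrative value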
The inputs (sensor properties, options and used conditions) used for both calculations are saved into the YAML Sensor Properties file. The YAML file is divided into 8 main keys:
Lumerical data:
  Unit: # 'Percentage' or '0 to 1'
Quantum Efficiency: # (nu) or From Lumerical data
  Filename: # link to a *.spectrum file
  Unit: # 'Percentage' or '0 to 1'
Bayer Matrix:
  Type: # None or i*j (with i and j for bayer structure size) or From Lumerical data
        # (example for a 2x2 bayer structure size)
  00 spectrum: # link to *.spectrum file
  01 spectrum: # link to *.spectrum file
  10 spectrum: # link to *.spectrum file
  11 spectrum: # link to *.spectrum file
  Unit: # 'Percentage' or '0 to 1'
System Gain: # Overall system Gain (K)
  Value:
  Unit: # 'DN/electron'
Temporal Dark Noise: # (sigma_d or sigma_y.dark)
  Value:
  Unit: # 'electron' or 'bits' or 'dB'
AST: # Absolute sensitivity threshold
  Value:
  Unit: # 'electron' or 'photon'
  Wavelength: # in nm (used only in case of given unit in photon)
DR: # Dynamic Range
  Value:
  Unit: # 'DN' or 'bits' or 'dB'
Dark Current:
  Mean:
    Value:
    Unit: # 'electron/s'
  Standard Variation:
    Value:
    Unit: # 'electron/s'
  Td: # Doubling Temperature Interval
    Value:
    Unit: # 'K' or 'C'
  Tref: # reference temperature
    Value:
    Unit: # 'K' or 'C'
Spatial Non Uniformity:
  DSNU:
    Value:
    Unit: # 'DN'
  PRNU:
    Value:
    Unit: # 'Percentage'
Development:
  Demosaizing Method: # bilinear, Malvar2004, Menon2007, DDFAPD
  Linearization:
    Type: # None, 'Table' or 'Polynome'
    Data: # [] if 'None', lookup table from 0 to max DN value if 'Table', DN_out = sum(Pi*(DN_in)**i) if 'Polynome'
  Rescaling:
    Black Level: # None, Auto or Digital Number coded on 16 bits (only for DNG output)
    White Level: # None, Auto or Digital Number coded on 16 bits (only for DNG output)
  Orientation: # 1: normal, 3: 180° rotation. See Tiff-Ep 6 specifications for other referenced values
  # Adaptation Method:
  # Camera Neutral: # [x, x, x]
  # Color Calibration: # [[x, x, x],[x, x, x],[x, x, x]]
  Configuration x: # please leave this line as it is.
17.2.3.4. References
The References key lists sub-keys that are only optional information and are not used by Speos Sensor System Exporter.
References:
  Vendor: # Name of the manufacturer (Optional)
  Camera: # Name of the camera (Optional)
  Sensor: # Name of the sensor (Optional)
  type: # Sensor type (CMOS, CCD, other) (Optional)
• Gain
• Offset
• Exposure Time
• Temperature
Gain:
  Value: 1
  Unit: RN # in dB or RN (RN for Real Number)
Offset:
  Value: 0
  Unit: DN # in DN only (DN for Digital Number)
Exposure Time
The Exposure Time key gives the duration during which the sensor has collected photons. The value must be the same as the one used in the Speos simulation that generated the Exposure Map.
The Exposure Time unit is us (not "µ", but "u"), ms or s.
Exposure Time:
  Value: 10
  Unit: ms # s, ms, us (do not use 'µ')
Temperature
The Temperature key gives the camera temperature when used.
Temperature unit is C (not "°C") or K.
Temperature:
  Value: 25
  Unit: C # in C or K
17.2.3.6. Properties
The Properties key gives information about sensor general properties. There are 3 sub-keys to fill:
• Resolution
• Pixel Size
• Bits depth
Resolution
The Resolution key gives the number of lines and rows of the sensor pixels.
Note: By default, the Auto value is used. In this case, the information is calculated from the Exposure Map.
Pixel Size
The Pixel Size key gives the width and height of one pixel.
Note: By default, the Auto value is used. In this case, the information is calculated from the Exposure Map.
Bits depth
The Bits depth key defines how many bits of tonal or color data are associated with each pixel or channel.
Bits depth: 12 # ADC capacity (possible values: 8, 10, 12, 14, 16)
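For example, a Bits depth of 12 means that each pixel value is coded as a digital number from 0 to 4095 (2^12 − 1).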
17.2.3.7. Pre-Processing
Speckle removing
The Speckle removing key is a convolution filter applied on the Exposure Map to remove high peaks due to the Speos simulation.
If Speckle removing is set to False, nothing is done. Otherwise, the function is automatically activated when you define a threshold ratio as the value.
Colorimetric Filtering
The Colorimetric Filtering key is a filter applied on the Exposure Map to remove colorimetric noise due to the Speos simulation.
If Colorimetric Filtering is set to False, nothing is done. If set to True, the filtering calculation is done.
Lumerical data:
  Mode: V2
  Filename: RGB-IR basic EQE.json
From version 2023 R2, there are two Lumerical models and formats for *.json file:
• Legacy version (V1) that takes into account only Chief ray angle
• New version (V2) that considers Chief ray angle and Marginal ray angles.
The Mode key indicates which type of input is provided.
In this case, IR & UV Filter, Quantum Efficiency, and Bayer Matrix sub-keys in the EMVA data key are ignored.
If you want to activate Lumerical input, and the sub-keys IR & UV Filter, Quantum Efficiency, and Bayer Matrix,
set the sub-keys to From Lumerical data as follows:
EMVA data:
  IR & UV Filter:
    Filename: From Lumerical data
    Unit:
  Quantum Efficiency:
    Filename: From Lumerical data
    Unit:
  Bayer Matrix:
    Type: From Lumerical data
    00 spectrum:
    01 spectrum:
    10 spectrum:
    11 spectrum:
    Unit:
If you want to use the EMVA 1288 model standard and not the Lumerical data, do not fill in the sub-keys.
• AST
• DR
• Dark Current
• Spatial Non Uniformity
IR & UV Filter
A UV and/IR cut can be added on the sensor cover window. If present, the IR & UV Filter key allows you to take this
filter into account.
The value taken is a link to a *.spectrum file (Speos native format).
Note: Setting the value to None define the filter as not present (or already specified into the Speos Camera
sensor definition).
By default, *.spectrum files are expressed in percentage. In this case Unit key must be Percentage.
Note: You can also define the Unit sub-key in [0;1] if the data in the *.spectrum file correspond to that.
IR & UV Filter:
Filename: c:/uv_ir.spectrum # None or link to a *.spectrum file
Unit: # 'Percentage' or 'O to 1'
Quantum Efficiency
The Quantum Efficiency key value is a link to a *.spectrum file that represents the sensor quantum efficiency.
Usually, data given by sensor manufacturers are the combination of the sensor efficiency and the RGB Bayer filter.
In this case, the value should be a *.spectrum file with 100% constant value.
Bayer Matrix
Monochrome Sensor
In case of a monochrome sensor, the Bayer Matrix key is not used and the Type sub-key value must be set to None.
Bayer Matrix:
  Type: None
RGB Sensor
In case of an RGB sensor:
• the Type sub-key value must be a combination of 4 letters chosen among 'R', 'G' and 'B'.
• the following four sub-keys 00 spectrum, 01 spectrum, 10 spectrum, 11 spectrum must be set with a link to a *.spectrum file.
Bayer Matrix:
  Type: RGGB
  00 spectrum: c:/Red.spectrum # link to *.spectrum file
  01 spectrum: c:/Green1.spectrum # link to *.spectrum file
  10 spectrum: c:/Green2.spectrum # link to *.spectrum file
  11 spectrum: c:/Blue.spectrum # link to *.spectrum file
System Gain
In the sensor, the charge units accumulated by the photo irradiance are converted into a voltage, amplified, and finally converted into a digital signal by an analog-to-digital converter (ADC). The whole process is assumed to be linear and can be described by a single quantity, the System Gain key.
The System Gain unit is DN/e- (digits per electron).
System Gain:
  Value: 0.5
  Unit: DN/electron
AST
Absolute sensitivity threshold (AST) is the number of photons needed to get a signal equivalent to the noise observed
by the sensor.
The Value sub-key must be a positive number, in electron or photon Unit.
In case of a photon-based value, you must define the Wavelength sub-key corresponding to the wavelength of the photon (to be able to convert into electrons using the System Gain). The Wavelength unit is nm.
DR
Dynamic range (DR) is defined as the ratio of the signal saturation to the Absolute Sensitivity Threshold (AST).
Dark Current
The dark signal is mainly caused by thermally induced electrons. Therefore, the dark signal has an offset (value at
zero exposure time) and increases linearly with the exposure time. Because of the thermal generation of charge
units, the dark current increases roughly exponentially with the temperature.
The Dark Current key is based on four sub-keys:
• the Mean sub-key corresponds to the average value, in e-/s, at the Tref temperature.
• the Standard Variation sub-key corresponds to the variation around the mean value, in e-/s.
• the Tref sub-key corresponds to the reference temperature.
• the Td sub-key corresponds to the temperature interval that doubles the dark current.
Dark Current:
  Mean:
    Value: 8
    Unit: electron/s # 'electron/s'
  Standard Variation:
    Value: 4
    Unit: electron/s # 'electron/s'
  Td: # Doubling Temperature Interval
    Value: 10
    Unit: K # 'K' or 'C'
  Tref: # reference temperature
    Value: 20
    Unit: C # 'K' or 'C'
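With these definitions, the mean dark current at a camera temperature T presumably follows the usual doubling law: Mean(T) = Mean(Tref) × 2^((T − Tref) / Td). With the values above, at 30 °C the mean dark current would be 8 × 2^((30 − 20)/10) = 16 electron/s.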
17.2.3.10. Development
Development corresponds to the second main step in Exposure Map post-processing.
The "Development" term is an echo to analog/film photography. Regarding digital photography, it is a software
process coded into the camera that converts raw image (most of the time not given to user) and final/output image.
Two development types are available:
• Generic basic development directly implemented in the Speos Sensor System Exporter as described in the Generic
Development section.
• Customizable development done from an external python script, as described in the External Development section.
Generic Development
Parameters used in the development calculations are defined into the main Development key divided into 5 sub-keys:
• Demosaizing Method
• Linearization
• Orientation
• Color Saturation Correction
• Colorimetry
Demosaizing Method
If the Bayer Matrix Type sub-key of the EMVA data key is set to a value different from None, a Demosaizing step is
performed to get raw R, G and B channels.
4 methods are available:
• bilinear
• Malvar2004
• Menon2007
• DDFAPD
For more information on the methods, refer to the Bayer CFA Demosaicing and Mosaicing API Reference.
The Demosaizing Method key must specify one of those methods.
Linearization
The Linearization key allows you to correct sensor non-linearity. If the sensor is assumed to be linear, the Type sub-key must be set to None.
Linearization:
  Type: None
Orientation
The Orientation key allows you to change the image orientation and/or apply a symmetry:
• 1 = The 0th row represents the visual top of the image, and the 0th column represents the visual left-hand side.
• 2 = The 0th row represents the visual top of the image, and the 0th column represents the visual right-hand side.
• 3 = The 0th row represents the visual bottom of the image, and the 0th column represents the visual right-hand
side.
• 4 = The 0th row represents the visual bottom of the image, and the 0th column represents the visual left-hand
side.
• 5 = The 0th row represents the visual left-hand side of the image, and the 0th column represents the visual top.
• 6 = The 0th row represents the visual right-hand side of the image, and the 0th column represents the visual top.
• 7 = The 0th row represents the visual right-hand side of the image, and the 0th column represents the visual
bottom.
• 8 = The 0th row represents the visual left-hand side of the image, and the 0th column represents the visual bottom.
Orientation: 3
Colorimetry
The final image colorimetry depends on 3 inputs to be set into sub-keys:
• Shoot Illuminant
• Target Color Space
• Adaptation method
From these inputs, 2 intermediate data are automatically calculated and added into the sensor file:
• Camera Neutral vector
• Color Calibration Matrix
Note: For more information on these matrices, you can refer to the Strollwithmydog and Brucelindbloom
websites.
The following sub-keys are described in the sections below:
• Shoot Illuminant
• Color Space
• Adaptation Method
• Configuration x
If colorimetric data have been previously calculated, an additional Configuration i (do not confuse it with
Configuration x) is added to each combination of [Shoot Illuminant, Color Space, Adaptation method].
Shoot Illuminant
The Shoot Illuminant key gives the main illuminant used for the scene calculation with Speos.
• The Type sub-key defines if the illuminant is predefined or comes from a Speos *.spectrum file.
• The Data sub-key sets the predefined illuminant (A, B, C, D50, D55, D65, or D75) or the *.spectrum file to use.
Example: Predefined Illuminant
Shoot Illuminant:
  Type: Predefined
  Data: D50
  Unit: NA
Example: *.spectrum File
Shoot Illuminant:
  Type: File
  Data: c:/shoot.spectrum
  Unit: Percentage # 'Percentage' or '0 to 1'
Color Space
The Color Space key gives the target color space for the final image.
The available Color Space values are: sRGB, Adobe RGB (1998), Apple RGB, Best RGB, Beta RGB, Bruce RGB, CIE RGB,
ColorMatch RGB, Don RGB 4, ECI RGB v2, Ekta Space PS5, NTSC RGB, PAL/SECAM RGB, ProPhoto RGB, SMPTE-C RGB,
Wide Gamut RGB.
Adaptation Method
The Adaptation Method key defines the adaptation method used to switch from one color space to another.
Available methods are:
• XYZ Scaling
• Bradford
• Von Kries
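Putting the three colorimetry inputs together, a definition could look like the following sketch. The key names follow the sub-sections above, but the exact nesting under the Colorimetry key is an assumption:
Colorimetry:
  Shoot Illuminant:
    Type: Predefined
    Data: D65
    Unit: NA
  Color Space: sRGB
  Adaptation Method: Bradford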
Configuration i
If colorimetric data have been previously calculated, an additional Configuration i (do not confuse it with
Configuration x) is added to each combination of [Shoot Illuminant, Color Space, Adaptation method].
The Configuration i key contains data calculated during a previous post-process. As this calculation takes time,
results are added to the file to allow you to reuse them.
Configuration x
External Development
Development:
  Type: External Script # 'Internal Generic' or 'External Script'
  External Script: Development.py
Script requirements
The script must be written in a Python version higher than 3.9.
The script must contain a function named Development with:
• two arguments:
º Rawimages: the set of raw images generated by the Speos Sensor System Exporter
º Sensors: the set of sensor data
• one output: image data coded as a 3D numpy array (color channel, i, j)
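A minimal sketch of such a script is shown below. It assumes Rawimages behaves as an iterable of 2D numpy arrays (one per raw image) and simply stacks them without any processing; a real development script would demosaic, linearize, and color-correct at this step:
# Development.py - minimal sketch of an external development script.
# Assumption: Rawimages is an iterable of 2D numpy arrays and Sensors
# carries the matching sensor data, both provided at run time by the
# Speos Sensor System Exporter.
import numpy as np

def Development(Rawimages, Sensors):
    # Pass-through development: stack the raw images into the expected
    # (color channel, i, j) 3D numpy array without further processing.
    channels = [np.asarray(raw, dtype=np.float64) for raw in Rawimages]
    return np.stack(channels, axis=0)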
18: Troubleshooting
This section describes known non-operational behaviors, errors or limitations found in Speos.
A workaround is given if available.
Inclusion in this document does not imply the issues or limitations are applicable to future releases.
Additional known issues and limitations relevant to the 2023 R2 release may be found in the Known Issues and Limitations
document, and in the Ansys Customer Portal in the Class 3 Error Reports.
18.1.1. Materials
Known Issue Workaround or Solution
18.1.2. Sources
Known Issue Workaround or Solution
18.1.3. Sensors
Known Issue: In a sensor definition, when applying an XMP template with a certain type (photometric, radiometric,
etc.) and defining the same type in the general section, an error occurs. (TFS 450549)
Workaround or Solution: Ignore the error, as it prevents neither the simulation from running nor the results from
being correct.
18.1.4. Components
Known Issue: When creating a 3D Texture, the projection of some 3D Texture elements on the edges of the surface
may appear as floating along the vertical axis and not positioned on the surface. (TFS 777597)
Workaround or Solution: Try to:
• manually modify the mapping file to correct the points that are at the wrong position along the vertical axis.
• change the CAD model to reduce the edge tolerance as much as possible and re-compute the 3D Texture.

Known Issue: If you modify a Speos Light Box that is used in a Speos Pattern feature, the Speos Pattern does not
update with the modified Speos Light Box. (TFS 706919)
Workaround or Solution: Clear the Pattern File field in the Speos Pattern feature, then reimport the Speos Light Box
to get the modifications.

Known Issue: Speos Light Box: if you select a sub-feature of a Speos Light Box to add to a simulation, the link in the
simulation to the sub-feature might be lost after the Speos Light Box Import re-compute. (TFS 639072)
Workaround or Solution: Select the Speos Light Box Import instead of the sub-feature of the Speos Light Box.

Known Issue: No OPT3D Mapping file is generated after creating a 3D Texture. (TFS 425524 SF 32620)
Workaround or Solution: Create a new 3D Texture to generate the OPT3DMapping file.

Known Issue: Placing the origin point of the 3D Texture tangent to the support geometry causes construction issues.
(TFS 202323)
Workaround or Solution: Slightly shift the point from the support surface so that it is no longer tangent with the
support, then regenerate the 3D Texture.

Known Issue: Components cannot be imported as pattern files when creating a 3D Texture. (TFS 256663)
Workaround or Solution: Use a body instead of a component.
18.1.5. Simulation
Known Issue: When the meshing is very thin and the Ray tracer precision is set to Automatic, the simulation may
sometimes generate light leakage. (TFS 756373)
Workaround or Solution: Try setting the Ray tracer precision to Double, or increase the Meshing step value.

Known Issue: As a result of an Inverse Simulation using a Camera Sensor or an Irradiance Sensor with Light Expert
activated, rays generated using the Gathering algorithm are not integrated in the *.lpf result. Therefore few rays
may appear in the *.lpf result. However, all rays are correctly integrated in the XMP result. (TFS 646080)

Known Issue: Local Meshing: the Meshing can be wrong when the Step value applied is smaller than the face size.
(TFS 552272 551166 551355)
Workaround or Solution: Check the meshing quality using Preview Meshing and adjust if needed.

Known Issue: When the Automatic Compute is activated for an Interactive Simulation, the Preview Meshing on
geometries is not available. (TFS 279412)
Workaround or Solution: Deactivate the Automatic Compute.

Known Issue: Canceling a Monte-Carlo Inverse Simulation does not create an intermediate result in the Simulation
tree. (TFS 451371)

Known Issue: Using special characters in feature names prevents simulations from running correctly. (TFS 378374
SF 31224)

Known Issue: Inverse simulations can take more time than expected when several sensors are involved. For a
simulation with Y sensors and X time stopping criteria, the simulation may take X * Y time to run. (TFS 338429)

Known Issue: Freeform Lens: in some cases, a collimating lens may provide inaccurate results. This may be due to
different factors: the lens' surface may be too close to the source, or the size of the lens aperture may be incorrectly
defined. (TFS 817681)
Workaround or Solution: Try modifying the distance between the source and the lens' surface, and the lens' aperture.

Known Issue: When the Progress bar seems to be blocked, it may correspond to a sub-operation on the modeler
that cannot be precisely monitored. (TFS 702752)

Known Issue: Opening an Optical Part Design project in two different Speos versions may present different results.
(TFS 202587)

Known Issue: Settings defined from the SpaceClaim Display tab are not kept after the recompute of an OPD feature.
(TFS 569412)

Known Issue: Optical Lens: when using script (only), the assignment of IDs on faces is reverted between freestyle
and non-freestyle Optical Lens, causing different selections. (TFS 599871)

Known Issue: Optical Surface: when using script (only), the assignment of IDs on faces is reverted between freestyle
and non-freestyle Optical Surface, causing different selections. (TFS 599871)

Known Issue: Links between OPD features and other Speos features are not necessarily preserved for some elements'
faces that have specific statuses (example: sewing faces of the Optical Surface, back face of the Optical Lens, etc.).
Workaround or Solution: Recreate the links when necessary.

Known Issue: When prisms have the exact same width as the light guide's width, some prisms might fail to build.
(TFS 238577 - SF 26584)
Workaround or Solution: Decrease the prism width (even by 0.001 mm) and rebuild the feature.
18.1.8. Results
Known Issue: When using an *.ies source file with a very peaky intensity distribution in an Interactive Simulation or
a LiDAR Simulation, if you generate a Projected Grid out of the simulation, the Projected Grid may not be visible in
the 3D view due to the peaky distribution. (TFS 652669 SF 38651)

Known Issue: Results of Sequence Detection (*.OptHash, *.OptSequence) are not removed if you run the simulation
again with the sensor Layer parameter set to a value other than Sequence. (TFS 674798)
18.1.9. Automation
Known Issue: Block Recording: sometimes you cannot play the recorded blocks due to one or several hidden blocks
in error. A message warns you of hidden blocks in error. (TFS 820381)
Workaround or Solution:
1. Right-click in the Block Recording panel and deactivate Hide Non-Model Changing Blocks to show hidden blocks.
2. Delete the hidden blocks in error.
18.1.10. Miscellaneous
Known Issue: When using a SpaceClaim document that references an external document, if SpaceClaim is not
opened, the process to copy the external document into the "otherDocs" folder is not done. (810170)
Workaround or Solution: Make sure to open SpaceClaim (via Workbench) before saving the Workbench project. Then
you can save the Workbench project.

Known Issue: Locked documents are not supported. (TFS 812291 SF 56983)
Workaround or Solution: Unlock the document to modify your Speos project. Beware that unlocking must be
performed in the same version as the one in which the document was locked.

Known Issue: Licensing Management: opening a version 2022 R2 then opening a version 2023 R1 does not work.
(TFS 730734)
Workaround or Solution: First open the version 2023 R1, then open the version 2022 R2.

Known Issue: Axes are not exported when saving a project to a format other than scdocx. (TFS 656985 - SF 38823)

Known Issue: When a project is added to a geometry in Workbench, the Output Files are not copied. (TFS 202762)

Known Issue: The Automatically copy selected files under document's folder option and the Copy under document
action, which allow you to copy files into the Speos Input Files folder of the current project, do not apply to
*.SPEOSLightBox files, 3D Textures, and CAD parts imported using the Geometry Update tool. (TFS 558361)

Known Issue: The following characters are not supported in Speos feature names: " < > | : * ? \ / (TFS 263147)

Known Issue: After opening a component (Open Component) from the Structure tree, Speos objects added to this
component are not displayed in the tree. (TFS 449252)
Workaround or Solution: Refresh the tree.

Known Issue: After a Speos tree refresh, the UV Mapping and Local Meshing sections may not appear at the same
place in the tree. (TFS 449445)

Known Issue: The Speos session closes when a Speos instance is opened from Workbench. (TFS 372481)

Known Issue: Unwanted 3D view rendering artifacts can be seen with certain geometries when the point of view is
very far from the origin of the scene. (TFS 435735 SF 32841)

Known Issue: If you import an external CAD part using the Open file command, no CAD Update link is created; you
then cannot use the Geometry Update tool to update the part. (TFS 408007 SF 32027)

Known Issue: In some cases, a version upgrade might prevent the Speos block from being correctly loaded in Ansys
Workbench.
Workaround or Solution: Delete or adjust the version of the following environment variables: SPEOSSC_BIN_PATH,
SPACECLAIM_PATH. Then, reload the Speos block in Ansys Workbench.

Known Issue: When using a comma as decimal separator, trying to modify values from the definition panel causes
all decimals to be erased. (TFS 202092)
Workaround or Solution: From your regional settings (Control Panel > Clock and Region > Region > Additional
Settings), change the decimal symbol from "," (comma) to "." (dot).

Known Issue: When importing a HUD feature from Speos for CATIA V5, the HUD elements fail to be imported with
the geometry. (TFS 214218)
18.2.1. Not enough Speos HPC licenses
Problem
The following message is displayed:
"Licensing Error
Not enough Speos HPC licenses"
Cause
The Number of threads parameter takes into account the cores of the machine first, then the threads, leading to
the error if your license has fewer cores than your computer.
Example: your computer has 6 cores (12 threads) and your license 4 cores. If you set Number of threads to more
than 4, say 8, the parameter first assigns the 6 cores of the machine to the license, then 2 threads. But you only have
a 4-core license, which leads to the error.
Solution
From the Speos options, specify the number of threads that are available in your license:
1. Go to File > Speos Options.
2. In the Light Simulation section, adjust the Number of threads so that it matches the number of threads available
in your license.
Note: Although you define a number of threads to be used for simulation, it is actually cores that are taken
from the license.
18.2.2. Proportional to Body size STEP and SAG parameters are not respected
Problem
The following message is displayed:
"Proportional to Body size" STEP and SAG parameters are not respected.
After an external part import, some microscopic or empty bodies may have been created. When using a "Proportional
to Body size" meshing on these bodies, ratios "size of body"/STEP and "size of body"/SAG are set to a minimum of
10nm, leading to larger meshing on those bodies.
Please review:
• Value of SAG
• Value of STEP
• Bodies that may be too small or empty
Cause
After an external part import, some microscopic or empty bodies may have been created. When using a "Proportional
to Body size" meshing on these bodies, ratios "size of body"/STEP and "size of body"/SAG are set to a minimum of
10nm, leading to larger meshing on those bodies.
Solution
If the solutions provided in the error message do not work, try the following solution:
In the Repair tab, click Small Faces.
Note: To find the body whose size is not proportional, you can use the Measure tool in the Measure tab.
Note: The rounding does not impact the simulation: light behavior on a body with a bounding box smaller
than 10nm is not managed, as the body is smaller than the wavelength (which otherwise would create
artifacts).
18.2.3. Surface Extrapolated
Problem
The following message is displayed:
"Surface Extrapolated"
Cause
The technique used to calculate the image position requires launching some rays around the image. When the
constraints are tight in the HUD Optical Design (HOD), the image projection may be very close to the mirror sides.
That means some vignetting may happen: some of the additional rays launched for the HUD Optical Analysis (HOA)
calculation may miss the border of the mirror.
Solution
1. Modify the mirror size and/or the pupil diameter.
2. Analyze the system with HOA.
3. Iterate until you get a correct system.
18.2.4. Freestyle Lens cannot be generated
Problem
The defined Freestyle Lens cannot be generated on the given support.
Cause
Solution
ANSYS, ANSYS Workbench, AUTODYN, CFX, FLUENT and any and all ANSYS, Inc. brand, product, service and feature
names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries located in the
United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc. under license. CFX is a trademark of
Sony Corporation in Japan. All other brand, product, service and feature names or trademarks are the property of
their respective owners. FLEXlm and FLEXnet are trademarks of Flexera Software LLC.
Disclaimer Notice
THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFIDENTIAL
AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products and
documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement that
contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting laws,
warranties, disclaimers, limitations of liability, and remedies, and other provisions. The software products and
documentation may be used, disclosed, transferred, or copied only in accordance with the terms and conditions of
that software license agreement.
ANSYS, Inc. and ANSYS Europe, Ltd. are UL registered ISO 9001:2015 companies.
Third-Party Software
See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software
and third-party software. If you are unable to access the Legal Notice, contact Ansys, Inc.
Published in the U.S.A.
Protected by US Patents 7,639,267, 7,733,340, 7,830,377, 7,969,435, 8,207,990, 8,244,508, 8,253,726, 8,330,775,
10,650,172, 10,706,623, 10,769,850, D916,099, D916,100, 11,269,478, 11,475,184, and 2023/0004695.
Copyright © 2003-2023 ANSYS, Inc. All Rights Reserved. SpaceClaim is a registered trademark of ANSYS, Inc.
Portions of this software Copyright © 2010 Acresso Software Inc. FlexLM and FLEXNET are trademarks of Acresso
Software Inc.
Portions of this software Copyright © 2008 Adobe Systems Incorporated. All Rights Reserved. Adobe and Acrobat
are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other
countries.
Ansys Workbench and GAMBIT and all other ANSYS, Inc. product names are trademarks or registered trademarks of
ANSYS, Inc. or its subsidiaries in the United States or other countries.
Contains BCLS (Bound-Constrained Least Squares) Copyright (C) 2006 Michael P. Friedlander, Department of
Computer Science, University of British Columbia, Canada, provided under a LGPL 3 license which is included in the
SpaceClaim installation directory (lgpl-3.0.txt). Derivative BCLS source code available upon request.
Contains SharpZipLib Copyright © 2009 IC#Code.
Anti-Grain Geometry Version 2.4 Copyright © 2002-2005 Maxim Shemanarev (McSeem).
Some SpaceClaim products may contain Autodesk® RealDWG by Autodesk, Inc., Copyright © 1998-2010 Autodesk,
Inc. All rights reserved. Autodesk, AutoCAD, and Autodesk Inventor are registered trademarks and RealDWG is a
trademark of Autodesk, Inc.
CATIA is a registered trademark of Dassault Systèmes.
Portions of this software Copyright © 2010 Google. SketchUp is a trademark of Google.
Portions of this software Copyright © 1999-2006 Intel Corporation. Licensed under the Apache License, Version 2.0.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Contains DotNetBar licensed from devcomponents.com.
KeyShot is a trademark of Luxion ApS.
MatWeb is a trademark of Automation Creations, Inc.
2008 Microsoft ® Office System User Interface is licensed from Microsoft Corporation. Direct3D, DirectX, Microsoft
PowerPoint, Excel, Windows, Windows Vista and the Windows Vista Start button are trademarks or registered
trademarks of Microsoft Corporation in the United States and/or other countries.
Portions of this software Copyright © 2005 Novell, Inc. (http://www.novell.com)
Creo Parametric and PTC are registered trademarks of Parametric Technology Corporation.
Persistence of Vision Raytracer and POV-Ray are trademarks of Persistence of Vision Raytracer Pty. Ltd.
Portions of this software Copyright © 1993-2009 Robert McNeel & Associates. All Rights Reserved. openNURBS is a
trademark of Robert McNeel & Associates. Rhinoceros is a registered trademark of Robert McNeel & Associates.
Portions of this software Copyright © 2005-2007, Sergey Bochkanov (ALGLIB project). *
Portions of this software are owned by Siemens PLM © 1986-2011. All Rights Reserved. Parasolid and Unigraphics
are registered trademarks and JT is a trademark of Siemens Product Lifecycle Management Software, Inc.
This work contains the following software owned by Siemens Industry Software Limited: D-CubedTM 2D DCM ©
2021. Siemens. All Rights Reserved.
SOLIDWORKS is a registered trademark of SOLIDWORKS Corporation.
Portions of this software are owned by Spatial Corp. © 1986-2011. All Rights Reserved. ACIS and SAT are registered
trademarks of Spatial Corp.
Contains Teigha for .dwg files licensed from the Open Design Alliance. Teigha is a trademark of the Open Design
Alliance.
Development tools and related technology provided under license from 3Dconnexion. © 1992 – 2008 3Dconnexion.
All rights reserved.
TraceParts is owned by TraceParts S.A. TraceParts is a registered trademark of TraceParts S.A.
Contains a modified version of source available from Unicode, Inc., copyright © 1991-2008 Unicode, Inc. All rights
reserved. Distributed under the Terms of Use in http://www.unicode.org/copyright.html.
Portions of this software Copyright © 1992-2008 The University of Tennessee. All rights reserved. [1]
Portions of this software Copyright © XHEO INC. All Rights Reserved. DeployLX is a trademark of XHEO INC.
This software incorporates information provided by American Institute of Steel Construction (AISC) for shape data
available at http://www.aisc.org/shapesdatabase.
This software incorporates information provided by ArcelorMittal® for shape data available at
http://www.sections.arcelormittal.com/products-services/products-ranges.html.
All other trademarks, trade names or company names referenced in SpaceClaim software, documentation and
promotional materials are used for identification only and are the property of their respective owners.
*Additional notice for LAPACK and ALGLIB
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following
disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following
disclaimer listed in this license in the documentation and/or other materials provided with the distribution.
- Neither the name of the copyright holders nor the names of its contributors may be used to endorse or promote
products derived from this software without specific prior written permission.
BCLS is licensed under the GNU Lesser General Public License (LGPL) Version 3, Copyright (C) 2006 Michael P.
Friedlander, Department of Computer Science, University of British Columbia, Canada. A copy of the LGPL license
is included in the installation directory (lgpl-3.0.txt).
Please contact [email protected] for a copy of the source code for BCLS.
Eigen is licensed under the Mozilla Public License (MPL) Version 2.0, the text of which can be found at:
https://www.mozilla.org/media/MPL/2.0/index.815ca599c9df.txt. Please contact [email protected] for a
copy of the Eigen source code.
HDF5 (Hierarchical Data Format 5) Software Library and Utilities
Copyright (c) 2006, The HDF Group.
NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
Copyright (c) 1998-2006, The Board of Trustees of the University of Illinois.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted for any purpose
(including commercial purposes) provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following
disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following
disclaimer in the documentation and/or materials provided with the distribution.
3. In addition, redistributions of modified forms of the source or binary code must carry prominent notices stating
that the original code was changed and the date of the change.
4. All publications or advertising materials mentioning features or use of this software are asked, but not required,
to acknowledge that it was developed by The HDF Group and by the National Center for Supercomputing Applications
at the University of Illinois at Urbana-Champaign and credit the contributors.
5. Neither the name of The HDF Group, the name of the University, nor the name of any Contributor may be used to
endorse or promote products derived
from this software without specific prior written permission from The HDF Group, the University, or the Contributor,
respectively.
DISCLAIMER:
THIS SOFTWARE IS PROVIDED BY THE HDF GROUP AND THE CONTRIBUTORS "AS IS" WITH NO WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED. In no
event shall The HDF Group or the Contributors be liable for any damages suffered by the users arising out of the use
of this software, even if advised of the possibility of such damage.
Anti-Grain Geometry - Version 2.4 Copyright (C) 2002-2004 Maxim Shemanarev (McSeem)
Permission to copy, use, modify, sell and distribute this software is granted provided this copyright notice appears
in all copies. This software is provided "as is" without express or implied warranty, and with no claim as to its
suitability for any purpose.
Some ANSYS-SpaceClaim products may contain Autodesk® RealDWG by Autodesk, Inc., Copyright © 1998-2010
Autodesk, Inc. All rights reserved. Autodesk, AutoCAD, and Autodesk Inventor are registered trademarks and RealDWG
is a trademark of Autodesk, Inc.
CATIA is a registered trademark of Dassault Systèmes.
Portions of this software Copyright © 2013 Trimble. SketchUp is a trademark of Trimble Navigation Limited.
This software is based in part on the work of the Independent JPEG Group.
Portions of this software Copyright © 1999-2006 Intel Corporation. Licensed under the Apache License, Version 2.0.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Contains DotNetBar licensed from devcomponents.com.
Portions of this software Copyright © 1988-1997 Sam Leffler and Copyright (c) 1991-1997 Silicon Graphics, Inc.
KeyShot is a trademark of Luxion ApS.
MatWeb is a trademark of Automation Creations, Inc.
2010 Microsoft ® Office System User Interface is licensed from Microsoft Corporation. Direct3D, DirectX, Microsoft
PowerPoint, Excel, Windows/Vista/Windows 7/Windows 8/Windows 10 and their respective Start Button designs are
trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries.
Portions of this software Copyright © 2005 Novell, Inc. (Licensed at
http://stuff.mit.edu/afs/athena/software/mono_v3.0/arch/i386_linux26/mono/mcs/class/Managed.Windows.Forms/System.Windows.Forms.RTF/)
Pro/ENGINEER and PTC are registered trademarks of Parametric Technology Corporation.
POV-Ray is available without charge from http://www.pov-ray.org. No charge is being made for a grant of the license
to POV-Ray.
POV-Ray License Agreement
DISTRIBUTOR'S LICENCE AGREEMENT
Persistence of Vision Raytracer(tm) (POV-Ray(tm))
13 August 2004
Licensed Versions: Versions 3.5 and 3.6
Please read through the terms and conditions of this license carefully. This is a binding legal agreement between
you, the "Distributor" and Persistence of Vision Raytracer Pty. Ltd. ACN 105 891 870 ("POV"), a company incorporated
in the state of Victoria, Australia, for the product known as the "Persistence of Vision Raytracer(tm)", also referred
to herein as "POV-Ray(tm)". The terms of this agreement are set out at http://www.povray.org/distribution-license.html
("Official Terms"). The Official Terms take precedence over this document to the extent of any inconsistency.
1. INTRODUCTION
1.1. In this agreement, except to the extent the context requires otherwise, the following capitalized terms have the
following meanings:
(a) Distribution means:
(i) a single item of a distribution medium, including a CD Rom or DVD Rom, containing software programs and/or
data;
(ii) a set of such items;
(iii) a data file in a generally accepted data format from which such an item can be created using generally available
standard tools;
(iv) a number of such data files from which a set of such items can be created; or
(v) a data file in a generally accepted data storage format which is an archive of software programs and/or data;
(b) Derived Code means all software which is derived from or is an adaptation of any part of the Software other than
a scene file;
(c) Intellectual Rights means:
(i) all copyright, patent, trade mark, trade secret, design, and circuit layout rights;
(ii) all rights to the registration of such rights; and
(iii) all rights of a similar nature which exist anywhere in the world;
(d) Licensed Version means the version set out at the top of this agreement against the heading "Licensed Version"
and all minor releases of this version (ie releases of the form x.y.z);
(e) POV Associate means any person associated directly or indirectly with POV whether as a director, officer, employee,
subcontractor, agent, representative, consultant, licensee or otherwise;
(f) Modification Terms means the most recent version from time to time of the document of that name made available
from the Site;
(g) Revocation List means the list of that name linked to from the Official Terms;
(h) Site means www.povray.org;
(i) Software means the Licensed Version of the Persistence of Vision Raytracer(tm) (also known as POV-Ray(tm))
(including all POV-Ray program source files, executable (binary) files, scene files, documentation files, help files,
bitmaps and other POV-Ray files associated with the Licensed Version) in a form made available by
POV on the Site;
(j) User Licence means the most recent version from time to time of the document of that name made available from
the Site.
2. OPEN SOURCE DISTRIBUTIONS
2.1. In return for the Distributor agreeing to be bound by the terms of this agreement, POV grants the Distributor
permission to make a copy of the Software by including the Software in a generally recognised Distribution of a
recognised operating system where the kernel of that operating system is made available under licensing terms:
(a) which are approved by the Open Source Initiative (www.opensource.org) as complying with the "Open Source
Definition" put forward by the Open Source Initiative; or
(b) which comply with the "free software definition" of the Free Software Foundation (www.fsf.org).
2.2. As at June
2004, and without limiting the generality of the term, each of the following is a "generally recognised Distribution"
for the purposes of clause 2.1: Debian, Red Hat (Enterprise and Fedora), SuSE, Mandrake, Xandros, Gentoo and
Knoppix Linux distributions, and officially authorized distributions of the FreeBSD, OpenBSD, and NetBSD projects.
2.3. Clause 2.1 also applies to the Software being included in the above distributions 'package' and 'ports' systems,
where such exist;
2.4. Where the Distributor reproduces the Software in accordance with clause 2.1:
(a) the Distributor may rename, reorganise or repackage (without omission) the files comprising the Software where
such renaming, reorganisation or repackaging is necessary to conform to the naming or organisation scheme of the
target operating environment of the Distribution or of an established package management system of the target
operating environment of the Distribution; and
(b) the Distributor must not otherwise rename, reorganise or repackage
the Software.
3. DISTRIBUTION LICENCE
3.1. Subject to the terms and conditions of this agreement, and in return for Distributor agreeing to be bound by the
terms of this agreement, POV grants the Distributor permission to make a copy of the Software in any of the following
circumstances:
(a) in the course of providing a mirror of the POV-Ray Site (or part of it), which is made available
generally over the internet to each person without requiring that person to identify themselves and without any
other restriction other than restrictions designed to manage traffic flows;
(b) by placing it on a local area network accessible only by persons authorized by the Distributor whilst on the
Distributor's premises;
(c) where that copy is provided to a staff member or student enrolled at a recognised educational institution;
(d) by including the Software as part of a Distribution where:
(i) neither the primary nor a substantial purpose of the distribution of the Distribution is the distribution of the
Software. That is, the distribution of the Software
is merely incidental to the distribution of the Distribution; and
(ii) if the Software was not included in the Distribution, the remaining software and data included within the
Distribution would continue to function effectively and
according to its advertised or intended purpose;
(e) by including the Software as part of a Distribution where:
(i) there is no data, program or other files apart from the Software on the Distribution;
(ii) the Distribution is distributed by a person to another person known to that person; or
(iii) the Distributor has obtained explicit written authority from POV to perform the distribution, citing this clause
number, prior to the reproduction being
made.
3.2. In each case where the Distributor makes a copy of the Software in accordance with clause 3.1, the Distributor
must, unless no payment or other consideration of any type is received by Distributor in relation to the Distribution:
(a) ensure that each person who receives a copy of the Software from the Distributor is aware prior to acquiring that
copy:
(i) of the full name and contact details of the Distributor, including the Distributor's web site, street address, mail
address, and working email address;
(ii) that the Software is available without charge from the Site;
(iii) that no charge is being made for the granting of a licence over the Software.
(b) include a copy of the User Licence and this Distribution License with the copy of the Software. These licences
must be stored in the same subdirectory on the distribution medium as the Software and named in such a way as
to prominently identify their purpose;
3.3. The Distributor must not rename, reorganise or repackage any of the files comprising the Software without the
prior written authority of POV.
3.4. Except as explicitly set out in this agreement, nothing in this agreement permits Distributor to make any
modification to any part of the Software.
4. RESTRICTIONS ON DISTRIBUTION
4.1. Nothing in this agreement gives the Distributor:
(a) any ability to grant any licence in respect of the use of the
Software or any part of it to any person;
(b) any rights or permissions in respect of, including rights or permissions to distribute or permit the use of, any
Derived Code;
(c) any right to bundle a copy of the Software (or part thereof), whether or not as part of a Distribution, with any
other items, including books and magazines. POV may, in response to a request, by notice in writing and in its
absolute discretion, permit such bundling on a case by case basis. This clause 4.1(c) does not apply to Distributions
permitted under clause 2;
(d) any right, permission or authorisation to infringe any Intellectual Right held by any third party.
4.2. Distributor may charge a fee for the making or the provision of a copy of the Software.
4.3. Where the making, or the provision, of a copy of the Software is authorised under the terms of clause 3 but not
under those of clause 2 of this agreement, the total of all fees charged in relation to such making or provision and
including all fees (including shipping and handling fees) which are charged in respect
of any software, hardware or other material provided in conjunction with or in any manner which is reasonably
connected with the making, or the provision, of a copy of the Software must not exceed the reasonable costs incurred
by the Distributor in making the reproduction, or in the provision, of that copy for which the fee
is charged.
4.4. Notwithstanding anything else in this agreement, nothing in this agreement permits the reproduction of any
part of the Software by, or on behalf of:
(a) Any person currently listed on the Revocation List from time to time;
(b) Any related body corporate (as that term is defined in section 50 of the Corporations Act 2001 (Cth)) of any
person referred to in clause 4.4(a);
(c) Any person in the course of preparing any publication in any format (including books, magazines, CD Roms or
on the internet) for any of the persons identified in paragraph (a);
(d) Any person who is, or has been, in breach of this Agreement and that breach has not been waived in writing
signed by POV; or
(e) Any person to whom POV has sent a notice in writing or by email stating that that person may not distribute the
Software.
4.5. From the day two years after a version of the Software more recent than the Licensed Version is made available
by POV on the Site clause 3 only permits reproduction of the Software where the Distributor ensures that each
recipient of such a reproduction is aware, prior to obtaining that reproduction, that that reproduction of the Software
is an old version of the Software and that a more recent version of the Software is available from the Site.
5. COPYRIGHT AND NO LITIGATION
5.1. Copyright subsists in the Software and is protected by Australian and international copyright laws.
5.2. Nothing in this agreement gives Distributor any rights in respect of any Intellectual Rights in respect of the
Software or which are held by or on behalf of POV. Distributor acknowledges that it does not acquire any rights in
respect of such Intellectual Rights.
5.3. Distributor acknowledges that if it carries out any act in respect of the Software without the permission of
POV it will be liable to POV for all damages POV may suffer (and which Distributor acknowledges it may suffer) as
well as statutory damages to the maximum extent permitted by law and that it may also be liable to
criminal prosecution.
5.4. Distributor must not commence any action against any person alleging that the Software or the use or distribution
of the Software infringes any rights, including Intellectual Rights of the Distributor or of any other person. If Distributor
provides one or more copies of the Software to any other person in accordance with the agreement, Distributor
waives all rights it has, or may have in the future, to bring any action, directly or indirectly, against any person to
the extent that such an action relates to an infringement of any rights, including Intellectual Rights of any person in
any way arising from, or in relation to, the use, or distribution, (including through the authorisation of such use or
distribution) of:
(a) the Software;
(b) any earlier or later version of the Software; or
(c) any other software to the extent it incorporates elements of the software referred to in paragraphs (a) or (b) of
this clause
5.4.
6. DISCLAIMER OF WARRANTY
6.1. To the extent permitted by law, all implied terms and conditions are excluded from this agreement. Where a
term or condition is implied into this agreement and that term cannot be legally excluded, that term has effect as
a term or condition of this agreement. However, to the extent permitted by law, the liability
of POV for a breach of such an implied term or condition is limited to the fullest extent permitted by law.
6.2. To the extent permitted by law, this Software is provided on an "AS IS" basis, without warranty of any kind,
express or implied, including without limitation, any implied warranties of merchantability, fitness for a particular
purpose and non-infringement of intellectual property of any third party. The Software has inherent limitations
including design faults and programming bugs.
6.3. The entire risk as to the quality and performance of the Software is borne by Distributor, and it is Distributor's
responsibility to ensure that the Software fulfils Distributor's requirements prior to using it in any manner (other
than testing it for the purposes of this paragraph in a non-critical and non-production environment), and prior to
distributing it in any fashion.
6.4. This clause 6 is an essential and material term of, and cannot be severed from, this agreement. If Distributor
does not or cannot agree to be bound by this clause, or if it is unenforceable, then Distributor must not, at any time,
make any reproductions of the Software under this agreement and this agreement gives the
Distributor no rights to make any reproductions of any part of the Software.
7. NO LIABILITY
7.1. When you distribute or use the Software you acknowledge and accept that you do so at your sole risk. Distributor
agrees that under no circumstances will it have any claim against POV or any POV Associate for any loss, damages,
harm, injury, expense, work stoppage, loss of business information, business interruption,
computer failure or malfunction which may be suffered by you or by any third party from any cause whatsoever,
howsoever arising, in connection with your use or distribution of the Software even where POV was aware, or ought
to have been aware, of the potential of such loss.
7.2. Neither POV nor any POV Associate has any liability to Distributor for any indirect, general, special, incidental,
punitive and/or consequential damages arising as a result of a breach of this agreement by POV or which arises in
any way related to the Software or the exercise of a licence granted to Distributor under this
agreement.
7.3. POV's total aggregate liability to the Distributor for all loss or damage arising in any way related to this agreement
is limited to the lesser of: (a) AU$100, and (b) the amount received by POV from Distributor as payment for the grant
of a licence under this agreement.
7.4. Distributor must bring any action against POV in any way related to this agreement or the Software within 3
months of the cause of action first arising. Distributor waives any right it has to bring any action against POV and
releases POV from all liability in respect of a cause of action if initiating process in relation to that action is not served
on POV within 3 months of the cause of action arising. Where a particular set of facts give rise to more than one cause
of action this clause 7.4 applies as if all such causes of action arise at the time the first such cause of action arises.
7.5. This clause 7 is an essential and material term of, and cannot be severed from, this agreement. If Distributor
does not or cannot agree to be bound by this clause, or if it is unenforceable, then Distributor must not, at any time,
make any reproductions of the Software under this agreement and this agreement gives the Distributor no rights
to make any reproductions of any part of the Software.
8. INDEMNITY
8.1. Distributor indemnifies POV and each POV Associate and holds each of them harmless against all claims which
arise from any loss, damages, harm, injury, expense, work stoppage, loss of business information, business
interruption, computer failure or malfunction, which may be suffered by Distributor or any other
party whatsoever as a consequence of:
(a) any act or omission of POV and/or any POV Associate, whether negligent or not;
(b) Distributor's use and/or distribution of the Software; or
(c) any other cause whatsoever, howsoever arising, in connection with the Software. This clause 8 is binding on
Distributor's estate, heirs, executors, legal successors, administrators, parents and/or guardians.
8.2. Distributor indemnifies POV, each POV Associate and each of the authors of any part of the Software against all
loss and damage and for every other consequence flowing from any breach by Distributor of any Intellectual Right
held by POV.
8.3. This clause 8 constitutes an essential and material term of, and cannot be severed from, this agreement. If
Distributor does not or cannot agree to be bound by this clause, or if it is unenforceable, then Distributor must not,
at any time, make any reproductions of the Software under this agreement and this agreement gives the Distributor
no rights to make any reproductions of any part of the Software.
9. HIGH RISK ACTIVITIES
9.1. This Software and the output produced by this Software is not fault-tolerant and is not designed, manufactured
or intended for use as on-line control equipment in hazardous environments requiring fail-safe performance, in
which the failure of the Software could lead or directly or indirectly to death, personal injury, or severe physical or
environmental damage ("High Risk Activities"). POV specifically disclaims all express or implied warranty of fitness
for High Risk Activities and, notwithstanding any other term of this agreement, explicitly prohibits the use or
distribution of the Software for such purposes.
10. ENDORSEMENT PROHIBITION
10.1. Distributor must not, without explicit written permission from POV, claim or imply in any way that:
(a) POV or any POV Associate officially endorses or supports the Distributor or any product (such as CD, book, or
magazine) associated with the Distributor or any reproduction of the Software made in accordance with this
agreement; or
(b) POV derives any benefit from any reproduction made in accordance with this agreement.
11. TRADEMARKS
11.1. "POV-Ray(tm)", "Persistence of Vision Raytracer(tm)" and "POV-Team(tm)" are trademarks of Persistence of
Vision Raytracer Pty. Ltd. Any other trademarks referred to in this agreement are the property of their respective
holders. Distributor must not use, apply for, or register anywhere in the world, any word, name
(including domain names), trade mark or device which is substantially identical or deceptively or confusingly similar
to any of Persistence of Vision Raytracer Pty. Ltd's trade marks.
12. MISCELLANEOUS
12.1. The Official Terms, including those documents incorporated by reference into the Official Terms, and the
Modification Terms constitute the entire agreement between the parties relating to the distribution of the Software
and, except where stated to the contrary in writing signed by POV, supersedes all previous
negotiations and correspondence in relation to it.
12.2. POV may modify this agreement at any time by making a revised licence available from the Site at
http://www.povray.org/distribution-license.html.
This agreement is modified by replacing the terms in this agreement with those of the revised licence from the time
that the revised licence is so made available. It is your responsibility to ensure that you have read and agreed to the
current version of this agreement prior to distributing the Software.
12.3. Except where explicitly stated otherwise herein, if any provision of this Agreement is found to be invalid or
unenforceable, the invalidity or unenforceability of such provision shall not affect the other provisions of this
agreement, and all provisions not affected by such invalidity or unenforceability shall remain in
full force and effect. In such cases Distributor agrees to attempt to substitute for each invalid or unenforceable
provision a valid or enforceable provision which achieves to the greatest extent possible, the objectives and intention
of the invalid or unenforceable provision.
12.4. A waiver of a right under this agreement is not effective unless given in writing signed by the party granting
that waiver. Unless otherwise stipulated in the waiver, a waiver is only effective in respect of the circumstances in
which it is given and is not a waiver in respect of any other rights or a waiver in respect of
THIS LICENSE AGREEMENT MUST ACCOMPANY ALL POV-RAY FILES WHETHER IN THEIR OFFICIAL OR CUSTOM VERSION
FORM. IT MAY NOT BE REMOVED OR MODIFIED. THIS GENERAL LICENSE AGREEMENT GOVERNS THE USE OF
POV-RAY WORLDWIDE. THIS DOCUMENT SUPERSEDES AND REPLACES ALL PREVIOUS GENERAL LICENSES.
INTRODUCTION
This document pertains to the use of the Persistence of Vision Ray Tracer (also known as POV-Ray). It applies to all
POV-Ray program source files, executable (binary) files, scene files, documentation files, help files, bitmaps and
other POV-Ray files contained in official Company archives, whether in full or any part thereof, and are herein referred
to as the "Software". The Company reserves the right to revise these rules in future versions and to make additional
rules to address new circumstances at any time. Such rules, when made, will be posted in a revised license file, the
latest version of which is available from the Company website at
http://www.povray.org/povlegal.html.
USAGE PROVISIONS
Subject to the terms and conditions of this agreement, permission is granted to the User to use the Software and
its associated files to create and render images. The creator of a scene file retains all rights to any scene files they
create, and any images generated by the Software from them. Subject to the other terms of this license, the User is
permitted to use the Software in a profit-making enterprise, provided such profit arises primarily from use of the
Software and not from distribution of the Software or a work including the Software in whole or part.
Please refer to http://www.povray.org/povlegal.html for licenses covering distribution of the Software and works
including the Software. The User is also granted the right to use the scene files, fonts, bitmaps, and include files
distributed in the INCLUDE and SCENES\INCDEMO sub-directories of the Software in their own scenes. Such permission
does not extend to any other files in the SCENES directory or its sub-directories. The SCENES files are for the User's
enjoyment and education but may not be the basis of any derivative works unless the file in question explicitly grants
permission to do such.
This licence does not grant any right of re-distribution or use in any manner other than the above. The Company
has separate license documents that apply to other uses (such as re-distribution via the internet or on CD) ; please
visit http://www.povray.org/povlegal.html for links to these. In particular you are advised that the sale, lease, or
rental of the Software in any form without written authority from the Company is explicitly prohibited. Notwithstanding
anything in the balance of this licence agreement, nothing in this licence agreement permits the installation or use
of the Software in conjunction with any product (including software) produced or distributed by any party who is,
or has been, in violation of this licence agreement or of the distribution licence
(http://www.povray.org/distribution-license.html)
(or any earlier or later versions of those documents) unless:
a. the Company has explicitly released that party in writing from the consequences of their non compliance; or
b. both of the following are true:
i. the installation or use of the Software is without the User being aware of the abovementioned violation; and
ii. the installation or use of the Software is not a result (whether direct or indirect) of any request or action of the
abovementioned party (or any of its products), any agent of that party (or any of their products), or any person(s)
involved in supplying any such product to the User.
COPYRIGHT
Copyright © 1991-2003, Persistence of Vision Team.
Copyright © 2003-2004, Persistence of Vision Raytracer Pty. Ltd.
Windows version Copyright © 1996-2003, Christopher Cason.
Copyright subsists in this Software which is protected by Australian and international copyright laws. The Software
is NOT PUBLIC DOMAIN. Nothing in this agreement shall give you any rights in respect of the intellectual property
of the Company and you acknowledge that you do not acquire any rights in respect of such intellectual property
rights. You acknowledge that the Software is the valuable intellectual property of the Company and that if you use,
modify or distribute the Software for unauthorized purposes or in an unauthorized manner (or cause or allow the
forgoing to occur), you will be liable to the Company for any damages it may suffer (and which you acknowledge it
may suffer) as well as statutory damages to the maximum extent permitted by law and also that you may be liable
to
criminal prosecution. You indemnify the Company and the authors of the Software for every single consequence
flowing from the aforementioned events.
DISCLAIMER OF WARRANTY
This Software is provided on an "AS IS" basis, without warranty of any kind,
express or implied, including without limitation, any implied warranties of merchantability, fitness for a particular
purpose and non-infringement of intellectual property of any third party. This Software has inherent limitations
including design faults and programming bugs. The entire risk as to the quality and performance of the Software is
borne by you, and it is your responsibility to ensure that it does what you require it to do prior to using it for any
purpose (other than testing it), and prior to distributing it in any fashion. Should the Software prove defective, you
agree that you alone assume the entire cost resulting in any way from such defect.
This disclaimer of warranty constitutes an essential and material term of this agreement. If you do not or cannot
accept this, or if it is unenforceable in your jurisdiction, then you may not use the Software in any manner.
NO LIABILITY
When you use the Software you acknowledge and accept that you do so at your sole risk. You agree that under no
circumstances shall you have any claim against the Company or anyone associated directly or indirectly with the
Company whether as employee, subcontractor, agent, representative, consultant, licensee or otherwise ("Company
Associates") for any loss, damages, harm, injury, expense, work stoppage, loss of business information, business
interruption, computer failure or malfunction which may be suffered by you or by any third party from any cause
whatsoever, howsoever arising, in connection with your use or distribution of the Software even where the Company
were aware, or ought to have been aware, of the potential of such loss. Damages referred to above shall include
direct, indirect, general, special, incidental, punitive and/or consequential. This disclaimer of liability constitutes
an essential and material term of this agreement. If you do not or cannot accept this, or if it is unenforceable in your
jurisdiction, then you may not use the Software.
INDEMNITY
You indemnify the Company and Company Associates and hold them harmless against any claims which may arise
from any loss, damages, harm, injury, expense, work stoppage, loss of business information, business interruption,
computer failure or malfunction, which may be suffered by you or any other party whatsoever as a consequence of
any act or omission of the Company and/or Company Associates, whether negligent or not, arising out of your use
and/or distribution of the Software, or from any other cause whatsoever, howsoever arising, in connection with the
Software. These provisions are binding on your estate, heirs, executors, legal successors, administrators, parents
and/or guardians.
This indemnification constitutes an essential and material term of this agreement. If you do not or cannot accept
this, or if it is unenforceable in your jurisdiction, then you may not use the Software.
HIGH RISK ACTIVITIES
This Software and the output produced by this Software is not fault-tolerant and is not designed, manufactured or
intended for use as on-line control equipment in hazardous environments requiring fail-safe performance, in which
the failure of the Software could lead or directly or indirectly to death, personal injury, or severe physical or
environmental damage ("High Risk Activities"). The Company specifically disclaims any express or implied warranty
of fitness for High Risk Activities and explicitly prohibits the use of the Software for such purposes.
CRYPTOGRAPHIC SIGNING OF DOCUMENTS
Changes to this Agreement and documents issued under its authority may be cryptographically signed by the POV-Ray
Team Co-ordinator's private PGP key.
In the absence of evidence to the contrary, such documents shall be considered, under the terms of this Agreement,
to be authentic provided the signature is
valid. The master copy of this Agreement at http://www.povray.org/povlegal.html will also be signed by the current
version of the team-coordinator's key.
The public key for the POV-Ray Team-coordinator can be retrieved from the location https://secure.povray.org/keys/.
The current fingerprint for it is
B4DD 932A C080 C3A3 6EA2 9952 DB04 4A74 9901 4518.
MISCELLANEOUS
This Agreement constitutes the complete agreement concerning this license. Any changes to this Agreement must
be in writing and may take the form of notifications by the Company to you, or through posting notifications on the
Company website. THE USE OF THIS SOFTWARE BY ANY PERSON OR ENTITY IS EXPRESSLY MADE CONDITIONAL ON
THEIR ACCEPTANCE OF THE TERMS SET FORTH HEREIN. Except where explicitly stated otherwise herein, if any provision
of this Agreement is found to be invalid or unenforceable, the invalidity or unenforceability of such provision shall
not affect the other provisions of this Agreement, and all provisions not affected by such invalidity or
unenforceability shall remain in full force and effect. In such cases you agree to attempt to substitute for each
invalid or unenforceable provision a valid or enforceable provision which achieves, to the greatest extent possible,
the objectives and intention of the invalid or unenforceable provision. The validity and interpretation of this
Agreement will be governed by the laws of Australia in the state of Victoria (except for conflict of law provisions).
CONTACT INFORMATION
License inquiries can be made via email; please use the following address (but see below prior to emailing):
team-coord-[three-letter month]-[four-digit year]@povray.org. For example, team-coord-jun-2004@povray.org should
be used if, at the time you send the email, it is the month of June 2004. The changing email addresses are necessary
to combat spam and email viruses. Old email addresses may be deleted at our discretion.
Note that the above address may change for reasons other than that given above; please check the version of this
document at http://www.povray.org/povlegal.html for the current address. Note that your inability or failure to
contact us for any reason is not an excuse for violating this licence.
Do NOT send any attachments of any sort other than by prior arrangement.
EMAIL MESSAGES INCLUDING ATTACHMENTS WILL BE DELETED UNREAD.
The following postal address is only for official license business. Please note that it is preferred that initial queries
about licensing be made via email; postal mail should only be used when email is not possible, or when written
documents are being exchanged by prior arrangement.
Persistence of Vision Raytracer Pty. Ltd.
PO Box 407
Williamstown, Victoria 3016
Australia
Portions of this software are owned by Siemens PLM © 1986-2013. All Rights Reserved. Parasolid, Unigraphics, and
SolidEdge are registered trademarks and JT is a trademark of Siemens Product Lifecycle Management Software,
Inc.
SolidWorks is a registered trademark of SolidWorks Corporation.
Portions of this software are owned by Spatial Corp. © 1986-2013. All Rights Reserved. ACIS, SAT and SAB are registered
trademarks of Spatial Corp.
Contains Teigha for .dwg files licensed from the Open Design Alliance. Teigha is a trademark of the Open Design
Alliance.
Development tools and related technology provided under license from 3Dconnexion. © 1992-2008 3Dconnexion.
All rights reserved.
TraceParts is owned by TraceParts S.A. TraceParts is a registered trademark of TraceParts S.A.
Copyright © 1991-2017 Unicode, Inc. All rights reserved.
Distributed under the Terms of Use in http://www.unicode.org/copyright.html. Permission is hereby granted, free
of charge, to any person obtaining a copy of the Unicode data files and any associated documentation (the "Data
Files") or Unicode software and any associated documentation (the "Software") to deal in the Data Files or Software
without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or
sell copies of the Data Files or Software, and to permit persons to whom the Data Files or Software are furnished to
do so, provided that either (a) this copyright and permission notice appear with all copies of the Data Files or
Software, or (b) this copyright and permission notice appear in associated Documentation.
THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD
PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE
FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER
RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THE DATA FILES OR
SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to
promote the sale, use or other dealings in these Data Files or Software without prior written authorization of the
copyright holder.
Portions of this software Copyright © 1992-2008 The University of Tennessee. All rights reserved.
This product includes software developed by XHEO INC (http://xheo.com).
Portions of this software are owned by Tech Soft 3D, Inc. Copyright © 1996-2013. All rights reserved. HOOPS is a
registered trademark of Tech Soft 3D, Inc.
Portions of this software are owned by MachineWorks Limited. Copyright ©2013. All rights reserved. Polygonica is
a registered trademark of MachineWorks Limited.
Apache License
Version 2.0, January 2004 http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1
through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are
under common control with that entity. For the purposes of this definition,
"Control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial
ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source
code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form,
including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as
indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the
Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole,
an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications
or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the
Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright
owner. For the purposes of this definition,
"Submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its
representatives, including but not limited to communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and
improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing
by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been
received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants
to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce,
prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative
Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to
You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such
license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit)
alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent
infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date
such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium,
with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark,
and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part
of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute
must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text
file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the
Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices
normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License.
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum
to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying
the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license
terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works
as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions
stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution
intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede
or modify the terms of any separate license agreement you may have executed with Licensor regarding such
Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product
names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work
and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and
each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely
responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated
with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or
otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing,
shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential
damages of any character arising as a result of this License or out of the use or inability to use the Work (including
but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other
commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may
choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or
rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf
and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend,
and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by
reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS