SAP HANA SQL Script Reference en
3 What is SQLScript? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 SQLScript Security Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 SQLScript Processing Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Orchestration Logic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Declarative Logic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
15 Supportability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
15.1 M_ACTIVE_PROCEDURES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
15.2 Query Export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
SQLScript Query Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
15.3 Type and Length Check for Table Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
15.4 SQLScript Debugger. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Conditional Breakpoints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Watchpoints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Break on Error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Save Table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
15.5 EXPLAIN PLAN FOR Call. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
15.6 SQLScript Code Analyzer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
15.7 SQLScript Plan Profiler. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
18 Appendix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
18.1 Example code snippets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
ins_msg_proc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
This reference describes how to use the SQL extension SAP HANA SQLScript to embed data-intensive
application logic into SAP HANA.
SQLScript is a collection of extensions to the Structured Query Language (SQL). The extensions include:
● Data extension, which allows the definition of table types without corresponding tables
● Functional extension, which allows the definition of (side-effect free) functions which can be used to
express and encapsulate complex data flows
● Procedural extension, which provides imperative constructs executed in the context of the database
process.
The motivation behind SQLScript is to embed data-intensive application logic into the database. Currently,
applications offload only very limited functionality into the database using SQL; most of the application logic is
executed on an application server. As a result, the data to be operated upon must be copied from the database
to the application server and back. When executing data-intensive logic, this
copying of data can be very expensive in terms of processor and data transfer time. Moreover, when using an
imperative language like ABAP or JAVA for processing data, developers tend to write algorithms which follow a
one-tuple-at-a-time semantics (for example, looping over rows in a table). However, these algorithms are hard
to optimize and parallelize compared to declarative set-oriented languages like SQL.
The SAP HANA database is optimized for modern technology trends and takes advantage of modern hardware,
for example, by having data residing in the main memory and allowing massive parallelization on multi-core
CPUs. The goal of the SAP HANA database is to support application requirements by making use of such
hardware. The SAP HANA database exposes a very sophisticated interface to the application, consisting of
many languages. The expressiveness of these languages far exceeds that attainable with OpenSQL. The set of
SQL extensions for the SAP HANA database, which allows developers to push data-intensive logic to the
database, is called SQLScript. Conceptually SQLScript is related to stored procedures as defined in the SQL
standard, but SQLScript is designed to provide superior optimization possibilities. SQLScript should be used in
cases where other modeling constructs of SAP HANA, for example analytic views or attribute views, are not
sufficient. For more information on how to best exploit the different view types, see "Exploit Underlying Engine".
The set of SQL extensions is the key to avoiding massive data copies to the application server and to
leveraging the sophisticated parallel execution strategies of the database. SQLScript addresses the following
problems:
● Decomposing an SQL query can only be performed by using views. However, when decomposing complex
queries by using views, all intermediate results are visible and must be explicitly typed. Moreover, SQL
views cannot be parameterized, which limits their reuse. In particular they can only be used like tables and
embedded into other SQL statements.
● SQL queries do not have features to express business logic (for example a complex currency conversion).
As a consequence, such business logic cannot be pushed down into the database (even if it is mainly based
on standard aggregations like SUM(Sales), and so on).
● An SQL query can only return one result at a time. As a consequence, the computation of related result
sets must be split into separate, usually unrelated, queries.
● SQLScript encourages developers to implement algorithms using a set-oriented paradigm rather than a
one-tuple-at-a-time paradigm, but some algorithms, for example iterative approximation algorithms, still
require imperative logic. SQLScript therefore makes it possible to mix imperative constructs known from
stored procedures with declarative ones.
Related Information
You can develop secure procedures using SQLScript in SAP HANA by observing the following
recommendations.
Using SQLScript, you can read and modify information in the database. In some cases, depending on the
commands and parameters you choose, you can create a situation in which data leakage or data tampering
can occur. To prevent this, SAP recommends using the following practices in all procedures.
● Mark each parameter using the keywords IN or OUT. Avoid using the INOUT keyword.
● Use the INVOKER keyword when you want the procedure to run with the privileges of the user who invokes
it. The default keyword, DEFINER, runs the procedure with the privileges of its owner.
● Mark read-only procedures using READS SQL DATA whenever it is possible. This ensures that the data and
the structure of the database are not altered.
Tip
● Ensure that the types of parameters and variables are as specific as possible. Avoid using VARCHAR, for
example. By reducing the length of variables you can reduce the risk of injection attacks.
● Perform validation on input parameters within the procedure.
Dynamic SQL
In SQLScript you can create dynamic SQL using one of the following commands: EXEC and EXECUTE
IMMEDIATE. These commands allow the use of variables in places where they are otherwise not supported,
but in these situations you risk injection attacks unless you perform input validation within the procedure. In
some cases injection attacks can occur by way of data from another database table.
To avoid potential vulnerability from injection attacks, consider using the following methods instead of dynamic
SQL:
● Use static SQL statements. For example, use a static SELECT statement and pass the values in the WHERE
clause instead of using EXECUTE IMMEDIATE.
● Use server-side JavaScript to write this procedure instead of using SQLScript.
● Perform validation on input parameters within the procedure using either SQLScript or server-side
JavaScript.
● Use APPLY_FILTER if you need a dynamic WHERE condition
● Use the SQL Injection Prevention Function
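A sketch of the static-SQL and APPLY_FILTER alternatives follows; the CUSTOMERS table, the CUSTOMERS_T table type, and the procedure names are illustrative, not part of the reference:

```sql
-- Risky pattern (for contrast): concatenating input into dynamic SQL
-- EXECUTE IMMEDIATE 'SELECT * FROM customers WHERE name = ''' || :in_name || '''';

-- Safer: a static statement with the value bound in the WHERE clause
CREATE PROCEDURE find_customer (IN in_name NVARCHAR(100), OUT result CUSTOMERS_T)
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  result = SELECT id, name FROM customers WHERE name = :in_name;
END;

-- Safer: APPLY_FILTER for a dynamic WHERE condition (validate :in_filter first)
CREATE PROCEDURE filter_customers (IN in_filter NVARCHAR(512), OUT result CUSTOMERS_T)
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  all_rows = SELECT id, name FROM customers;
  result   = APPLY_FILTER(:all_rows, :in_filter);
END;
```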
Escape Code
You might need to use some SQL statements that are not supported in SQLScript, for example, the GRANT
statement. In other cases you might want to use the Data Definition Language (DDL) in which some <name>
To avoid potential vulnerability from injection attacks, consider using the following methods instead of escape
code:
Tip
For more information about security in SAP HANA, see the SAP HANA Security Guide.
Related Information
To better understand the features of SQLScript and their impact on execution, it can be helpful to understand
how SQLScript is processed in the SAP HANA database.
When a user defines a new procedure, for example using the CREATE PROCEDURE statement, the SAP HANA
database query compiler processes the statement in a similar way to an SQL statement. A step-by-step
analysis of the process flow follows:
When the procedure starts, the invoke activity can be divided into two phases:
1. Compilation
○ Code generation - for declarative logic the calculation models are created to represent the data flow
defined by the SQLScript code. It is optimized further by the calculation engine, when it is instantiated.
For imperative logic the code blocks are translated into L-nodes.
○ The calculation models generated in the previous step are combined into a stacked calculation model.
2. Execution - the execution commences with binding actual parameters to the calculation models. When the
calculation models are instantiated they can be optimized based on concrete input provided. Optimizations
include predicate or projection embedding in the database. Finally, the instantiated calculation model is
executed by using any of the available parts of the SAP HANA database.
With SQLScript you can implement applications by using both imperative orchestration logic and (functional)
declarative logic, and this is also reflected in the way SQLScript processing works for those two coding styles.
Orchestration logic is used to implement data-flow and control-flow logic using imperative language constructs
such as loops and conditionals. The orchestration logic can also execute declarative logic, which is defined in
the functional extension by calling the corresponding procedures. In order to achieve an efficient execution on
both levels, the statements are transformed into a dataflow graph to the maximum extent possible. The
compilation step extracts data-flow oriented snippets out of the orchestration logic and maps them to data-
flow constructs. The calculation engine serves as execution engine of the resulting dataflow graph. Since the
language L is used as intermediate language for translating SQLScript into a calculation model, the range of
mappings may span the full spectrum – from a single internal L-node for a complete SQLScript script in its
simplest form, up to a fully resolved data-flow graph without any imperative code left. Typically, the dataflow
graph provides more opportunities for optimization and thus better performance.
To transform the application logic into a complex data-flow graph two prerequisites have to be fulfilled:
● All data flow operations have to be side-effect free, that is they must not change any global state either in
the database or in the application logic.
● All control flows can be transformed into a static dataflow graph.
In SQLScript the optimizer will transform a sequence of assignments of SQL query result sets to table variables
into parallelizable dataflow constructs. The imperative logic is usually represented as a single node in the
dataflow graph, and thus it is executed sequentially.
This procedure features a number of imperative constructs including the use of a cursor (with associated
state) and local scalar variables with assignments.
Declarative logic is used for efficient execution of data-intensive computations. This logic is represented
internally as data flows which can be executed in parallel. As a consequence, operations in a data-flow graph
have to be free of side effects; that is, they must not change any global state, either in the database or in the
application. The first condition is ensured by only allowing changes to the data set that is
passed as input to the operator. The second condition is achieved by allowing only a limited subset of language
features to express the logic of the operator. If those prerequisites are fulfilled, the following types of operators
are available:
Logically each operator represents a node in the data-flow graph. Custom operators have to be implemented
manually by SAP.
This document uses BNF (Backus Naur Form) which is the notation technique used to define programming
languages. BNF describes the syntax of a grammar by using a set of production rules and by employing a set of
symbols.
Symbol Description
<> Angle brackets are used to surround the name of a syntax element (BNF non-terminal) of the SQL
language.
::= The definition operator is used to provide definitions of the element appearing on the left side of
the operator in a production rule.
[] Square brackets are used to indicate optional elements in a formula. Optional elements may be
specified or omitted.
{} Braces group elements in a formula. Repetitive elements (zero or more elements) can be specified
within brace symbols.
| The alternative operator indicates that the portion of the formula following the bar is an alternative
to the portion preceding the bar.
... The ellipsis indicates that the element may be repeated any number of times. If an ellipsis appears
after grouped elements, the grouped elements enclosed with braces are repeated. If an ellipsis appears
after a single element, only that element is repeated.
!! Introduces normal English text. This is used when the definition of a syntactic element is not expressed
in BNF.
Throughout the BNF used in this document each syntax term is defined to one of the lowest term
representations shown below.
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<letter> ::= a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q |
r | s | t | u | v | w | x | y | z
| A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q |
R | S | T | U | V | W | X | Y | Z
<comma> ::= ,
<dollar_sign> ::= $
<hash_symbol> ::= #
<left_bracket> ::= [
<period> ::= .
<pipe_sign> ::= |
<right_bracket> ::= ]
<right_curly_bracket> ::= }
<sign> ::= + | -
<underscore> ::= _
Besides the built-in scalar SQL data types, SQLScript allows you to use user-defined types for tabular values.
The SQLScript type system is based on the SQL-92 type system. It supports the following primitive data types:
Note
This also holds true for SQL statements, apart from the TEXT and SHORTTEXT types.
For more information on scalar types, see SAP HANA SQL and System Views Reference, Data Types.
The SQLScript data type extension allows the definition of table types. These types are used to define
parameters for procedures representing tabular results.
Syntax
Syntax Elements
Identifies the table type to be created and, optionally, in which schema it should be created.
For more information on data types, see Scalar Data Types [page 16].
Description
Example
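A minimal example, assuming the type does not yet exist:

```sql
-- Create a table type with two columns in the current schema
CREATE TYPE tt_publishers AS TABLE (
  publisher INTEGER,
  name      VARCHAR(50)
);
```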
Syntax
Syntax Elements
The identifier of the table type to be dropped, with optional schema name
When the <drop_option> is not specified, a non-cascaded drop is performed. This drops only the specified
type, dependent objects of the type are invalidated but not dropped.
The invalidated objects can be revalidated when an object with the same schema and object name is created.
Example
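A minimal example, assuming a tt_publishers table type already exists:

```sql
-- Non-cascaded drop: dependent objects are invalidated but not dropped
DROP TYPE tt_publishers;

-- Cascaded drop: dependent objects are dropped as well
DROP TYPE tt_publishers CASCADE;
```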
You can declare a row type variable, which is a collection of scalar data types. You can use this to easily fetch a
single row from a table.
To declare row type variable, you can enumerate a list of columns, or use the TYPE LIKE keyword.
To assign values to a row type variable or to reference values of a row variable, proceed as follows.
DO BEGIN
DECLARE x, y ROW (a INT, b VARCHAR(16), c TIMESTAMP);
x = ROW(1, 'a', '2000-01-01');
x.a = 2;
y = :x;
SELECT :y.a, :y.b, :y.c FROM DUMMY;
-- Returns [2, 'a', '2000-01-01']
END;
You can fetch or select multiple values into a single row variable.
DO BEGIN
DECLARE CURSOR cur FOR SELECT 1 as a, 'a' as b, to_timestamp('2000-01-01')
as c FROM DUMMY;
DECLARE x ROW LIKE :cur;
OPEN cur;
FETCH cur INTO x;
SELECT :x.a, :x.b, :x.c FROM DUMMY;
-- Returns [1, 'a', '2000-01-01']
SELECT 2, 'b', '2000-02-02' INTO x FROM DUMMY;
SELECT :x.a, :x.b, :x.c FROM DUMMY;
-- Returns [2, 'b', '2000-02-02']
END;
Limitations
In SQLScript there are two different logic containers: Procedure and User-Defined Function.
The User-Defined Function container is separated into Scalar User-Defined Function and Table User-Defined
Function.
The following sections provide an overview of the syntactical language description for both containers.
6.1 Procedures
Procedures allow you to describe a sequence of data transformations on data passed as input and on database
tables.
Data transformations can be implemented as queries that follow the SAP HANA database SQL syntax, or by
calling other procedures. Read-only procedures can only call other read-only procedures.
● You can parameterize and reuse calculations and transformations described in one procedure in other
procedures.
● You can use and express knowledge about relationships in the data; related computations can share
common sub-expressions, and related results can be returned using multiple output parameters.
● You can define common sub-expressions. The query optimizer decides if a materialization strategy (which
avoids recomputation of expressions) or other optimizing rewrites are best to apply. In any case, it eases
the task of detecting common sub-expressions and improves the readability of the SQLScript code.
● You can use scalar variables or imperative language features if required.
Syntax
Note
The default is IN. Each parameter is marked using the keywords IN/OUT/INOUT. Input and output
parameters must be explicitly assigned a type (that means that tables without a type are not
supported)
● The input and output parameters of a procedure can have any of the primitive SQL types or a table type.
INOUT parameters can only be of the scalar type.
Note
For more information on data types see Data Types in the SAP HANA SQL and System Views Reference
on the SAP Help Portal.
● A table type previously defined with the CREATE TYPE command, see CREATE TYPE [page 17].
LANGUAGE <lang>
<lang> ::= SQLSCRIPT | R
● Indicates that the execution of the procedure is performed with the privileges of the definer of the
procedure
DEFINER
● Indicates that the execution of the procedure is performed with the privileges of the invoker of the
procedure
INVOKER
● Specifies the schema for unqualified objects in the procedure body; if nothing is specified, then the
current_schema of the session is used.
● Marks the procedure as being read-only and side-effect free - the procedure does not make modifications
to the database data or its structure. This means that the procedure does not contain DDL or DML
statements and that it only calls other read-only procedures. The advantage of using this parameter is that
certain optimizations are available for read-only procedures.
● Defines the main body of the procedure according to the programming language selected
● This statement forces sequential execution of the procedure logic. No parallelism takes place.
SEQUENTIAL EXECUTION
For more information on inserting, updating and deleting data records, see Modifying the Content of Table
Variables [page 164].
● You can modify a data record at a specific position. There are two equivalent syntax options:
● You can delete data records from a table variable. With the following syntax you can delete a single record.
● Sections of your procedures can be nested using BEGIN and END terminals
● Assignment of values to variables - an <expression> can be either a simple expression, such as a character,
a date, or a number, or it can be a scalar function or a scalar user-defined function.
● The ARRAY_AGG function returns the array by aggregating the set of elements in the specified column of
the table variable. Elements can optionally be ordered.
The CARDINALITY function returns the number of the elements in the array, <array_variable_name>.
The TRIM_ARRAY function returns the new array by removing the given number of elements,
<numeric_value_expression>, from the end of the array, <array_value_expression>.
The ARRAY function returns an array whose elements are specified in the list <array_variable_name>. For
more information see the chapter ARRAY [page 156].
● Assignment of values to a list of variables with only one function evaluation. <function_expression>
must be a scalar user-defined function, and the number of elements in <var_name_list> must be equal
to the number of output parameters of the scalar user-defined function.
● The MAP_MERGE operator is used to apply each row of the input table to the mapper function and unite all
intermediate result tables. For more information, see Map Merge Operator [page 87].
● For more information about the CE operators, see Calculation Engine Plan Operators [page 179].
● APPLY_FILTER defines a dynamic WHERE-condition <variable_name> that is applied during runtime. For
more information about that, see the chapter APPLY_FILTER [page 127].
● The UNNEST function returns a table including a row for each element of the specified array.
WITH ORDINALITY
● You use WHILE to repeatedly call a set of trigger statements while a condition is true.
● You use FOR - EACH loops to iterate over all elements in a set of data.
● Terminates a loop
● Skips a current loop iteration and continues with the next value.
● You use the SIGNAL statement to explicitly raise an exception from within your trigger procedures.
● You use the RESIGNAL statement to raise an exception on the action statement in an exception handler. If
an error code is not specified, RESIGNAL will throw the caught exception.
● You use SET MESSAGE_TEXT to deliver an error message to users when the specified error is thrown
during procedure execution.
For information on <insert_stmt>, see INSERT in the SAP HANA SQL and System Views Reference.
For information on <delete_stmt>, see DELETE in the SAP HANA SQL and System Views Reference.
For information on <update_stmt>, see UPDATE in the SAP HANA SQL and System Views Reference.
For information on <replace_stmt> and <upsert_stmt>, see REPLACE and UPSERT in the SAP HANA
SQL and System Views Reference.
For information on <truncate_stmt>, see TRUNCATE in the SAP HANA SQL and System Views Reference.
● <var_name> is a scalar variable. You can assign a selected item value to this scalar variable.
● Cursor operations
● Procedure call. For more information, see CALL: Internal Procedure Call [page 34]
Description
The CREATE PROCEDURE statement creates a procedure by using the specified programming language
<lang>.
Example
The procedure features a number of imperative constructs including the use of a cursor (with associated state)
and local scalar variables with assignments.
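The original code listing is not reproduced here; a comparable sketch with a cursor and local scalar variables, using an illustrative ORDERS table, could look like this:

```sql
CREATE PROCEDURE count_big_orders (OUT big_orders INT)
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  DECLARE v_amount DECIMAL(10,2);
  DECLARE CURSOR c_orders FOR SELECT amount FROM orders;
  big_orders = 0;
  FOR cur_row AS c_orders DO            -- cursor loop with associated state
    v_amount = cur_row.amount;          -- local scalar variable assignment
    IF :v_amount > 1000 THEN
      big_orders = :big_orders + 1;
    END IF;
  END FOR;
END;
```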
Syntax
Syntax Elements
If you do not specify the <drop_option>, the system performs a non-cascaded drop. This will only drop the
specified procedure; dependent objects of the procedure will be invalidated but not dropped.
The invalidated objects can be revalidated when an object that uses the same schema and object name is
created.
CASCADE
RESTRICT
This parameter drops the procedure only when no dependent objects exist. If this drop option is used and a
dependent object exists, an error is returned.
Description
This statement drops a procedure created using CREATE PROCEDURE from the database catalog.
Examples
You drop a procedure called my_proc from the database using a non-cascaded drop.
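For example:

```sql
DROP PROCEDURE my_proc;
```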
You can use ALTER PROCEDURE if you want to change the content and properties of a procedure without
dropping the object.
For more information about the parameters, refer to CREATE PROCEDURE [page 21].
For instance, with ALTER PROCEDURE you can change the content of the body itself. Consider the following
GET_PROCEDURES procedure that returns all procedure names on the database.
The procedure GET_PROCEDURES should now be changed to return only valid procedures. In order to do so, use
ALTER PROCEDURE:
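A sketch of both steps, assuming the procedure reads the SYS.PROCEDURES catalog view (whose IS_VALID column indicates whether a procedure is valid):

```sql
-- Original: returns all procedure names
CREATE PROCEDURE get_procedures (
    OUT procedures TABLE (schema_name NVARCHAR(256), procedure_name NVARCHAR(256)))
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  procedures = SELECT schema_name, procedure_name FROM SYS.PROCEDURES;
END;

-- Changed in place to return only valid procedures
ALTER PROCEDURE get_procedures (
    OUT procedures TABLE (schema_name NVARCHAR(256), procedure_name NVARCHAR(256)))
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  procedures = SELECT schema_name, procedure_name
                 FROM SYS.PROCEDURES
                WHERE is_valid = 'TRUE';
END;
```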
Besides changing the procedure body, you can also change the language <lang> of the procedure, the default
schema <default_schema_name> as well as change the procedure to read only mode (READS SQL DATA).
Note
If the default schema and read-only mode are not explicitly specified, they will be removed. Language is
defaulted to SQLScript.
Note
You must have the ALTER privilege for the object you want to change.
Syntax
Syntax Elements
The identifier of the procedure to be altered, with the optional schema name.
WITH PLAN
Specifies that internal debug information should be created during execution of the procedure.
Description
You trigger the recompilation of the my_proc procedure to produce debugging information.
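Assuming the WITH PLAN option described above, the statement could look like:

```sql
ALTER PROCEDURE my_proc RECOMPILE WITH PLAN;
```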
A procedure can be called either by a client on the outer-most level, using any of the supported client
interfaces, or within the body of a procedure.
Recommendation
SAP recommends that you use parameterized CALL statements for better performance. The advantages
follow.
● The parameterized query compiles only once, thereby reducing the compile time.
● A stored query string in the SQL plan cache is more generic and a precompiled query plan can be
reused for the same procedure call with different input parameters.
● By not using query parameters for the CALL statement, the system triggers a new query plan
generation.
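As a sketch (the procedure name and parameters are illustrative):

```sql
-- Parameterized call: compiled once, the cached plan is reused for any input
CALL get_sales_by_region(?, ?);        -- values bound through the client interface

-- Non-parameterized call: the literal forces a new plan for each distinct value
CALL get_sales_by_region('EMEA', ?);
```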
6.1.5.1 CALL
Syntax
Syntax Elements
Procedure parameters
For more information on these data types, see Backus Naur Form Notation [page 14] and Scalar Data Types
[page 16].
Parameters passed to a procedure are scalar constants and can be passed either as IN, OUT or INOUT
parameters. Scalar parameters are assumed to be NOT NULL. Arguments for IN parameters of table type can
either be physical tables or views. The actual value passed for tabular OUT parameters must be `?`.
WITH OVERVIEW
Defines that the result of a procedure call will be stored directly into a physical table.
Calling a procedure WITH OVERVIEW returns one result set that holds the information of which table contains
the result of a particular table's output variable. Scalar outputs will be represented as temporary tables with
only one cell. When you pass existing tables to the output parameters WITH OVERVIEW will insert the result-set
tuples of the procedure into the provided tables. When you pass '?' to the output parameters, temporary tables
holding the result sets will be generated. These tables will be dropped automatically once the database session
is closed.
Description
CALL conceptually returns a list of result sets with one entry for every tabular result. An iterator can be used to
iterate over these results sets. For each result set you can iterate over the result table in the same manner as
you do for query results. SQL statements that are not assigned to any table variable in the procedure body are
added as result sets at the end of the list of result sets. The type of the result structures will be determined
during compilation time but will not be visible in the signature of the procedure.
When executed by a client, CALL behaves in a way consistent with SQL standard semantics; for
example, Java clients can call a procedure using a JDBC CallableStatement. Scalar output variables are
scalar values that can be retrieved from the callable statement directly.
Note
Unquoted identifiers are implicitly treated as upper case. Quoting identifiers will respect capitalization and
allow for using white spaces which are normally not allowed in SQL identifiers.
It is also possible to use a scalar user-defined function as a parameter for a procedure call:
CALL proc(udf(), 'EUR', ?, ?);
CALL proc(udf() * udf() - 55, 'EUR', ?, ?);
In this example, udf() is a scalar user-defined function. For more information about scalar user-defined
functions, see CREATE FUNCTION [page 47]
Syntax:
Syntax Elements:
Note
Description:
For an internal procedure, in which one procedure calls another procedure, all existing variables of the caller or
literals are passed to the IN parameters of the callee and new variables of the caller are bound to the OUT
parameters of the callee. The result is implicitly bound to the variable given in the function call.
Example:
When the procedure addDiscount is called, the variable <:lt_expensive_books> is assigned to the
function and the variable <lt_on_sales> is bound by this function call.
Related Information
You can call a procedure passing named parameters by using the token =>.
For example:
When you use named parameters, you can ignore the order of the parameters in the procedure signature. Run
the following commands and you can try some of the examples below.
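A sketch, assuming a procedure with the signature (IN region NVARCHAR(4), IN year INT, OUT result ...):

```sql
-- With named parameters, the order in the call need not match the signature
CALL my_proc (year => 2024, region => 'EMEA', result => ?);
```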
Parameter Modes
The following table lists the parameters you can use when defining your procedures.
Parameter modes
Mode Description
IN An input parameter
INOUT Specifies a parameter that passes in and returns data to and from the procedure
Note
This is only supported for scalar values. The parameter needs to be parameterized if you
call the procedure, for example CALL PROC (inout_var=>?). A non-parameterized call of a
procedure with an INOUT parameter is not supported.
Both scalar and table parameter types are supported. For more information on data types, see Datatype
Extension.
Related Information
Scalar Parameters
Table Parameters
You can pass tables and views to the parameter of this function.
Note
You should always use SQL special identifiers when binding a value to a table variable.
Note
In the signature you can define default values for input parameters by using the DEFAULT keyword:
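A sketch of such a signature; the column layout of the NAMES table is an assumption based on the surrounding text:

```sql
CREATE PROCEDURE fullname (
    IN  intab     TABLE (firstname NVARCHAR(100), lastname NVARCHAR(100))
                  DEFAULT NAMES,           -- tabular default: the table NAMES
    IN  delimiter NVARCHAR(10) DEFAULT ',',
    OUT outtab    TABLE (fullname NVARCHAR(201)))
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  outtab = SELECT lastname || :delimiter || firstname AS fullname
             FROM :intab;
END;
```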
The usage of the default values is illustrated in the next example, which requires the following tables. The
procedure in the example generates a FULLNAME from the given input table and delimiter, and default
values are used for both input parameters:
END;
For the tabular input parameter INTAB the table NAMES is defined as the default, and for the scalar input
parameter DELIMITER the character ',' is defined as the default. To use the default values, you need to pass in
parameters using named parameters. Calling the procedure FULLNAME using the default values therefore
looks as follows:
FULLNAME
--------
DOE,JOHN
Now we want to pass a different table, for example MYNAMES, but still use the default delimiter value. The call
then looks as follows:
The result shows that the table MYNAMES was used:
FULLNAME
--------
DOE,ALICE
Note
Default values are not supported for output parameters.
For a tabular IN and OUT parameter the EMPTY keyword can be used to define an empty input table as a
default:
Although the general default value handling is supported for input parameters only, the DEFAULT EMPTY is
supported for both tabular IN and OUT parameters.
The following example uses DEFAULT EMPTY for the tabular output parameter, which makes it possible to
declare a procedure with an empty body.
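The original listing is not reproduced here; a possible sketch of such a procedure is:

```sql
-- DEFAULT EMPTY lets the output table stay unassigned in an empty body
CREATE PROCEDURE proc_empty (OUT outtab TABLE (col1 INT) DEFAULT EMPTY)
AS
BEGIN
END;

CALL proc_empty (outtab => ?);  -- returns an empty result set
```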
Creating the procedure without DEFAULT EMPTY causes an error indicating that OUTTAB is not assigned. The
PROC_EMPTY procedure can be called as usual and returns an empty result set.
An example of calling the function without passing an input table looks as follows:
call CHECKINPUT(result=>?)
OUT(1)
-----------------
'Input is empty'
When a procedure is created, information about the procedure can be found in the database catalog. You can
use this information for debugging purposes.
The procedures observable in the system views vary according to the privileges that a user has been granted.
The following visibility rules apply:
● CATALOG READ or DATA ADMIN – All procedures in the system can be viewed.
● SCHEMA OWNER, or EXECUTE – Only specific procedures where the user is the owner, or they have
execute privileges, will be shown.
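For example, a query against the PROCEDURES system view (the procedure name is illustrative) might look as follows:

```sql
-- Inspect catalog information about a procedure
SELECT schema_name, procedure_name, definition
  FROM SYS.PROCEDURES
 WHERE procedure_name = 'MY_PROC';
```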
Procedures can be exported and imported like tables. For more information, see Data Import Export
Statements in the SAP HANA SQL and System Views Reference.
Related Information
Structure
Structure
6.1.7.3 SYS.OBJECT_DEPENDENCIES
Dependencies between objects, for example, views that refer to a specific table.
Structure
● 0: NORMAL (default)
● 1: EXTERNAL_DIRECT (direct dependency between dependent object and base object)
● 2: EXTERNAL_INDIRECT (indirect dependency between dependent object and base object)
● 5: REFERENTIAL_DIRECT (foreign key dependency between tables)
This section explores the ways in which you can query the OBJECT_DEPENDENCIES system view.
Find all the (direct and indirect) base objects of the DEPS.GET_TABLES procedure using the following
statement.
Look at the DEPENDENCY_TYPE column in more detail. You obtained the results in the table above using a
select on all the base objects of the procedure; the objects shown include both persistent and transient
objects. You can distinguish between these object dependency types using the DEPENDENCY_TYPE column,
as follows:
Finally, to find all the dependent objects that are using DEPS.MY_PROC, use the following statement.
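Such a query might look as follows (a sketch against the documented OBJECT_DEPENDENCIES columns):

```sql
-- All dependent objects that use DEPS.MY_PROC
SELECT dependent_schema_name, dependent_object_name, dependency_type
  FROM OBJECT_DEPENDENCIES
 WHERE base_schema_name = 'DEPS'
   AND base_object_name = 'MY_PROC';
```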
6.1.7.4 PROCEDURE_PARAMETER_COLUMNS
PROCEDURE_PARAMETER_COLUMNS provides information about the columns used in table types which
appear as procedure parameters. The information is provided for all table types in use, in-place types and
externally defined types.
There are two different kinds of user-defined functions (UDF): Table User-Defined Functions and Scalar User-
Defined Functions. They are referred to as Table UDF and Scalar UDF in the following table. They differ in terms
of their input and output parameters, functions supported in the body, and in the way they are consumed in
SQL statements.
Functions Calling
● Table UDF: can only be called in the FROM clause of an SQL statement, in the same positions as
table names. For example, SELECT * FROM myTableUDF(1).
● Scalar UDF: can be called in SQL statements in the same positions as table column names, that is,
in the SELECT and WHERE clauses of SQL statements. For example, SELECT myScalarUDF(1) AS
myColumn FROM DUMMY.
Output
● Table UDF: must return a table whose type is defined in <return_type>.
● Scalar UDF: must return scalar values specified in <return_parameter_list>.
This SQL statement creates read-only user-defined functions that are free of side effects. This means that
neither DDL, nor DML statements (INSERT, UPDATE, and DELETE) are allowed in the function body. All
functions or procedures selected or called from the body of the function must be read-only.
Syntax
Syntax Elements
To look at a table type previously defined with the CREATE TYPE command, see CREATE TYPE [page 17].
Table UDFs must return a table whose type is defined by <return_table_type>. Scalar UDFs must return
scalar values specified in <return_parameter_list>.
The following expression defines the structure of the returned table data.
LANGUAGE <lang>
<lang> ::= SQLSCRIPT
Default: SQLSCRIPT
Note
DEFINER
INVOKER
Specifies that the execution of the function is performed with the privileges of the invoker of the function.
Specifies the schema for unqualified objects in the function body. If nothing is specified, then the
current_schema of the session is used.
Defines the main body of the table user-defined functions and scalar user-defined functions. Since the function
is flagged as read-only, neither DDL, nor DML statements (INSERT, UPDATE, and DELETE), are allowed in the
function body. A scalar UDF does not support table operations in the function body and variables of type
TABLE as input.
Note
Scalar functions can be marked as DETERMINISTIC, if they always return the same result any time they are
called with a specific set of input parameters.
Defines one or more local variables with associated scalar type or array type.
An array type has <type> as its element type. An array has an index range from 1 to 2,147,483,647, which is
the limit of the underlying structure.
You can assign default values by specifying <expression>s. See Expressions in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
For further information on the definitions in <func_stmt>, see CREATE PROCEDURE [page 21].
Example
How to call the table function scale is shown in the following example:
The following example shows how to create a scalar function named func_add_mul that takes two values of
type DOUBLE and returns two values of type DOUBLE:
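Since the original listing is not reproduced here, a possible sketch is the following (the body is an illustrative reconstruction):

```sql
-- Scalar UDF with two input and two output values
CREATE FUNCTION func_add_mul (x DOUBLE, y DOUBLE)
  RETURNS result_add DOUBLE, result_mul DOUBLE
AS
BEGIN
  result_add = :x + :y;
  result_mul = :x * :y;
END;

-- Used in the projection list of a query
SELECT func_add_mul(1.0, 2.0).result_add AS added FROM DUMMY;
```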
In a query you can either use the scalar function in the projection list or in the where-clause. In the following
example the func_add_mul is used in the projection list:
Besides using the scalar function in a query you can also use a scalar function in scalar assignment, e.g.:
You can use ALTER FUNCTION if you want to change the content and properties of a function without dropping
the object.
For more information about the parameters please refer to CREATE FUNCTION. For instance, with ALTER
FUNCTION you can change the content of the body itself. Consider the following procedure GET_FUNCTIONS
that returns all function names on the database.
AS
BEGIN
return SELECT schema_name AS schema_name,
function_name AS name
FROM FUNCTIONS;
END;
The function GET_FUNCTIONS should now be changed to return only valid functions. In order to do so, we will
use ALTER FUNCTION:
AS
BEGIN
return SELECT schema_name AS schema_name,
function_name AS name
FROM FUNCTIONS
WHERE IS_VALID = 'TRUE';
END;
Besides changing the function body, you can also change the default schema <default_schema_name>.
Note
You need the ALTER privilege for the object you want to change.
Syntax
Syntax Elements
When <drop_option> is not specified, a non-cascaded drop is performed. This drops only the specified
function; dependent objects of the function are invalidated but not dropped.
The invalidated objects can be revalidated when an object that has same schema and object name is created.
CASCADE
RESTRICT
Drops the function only when dependent objects do not exist. If this drop option is used and a dependent
object exists, an error is thrown.
Description
Drops a function created using CREATE FUNCTION from the database catalog.
You drop a function called my_func from the database using a non-cascaded drop.
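Using the function name from the description, the statement would be:

```sql
-- Non-cascaded drop: dependent objects are invalidated, not dropped
DROP FUNCTION my_func;
```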
The following table lists the parameters you can use when defining your user-defined functions.
Table user-defined functions
● Can have a list of input parameters and must return a table whose type is defined in <return type>.
● Input parameters must be explicitly typed and can have any primitive SQL type or a table type.
Scalar user-defined functions
● Can have a list of input parameters and must return scalar values specified in <return parameter list>.
● Input parameters must be explicitly typed and can have any primitive SQL type.
● Using a table as an input is not allowed.
Due to the design of late materialization, the implicit SELECT statements used within a procedure (or an
anonymous block) are executed after the procedure has finished, and scalar user-defined functions (SUDFs)
are evaluated at fetch time of the SELECT statement. To avoid unexpected results for statements that are out
of the statement snapshot order within a procedure or a SUDF, implicit result sets are now materialized if
the SUDF references a persistent table.
When a function is created, information about the function can be found in the database catalog. You can use
this information for debugging purposes. The functions observable in the system views vary according to the
privileges that a user has been granted. The following visibility rules apply:
● CATALOG READ or DATA ADMIN – All functions in the system can be viewed.
● SCHEMA OWNER, or EXECUTE – Only specific functions where the user is the owner, or they have
execute privileges, will be shown.
6.2.6.1 SYS.FUNCTIONS
Structure
6.2.6.2 SYS.FUNCTION_PARAMETERS
Structure
6.2.6.3 FUNCTION_PARAMETER_COLUMNS
FUNCTION_PARAMETER_COLUMNS provides information about the columns used in table types which
appear as function parameters. The information is provided for all table types in use, in-place types and
externally defined types.
In the signature you can define default values for input parameters by using the DEFAULT keyword:
The usage of default values is illustrated in the next example, for which the following tables are needed:
The function in the example generates a FULLNAME from the given input table and delimiter, with default
values used for both input parameters:
For the tabular input parameter INTAB the default table NAMES is defined and for the scalar input parameter
DELIMITER the ‘,’ is defined as default.
Querying the function FULLNAME while using the default values is done as follows:
FULLNAME
--------
DOE,JOHN
The result shows that the table MYNAMES was used:
FULLNAME
--------
DOE,ALICE
In a scalar function, default values can also be used, as shown in the next example:
Calling that function by using the default value of the variable delimiter would be the following:
Note
Default values are not supported for output parameters.
Related Information
Deterministic scalar user-defined functions always return the same result any time they are called with a
specific set of input values.
When you use such functions, it is not necessary to recalculate the result every time; you can refer to the
cached result. If you want to make a scalar user-defined function explicitly deterministic, you need to use the
optional keyword DETERMINISTIC when you create your function, as demonstrated in the example below. The
lifetime of the cache entry is bound to the query execution (for example, SELECT/DML). After the execution of
the query, the cache is destroyed.
Sample Code
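The sample listing is not reproduced here; a minimal sketch of a deterministic scalar UDF (names and body are illustrative) is:

```sql
-- DETERMINISTIC: the result for a given input is cached per statement execution
CREATE FUNCTION func_double (x INT)
  RETURNS y INT DETERMINISTIC
AS
BEGIN
  y = :x * 2;
END;
```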
Note
In the system view SYS.FUNCTIONS, the column IS_DETERMINISTIC provides information about whether a
function is deterministic or not.
Non-Deterministic Functions
The following non-deterministic functions cannot be specified in deterministic scalar user-defined functions.
Specifying them returns an error at function creation time.
● nextval/currval of sequence
● current_time/current_timestamp/current_date
● current_utctime/current_utctimestamp/current_utcdate
● rand/rand_secure
● window functions
Procedure Result Cache (PRC) is a server-wide in-memory cache that caches the output arguments of
procedure calls using the input arguments as keys.
Note
Related Information
Syntax
create procedure add (in a int, in b int, out c int) deterministic as
begin
  c = :a + :b;
end
Description
You can use the keyword DETERMINISTIC when creating a new procedure, if the following conditions are met:
● The procedure always returns the same output arguments when it is called with the same input arguments,
even if the session and database state is not the same.
● The procedure has no side effects.
You can also create a procedure with the keyword DETERMINISTIC, even if it does not satisfy the above
conditions, by changing the configuration parameters described in the configuration section. Procedures
created with the keyword DETERMINISTIC are described below as "deterministic procedures", regardless of
whether they are logically deterministic or not.
By default, you cannot create a deterministic procedure that contains the following:
You can skip the determinism check when creating deterministic procedures at your own risk. This is useful
when you want to create logically deterministic procedures that contain non-deterministic statements.
When disabling the check, please be aware that the cache can be shared among users, so if the procedure
results depend on the current user (for example, the procedure security is invoker and there are user-specific
functions or use of tables with analytic privileges), it may not behave as you expect. Disabling the check is not
recommended.
● If a deterministic procedure has side effects, the side effects may or may not be visible when you call the
procedure.
● If a deterministic procedure has implicit result sets, they may or may not be returned when you call the
procedure.
● If a deterministic procedure returns different output arguments for the same input arguments, you may or
may not get the same output arguments when you call the procedure multiple times with the same input
arguments.
The configuration parameters below refer to Procedure Result Cache (PRC) under the section "sqlscript".
There are also session variables that can be set for each session and which override the settings above.
● __SQLSCRIPT_ENABLE_DETERMINISTIC_PROCEDURE_CHECK overrides enable_deterministic_procedure_check
● __SQLSCRIPT_ENABLE_DETERMINISTIC_PROCEDURE_RESULT_CACHE overrides enable_deterministic_procedure_cache
Note
Related Information
Description
The scope of the cache is the current server (for example, indexserver or cacheserver). If you call the same
deterministic procedure in the same server with the same arguments multiple times, the cached results will be
used except for the first call, unless the cached results are evicted. Since the cache is global in the current
server, the results are shared even among different query plans.
Note
Currently, only scalar parameters are supported for PRC. You can create deterministic procedures having
table parameters, but automatic caching will be disabled for such procedures.
The same keyword, DETERMINISTIC, can be used for both procedures and functions, but currently the
meaning is not the same.
For scalar user-defined functions, a new cache is created for each statement execution and destroyed after
execution. The cache is local to the current statement which has a fixed snapshot of the persistence at a point
in time. Due to this behavior, more things can be considered "deterministic" in deterministic scalar UDFs, such
as reading a table.
Related Information
Syntax
Code Syntax
Description
A library is a set of related variables, procedures and functions. There are two types of libraries: built-in libraries
and user-defined libraries. A built-in library is a system-provided library with special functions. A user-defined
library is a library written by a user in SQLScript. Users can make their own libraries and utilize them in other
procedures or functions. Libraries are designed to be used only in SQLScript procedures or functions and are
not available in other SQL statements.
Note
Any user having the EXECUTE privilege on a library can use that library by means of the USING
statement and can also access its public members.
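As a hedged sketch (the library, procedure, and names below are illustrative, not from the original text), a user-defined library might be consumed like this:

```sql
-- Illustrative library definition
CREATE LIBRARY my_lib LANGUAGE SQLSCRIPT AS BEGIN
  PUBLIC PROCEDURE add_one (IN x INT, OUT y INT) AS BEGIN
    y = :x + 1;
  END;
END;

-- A user with EXECUTE on my_lib can consume its public members via USING
CREATE PROCEDURE use_lib (OUT result INT) AS
BEGIN
  USING my_lib AS lib;
  CALL lib:add_one(41, result);
END;
```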
Limitations
● The usage of library variables is currently limited. For example, it is not possible to use library variables in
the INTO clause of a SELECT INTO statement or in the INTO clause of dynamic SQL. This limitation can
easily be circumvented by using a normal scalar variable as an intermediate value.
● It is not possible to call library procedures with hints.
● Since session variables are used for library variables, it is possible (provided you have the necessary
privileges) to read and modify arbitrary library variables of other sessions.
● Variables cannot be declared by using LIKE for specifying the type.
● Non-constant variables cannot have a default value yet.
● Library variables of table type are not supported.
● A library member function cannot be used in queries.
Related Information
Syntax
Code Syntax
Description
Access Mode
Each library member can have a PUBLIC or a PRIVATE access mode. PRIVATE members are not accessible
outside the library, while PUBLIC members can be used freely in procedures and functions.
Example
Sample Code
Setup
do begin
  declare idx int = 0;
  for idx in 1..200 do
    insert into data_table values (:idx);
  end for;
end;
Sample Code
Library DDL
public procedure get_data(in size int, out result table(col1 int)) as begin
result = select top :size col1 from data_table;
end;
end;
Sample Code
Result
call myproc(10);
Result:
count(*)
10
call myproc(150);
Result:
count(*)
100
Related Information
LIBRARIES
LIBRARY_MEMBERS
Related Information
When creating a SQLScript procedure or function, you can use the OR REPLACE option to change the defined
procedure or function, if it already exists.
Syntax
Behavior
The behavior of this command depends on the existence of the defined procedure or function. If the procedure
or function already exists, it will be modified according to the new definition. If you do not explicitly specify a
property (for example, read only), this property will be set to the default value. Please refer to the example
below. If the procedure or function does not exist yet, the command works like CREATE PROCEDURE or
CREATE FUNCTION.
Compared to using DROP PROCEDURE followed by CREATE PROCEDURE, CREATE OR REPLACE has the
following benefits:
● DROP and CREATE incur object revalidation twice, while CREATE OR REPLACE incurs it only once
● If a user drops a procedure, its privileges are lost, while CREATE OR REPLACE preserves them.
Example
Sample Code
Sample Code
-- new parameter
CREATE OR REPLACE PROCEDURE test1 (IN i int) as
begin
select :i from dummy;
select * from dummy;
end;
call test1(?);
-- default value
CREATE OR REPLACE PROCEDURE test1 (IN i int default 1) as
begin
select :i from dummy;
end;
call test1();
-- table type
create column table tab1 (a INT);
create column table tab2 (a INT);
CREATE OR REPLACE PROCEDURE test1(out ot1 table(a INT), out ot2 table(a INT))
as begin
insert into tab1 values (1);
select * from tab1;
insert into tab2 values (2);
select * from tab2;
insert into tab1 values (1);
insert into tab2 values (2);
ot1 = select * from tab1;
ot2 = select * from tab2;
end;
call test1(?, ?);
-- security
CREATE OR REPLACE PROCEDURE test1(out o table(a int))
sql security invoker as
begin
o = select 5 as a from dummy;
end;
call test1(?);
-- change security
ALTER PROCEDURE test1(out o table(a int))
sql security definer as
begin
o = select 8 as a from dummy;
end;
call test1(?);
-- result view
ALTER PROCEDURE test1(out o table(a int))
reads sql data with result view rv1 as
begin
o = select 0 as A from dummy;
end;
call test1(?);
-- table function
-- scalar function
CREATE OR REPLACE FUNCTION sfunc_param returns a int as
begin
A = 0;
end;
select sfunc_param() from dummy;
An anonymous block is an executable DML statement which can contain imperative or declarative statements.
All SQLScript statements supported in procedures are also supported in anonymous blocks. Compared to
procedures, anonymous blocks have no corresponding object created in the metadata catalog.
An anonymous block is defined and executed in a single step by using the following syntax:
DO [(<parameter_clause>)]
BEGIN [SEQUENTIAL EXECUTION]
<body>
END
<body> ::= !! supports the same feature set as a procedure does
For more information on <body>, see <procedure_body> in CREATE in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
With the parameter clause you can define a signature, whereby the value of input and output parameters needs
to be bound by using named parameters.
The following example illustrates how to call an anonymous block with a parameter clause:
For output parameters, only ? is a valid value and it cannot be omitted; otherwise the query parameter cannot
be bound. For scalar input parameters, any scalar expression can be used.
You can also parameterize the scalar parameters if needed. For example, for the above given example it would
look as follows:
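Since the original listing is not reproduced here, a possible sketch of both forms (parameter names are illustrative) is:

```sql
-- Anonymous block with a parameter clause; values bound via named parameters
DO (IN iv INT => 5, OUT ot TABLE (a INT) => ?)
BEGIN
  ot = SELECT :iv AS a FROM DUMMY;
END;

-- The scalar input parameter can also be parameterized
DO (IN iv INT => ?, OUT ot TABLE (a INT) => ?)
BEGIN
  ot = SELECT :iv AS a FROM DUMMY;
END;
```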
Contrary to a procedure, an anonymous block has no container-specific properties (for example, language,
security mode, and so on.) However, the body of an anonymous block is similar to the procedure body.
Note
The following examples further illustrate anonymous blocks:
Example 1
DO
BEGIN
  DECLARE I INTEGER;
  CREATE TABLE TAB1 (I INTEGER);
  FOR I IN 1..10 DO
    INSERT INTO TAB1 VALUES (:I);
  END FOR;
END;
This example contains an anonymous block that creates a table and inserts values into that table.
Example 2
DO
BEGIN
  T1 = SELECT * FROM TAB;
  CALL PROC3(:T1, :T2);
END;
Example 3
Procedure and function definitions may contain sensitive or critical information, but a user with system
privileges can easily see all definitions from the public system views PROCEDURES and FUNCTIONS, or from
traces, even if the procedure or function owner has restricted the authorization rights in order to secure these
objects. If application developers want to protect their intellectual property from all other users, even system
users, they can use SQLScript encryption.
Note
Decryption of an encrypted procedure or function is not supported and cannot be performed even by SAP.
Users who want to use encrypted procedures or functions are responsible for saving the original source
code and providing supportability because there is no way to go back and no supportability tools for that
purpose are available in SAP HANA.
Syntax
Code Syntax
Code Syntax
Code Syntax
Behavior
If a procedure or a function is created by using the WITH ENCRYPTION option, their definition is saved as an
encrypted string that is not human readable. That definition is decrypted only when the procedure or the
function is compiled. The body in the CREATE statement is masked in various traces or monitoring views.
Encrypting a procedure or a function with the ALTER PROCEDURE/FUNCTION statement can be achieved in
the following ways. An ALTER PROCEDURE/FUNCTION statement, accompanying a procedure body, can make
use of the WITH ENCRYPTION option, just like the CREATE PROCEDURE/FUNCTION statement.
If you do not want to repeat the procedure or function body in the ALTER PROCEDURE/FUNCTION statement
and want to encrypt the existing procedure or function, you can use ALTER PROCEDURE/FUNCTION
<proc_func_name> ENCRYPTION ON. However, the CREATE statement without the WITH ENCRYPTION
property is not secured.
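For example, the statement described above (the procedure name is illustrative) would be:

```sql
-- Encrypt an existing procedure without restating its body
ALTER PROCEDURE my_proc ENCRYPTION ON;
```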
Note
A new encryption key is generated for each procedure or function and is managed internally.
SQLScript Debugger, PlanViz, traces, monitoring views, and others that can reveal procedure definition are
not available for encrypted procedures or functions.
Object Dependency
The object dependency of encrypted procedures or functions is not secured. The purpose of encryption is to
secure the logic of procedures or functions and object dependency cannot reveal how a procedure or a
function works.
Limitation in Optimization
Some optimizations, which need analysis of the procedure or function definition, are turned off for encrypted
procedures and functions.
Calculation Views
An encrypted procedure cannot be used as a basis for a calculation view. It is recommended to use table user-
defined functions instead.
System Views
PROCEDURES
SCHEMA_NAME PROCEDURE_NAME ... IS_ENCRYPTED DEFINITION
FUNCTIONS
SCHEMA_NAME FUNCTION_NAME ... IS_ENCRYPTED DEFINITION
For every public interface that shows procedure or function definitions, such as PROCEDURES or FUNCTIONS,
the definition column displays only the signature of the procedure, if it is encrypted.
Sample Code
Result:
PROCEDURE_NAME DEFINITION
Sample Code
Result:
FUNCTION_NAME DEFINITION
Supportability
For every monitoring view showing internal queries, the internal statements will also be hidden, if its parent is
an encrypted procedure call. Debugging tools or plan analysis tools are also blocked.
● SQLScript Debugger
● EXPLAIN PLAN FOR Call
● PlanViz
● Statement-related views
● Plan Cache-related views
● M_ACTIVE_PROCEDURES
Default Behavior
Encrypted procedures or functions cannot be exported unless the option ENCRYPTED OBJECT HEADER ONLY
is applied. When the export target is an encrypted object, or when objects referenced by the export object
include an encrypted object, the export fails with the error FEATURE_NOT_SUPPORTED. However, when
exporting a schema, an encrypted procedure or function in the schema that does not have any dependent
objects is skipped during the export.
To enable export of any other objects based on an encrypted procedure, the option ENCRYPTED OBJECT
HEADER ONLY is introduced for the EXPORT statement. This option does not export encrypted objects in
encrypted state, but exports the encrypted object as a header-only procedure or function. After an encrypted
procedure or a function has been exported with the HEADER ONLY option, objects based on encrypted objects
will be invalid even after a successful import. You should alter the exported header-only procedure or function
to its original body or dummy body to make dependent objects valid.
Sample Code
Original Procedure
Sample Code
Export Statement
export all as binary into <path> with encrypted object header only;
Sample Code
Exported create.sql
Each table assignment in a procedure or table user-defined function specifies a transformation of some data by
means of classical relational operators such as selection and projection. The result of the statement is then
bound to a variable which is either used as input by a subsequent data transformation or is one of the
output variables of the procedure. In order to describe the data flow of a procedure, statements bind new
variables that are referenced elsewhere in the body of the procedure.
This approach leads to data flows which are free of side effects. The declarative nature to define business logic
might require some deeper thought when specifying an algorithm, but it gives the SAP HANA database
freedom to optimize the data flow which may result in better performance.
The following example shows a simple procedure implemented in SQLScript. To better illustrate the high-level
concept, we have omitted some details.
This SQLScript example defines a read-only procedure that has 2 scalar input parameters and 2 output
parameters of type table. The first line contains an SQL query Q1, that identifies big publishers based on the
number of books they have published (using the input parameter cnt). Next, detailed information about these
publishers along with their corresponding books is determined in query Q2. Finally, this information is
aggregated in 2 different ways in queries Q3 (aggregated per publisher) and Q4 (aggregated per year)
respectively. The resulting tables constitute the output tables of the function.
A procedure in SQLScript that only uses declarative constructs can be completely translated into an acyclic
dataflow graph where each node represents a data transformation. The example above could be represented
as the dataflow graph shown in the following image. Similar to SQL queries, the graph is analyzed and
optimized before execution. It is also possible to call a procedure from within another procedure. In terms of
the dataflow graph, this type of nested procedure call can be seen as a sub-graph that consumes intermediate
results and returns its output to the subsequent nodes. For optimization, the sub-graph of the called procedure
is merged with the graph of the calling procedure, and the resulting graph is then optimized. The optimization
applies similar rules as an SQL optimizer uses for its logical optimization (for example filter pushdown). Then
the plan is translated into a physical plan which consists of physical database operations (for example hash
joins). The translation into a physical plan involves further optimizations using a cost model as well as
heuristics.
Description
Table parameters that are defined in the Signature are either input or output. They must be typed explicitly.
This can be done either by using a table type previously defined with the CREATE TYPE command or by writing
it directly in the signature without any previously defined table type.
Example
The advantage of a previously defined table type is that it can be reused by other procedures and functions.
The disadvantage is that you must take care of its lifecycle.
The advantage of a table variable structure that you directly define in the signature is that you do not need to
take care of its lifecycle. In this case, the disadvantage is that it cannot be reused.
The type of a table variable in the body of a procedure or a table function is either derived from the SQL Query,
or declared explicitly. If the table variable derives its type from the SQL query, the SQLScript compiler
determines the type from the first assignment of the variable, thus providing a lot of flexibility. One
disadvantage of this approach is that it leads to many type conversions in the background whenever the
derived table type does not match the typed table parameters in the signature, causing additional unnecessary
conversions. Another disadvantage is the unnecessary internal statement compilation needed to derive the
types. To avoid this effort, you can declare the type of a table variable explicitly. A
declared table variable is always initialized with empty content.
Local table variables are declared by using the DECLARE keyword. For the referenced type, you can either use a
previously declared table type, or the type definition TABLE (<column_list_definition>). The next
example illustrates both variants:
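The original listing is not reproduced here; a possible sketch of both variants (the type and column names are illustrative) is:

```sql
CREATE PROCEDURE decl_tab_var (OUT outtab TABLE (i INT)) AS
BEGIN
  DECLARE tv1 my_table_type;         -- previously declared table type
  DECLARE tv2 TABLE (i INT, j INT);  -- in-place type definition
  tv2 = SELECT 1 AS i, 2 AS j FROM DUMMY;
  outtab = SELECT i FROM :tv2;
END;
```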
You can also directly assign a default value to a table variable by using the DEFAULT keyword or ‘=’. All
statements that are supported for a typical table variable assignment are also allowed as default values.
The table variable can also be flagged as read-only by using the CONSTANT keyword. As a consequence, you
cannot overwrite the variable any more. Note that if the CONSTANT keyword is used, the table variable must
have a default value; it cannot be NULL.
An alternative way to declare a table variable is to use the LIKE keyword. You can specify the variable type by
using the type of a persistent table, a view, or another table variable.
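For example (the table and variable names are illustrative, and the exact form for referencing another table variable is an assumption):

```sql
-- Inside a procedure's declaration section
DECLARE tv1 LIKE my_tab;   -- type taken from a persistent table
DECLARE tv2 LIKE :tv1;     -- type taken from another table variable
```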
Note
When you declare a table variable using LIKE <table_name>, all the attributes of the columns (like
unique, default value, and so on) in the referenced table are ignored in the declared variable except the not
null attribute.
When you use LIKE <table_name> to declare a variable in a procedure, the procedure will be dependent
on the referenced table.
Description
Local table variables are declared by using the DECLARE keyword. A table variable temp can be referenced by
using :temp. For more information, see Referencing Variables [page 85]. The <sql_identifier> must be
unique among all other scalar variables and table variables in the same code block. However, you can reuse the same identifier in different code blocks.
In each block there are table variables declared with identical names. However, since the last assignment to the
output parameter <outTab> can only have the reference of variable <temp> declared in the same block, the
result is the following:
N
----
1
In this code example, no explicit table variable declaration is done, which means that the <temp> variable is
visible in all blocks. For this reason, the result is the following:
N
----
2
For every assignment of the explicitly declared table variable, the derived column names and types on the right-
hand side are checked against the explicitly declared type on the left-hand side.
Another difference compared to derived types is that a reference to a table variable without an assignment
returns a warning during compilation.
BEGIN
DECLARE a TABLE (i DECIMAL(2,1), j INTEGER);
IF :num = 4
THEN
a = SELECT i, j FROM tab;
END IF;
END;
The following overview contrasts a table variable created by its first SQL query assignment with a table
variable declared in a block:

Create new variable
● First SQL query assignment: the variable is created implicitly by its first assignment.
● Table variable declaration in a block: the variable is created explicitly with DECLARE.

Variable scope
● First SQL query assignment: global scope, regardless of the block where it was first declared.
● Table variable declaration in a block: available in the declared block only; variable hiding is applied.

Unassigned variable check
● First SQL query assignment: no warning during compilation.
● Table variable declaration in a block: warning during compilation if it is possible to refer to the
unassigned table variable. The check is performed only if a table variable is used.
You can specify the NOT NULL constraint on columns in table types used in SQLScript. Historically, this was
not allowed by the syntax and existing NOT NULL constraints on tables and table types were ignored when
used as types in SQLScript. Now, NOT NULL constraints are taken into consideration, if specified directly in the
column list of table types. NOT NULL constraints in persistent tables and table types are still ignored by default
for backward compatibility but you can make them valid by changing the configuration, as follows:
If both are set, the session variable takes precedence. Setting it to 'ignore_with_warning' has the same
effect as 'ignore', except that you additionally get a warning whenever the constraint is ignored. With
'respect', the NOT NULL constraints (including primary keys) in tables and table types will be taken into
consideration but that could invalidate existing procedures. Consider the following example:
Sample Code
Table variables are bound by using the equality operator. This operator binds the result of a valid SELECT
statement on the right-hand side to an intermediate variable or an output parameter on the left-hand side.
Statements on the right-hand side can refer to input parameters or intermediate result variables bound by
other statements. Cyclic dependencies that result from the intermediate result assignments or from calling
other functions are not allowed, which means that recursion is not possible.
Bound variables are referenced by their name (for example, <var>). In the variable reference the variable name
is prefixed by <:> such as <:var>. The procedure or table function describe a dataflow graph using their
statements and the variables that connect the statements. The order in which statements are written in a body
can be different from the order in which statements are evaluated. In case a table variable is bound multiple
times, the order of these bindings is consistent with the order they appear in the body. Additionally, statements
are only evaluated if the variables that are bound by the statement are consumed by another subsequent
statement. Consequently, statements whose results are not consumed are removed during optimization.
Example:
In this assignment, the variable <lt_expensive_books> is bound. The <:it_books> variable in the FROM
clause refers to an IN parameter of a table type. It would also be possible to consume variables of type table in
the FROM clause which were bound by an earlier statement. <:minPrice> and <:currency> refer to IN
parameters of a scalar type.
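The binding described above can be sketched as follows; the column names are assumptions:

```sql
lt_expensive_books = SELECT title, price, crcy FROM :it_books
                     WHERE price > :minPrice AND crcy = :currency;
```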
Syntax
The parameter name definition. PLACEHOLDER is used for placeholder parameters and HINT for hint
parameters.
Description
Using column view parameter binding, it is possible to pass parameters from a procedure or a scripted
calculation view to a parameterized column view, for example, a hierarchy view, a graphical calculation
view, or a scripted calculation view.
Examples:
In the following example, assume you have the calculation view CALC_VIEW with placeholder parameters
"client" and "currency". You want to use this view in a procedure and bind the values of the parameters during
the execution of the procedure.
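Inside the procedure body, the parameter binding could be sketched like this; the input parameter names
are assumptions:

```sql
outtab = SELECT * FROM CALC_VIEW (PLACEHOLDER."$$client$$" => :in_client,
                                  PLACEHOLDER."$$currency$$" => :in_currency);
```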
The following example assumes that you have a hierarchical column view "H_PROC" and you want to use this
view in a procedure. The procedure should return an extended expression that will be passed via a variable.
CALL "EXTEND_EXPRESSION"('',?);
CALL "EXTEND_EXPRESSION"('subtree("B1")',?);
Description
The MAP_MERGE operator is used to apply each row of the input table to the mapper function and unite all
intermediate result tables. The purpose of the operator is to replace sequential FOR-loops and union patterns,
like in the example below, with a parallel operator.
Sample Code
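The sequential FOR-loop and union pattern that MAP_MERGE replaces might look like this sketch; the
table, procedure, and column names are assumptions:

```sql
DO BEGIN
  DECLARE i INT;
  DECLARE cur_a VARCHAR(10);
  t = SELECT DISTINCT a FROM tab;
  result = SELECT a FROM tab WHERE 1 = 0;  -- empty table with target structure
  FOR i IN 1 .. RECORD_COUNT(:t) DO
    SELECT a INTO cur_a FROM :t LIMIT 1 OFFSET :i - 1;
    CALL mapper(:cur_a, mapper_out);       -- assumed read-only mapper procedure
    result = SELECT * FROM :result UNION ALL SELECT * FROM :mapper_out;
  END FOR;
  SELECT * FROM :result;
END;
```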
Note
The mapper procedure is a read-only procedure with only one output that is a tabular output.
Syntax
The first input of the MAP_MERGE operator is the mapper table <table_or_table_variable>. The mapper
table is a table or a table variable over whose rows you want to iterate. In the above example, it would be
the table variable t.
The second input is the mapper function <mapper_identifier> itself. The mapper function is a function you
want to have evaluated on each row of the mapper table <table_or_table_variable>. Currently, the
MAP_MERGE operator supports only table functions as <mapper_identifier>. This means that in the above
example you need to convert the mapper procedure into a table function.
Example
As an example, let us rewrite the above example to leverage the parallel execution of the MAP_MERGE operator.
We need to transform the procedure into a table function, because MAP_MERGE only supports table functions
as <mapper_identifier>.
Sample Code
After transforming the mapper procedure into a function, we can now replace the whole FOR loop by the
MAP_MERGE operator.
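With the mapper now a table function, the loop collapses to a single operator call; the table, column, and
function names are assumptions:

```sql
DO BEGIN
  t = SELECT DISTINCT a FROM tab;
  result = MAP_MERGE(:t, mapper_func(:t.a));
  SELECT * FROM :result;
END;
```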
MAP_REDUCE is a programming model introduced by Google that allows easy development of scalable parallel
applications for processing big data on large clusters of commodity machines. The MAP_REDUCE operator is a
specialization of the MAP_MERGE operator.
Syntax
Code Syntax
We take as an example a table containing sentences with their IDs. If you want to count the number of
sentences that contain a certain character and the number of occurrences of each character in the table, you
can use the MAP_REDUCE operator in the following way:
Mapper Function
Sample Code
Mapper Function
Reducer Function
Sample Code
Reducer Function
Sample Code
do begin
declare result table(c varchar, stmt_freq int, total_freq int);
result = MAP_REDUCE(tab, mapper(tab.id, tab.sentence) group by c as X,
reducer(X.c, X));
select * from :result order by c;
end;
1. The mapper TUDF processes each row of the input table and returns a table.
5. The reducer TUDF (or procedure) processes each group and returns a table (or multiple tables).
If you use a read-only procedure as a reducer, you can fetch multiple table outputs from a MAP_REDUCE
operator. To bind the output of MAP_REDUCE operators, you can simply apply the table variable as the
parameter of the reducer specification. For example, if you want to change the reducer in the example above to
a read-only procedure, apply the following code.
do begin
declare result table(c varchar, stmt_freq int, total_freq int);
MAP_REDUCE(tab, mapper(tab.id, tab.sentence) group by c as X,
reducer_procedure(X.c, X, result));
Sample Code
do begin
declare result table(c varchar, stmt_freq int, total_freq int);
declare extra_arg1, extra_arg2 int;
declare extra_arg3, extra_arg4 table(...);
... more extra args ...
result = MAP_REDUCE(tab, mapper(tab.id,
tab.sentence, :extra_arg1, :extra_arg3, ...) group by c as X,
reducer(X.c, X, :extra_arg2, :extra_arg4,
1+1, ...));
select * from :result order by c;
end;
Note
There is no restriction about the order of input table parameters, input column parameters, extra
parameters and so on. It is also possible to use default parameter values in mapper/reducer TUDFs or
procedures.
Restrictions
● Only Mapper and Reducer are supported (no other Hadoop functionalities like group comparator, key
comparator and so on).
● The alias ID in the mapper output and the ID in the Reducer TUDF (or procedure) parameter must be the
same.
● The Mapper must be a TUDF, not a procedure.
● The Reducer procedure should be a read-only procedure and cannot have scalar output parameters.
Related Information
The SQLScript compiler combines statements to optimize code. Hints enable you to block or enforce the
inlining of table variables.
Note
Using a HINT needs to be considered carefully. In some cases, using a HINT could end up being more
expensive.
Block Statement-Inlining
The overall optimization guideline in SQLScript states that dependent statements are combined if possible. For
example, you have two table variable assignments as follows:
There can be situations, however, when the combined statements lead to a non-optimal plan and as a result, to
less-than-optimal performance of the executed statement. In these situations it can help to block the
combination of specific statements. Therefore SAP has introduced a HINT called NO_INLINE. By placing that
HINT at the end of select statement, it blocks the combination (or inlining) of that statement into other
statements. An example of using this follows:
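A sketch of blocking the combination, assuming a table t with columns a, b, and c:

```sql
tab = SELECT a, b, c FROM t WITH HINT (NO_INLINE);
tab2 = SELECT a, c FROM :tab WHERE b > 10;
```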
By adding WITH HINT (NO_INLINE) to the table variable tab, you can block the combination of that
statement and ensure that the two statements are executed separately.
Using the hint called INLINE helps in situations when you want to combine the statement of a nested
procedure into the outer procedure.
Currently, statements that belong to a nested procedure are not combined into the statements of the
calling procedures. In the following example, two procedures are defined.
By executing the procedure, ProcCaller, the two table assignments are executed separately. If you want to
have both statements combined, you can do so by using WITH HINT (INLINE) at the statement of the
output table variable. Using this example, it would be written as follows:
Now, if the procedure, ProcCaller, is executed, then the statement of table variable tab2 in ProcInner is
combined into the statement of the variable, tab, in the procedure, ProcCaller:
SELECT I FROM (SELECT I FROM T WITH HINT (INLINE)) where I > 10;
This section focuses on imperative language constructs such as loops and conditionals. The use of imperative
logic splits the logic between several data flows.
For more information, see Orchestration Logic [page 12] and Declarative SQLScript Logic [page 79].
Syntax
Syntax Elements
Description
Local variables are declared by using the DECLARE keyword and they can optionally be initialized with their
declaration. By default scalar variables are initialized with NULL. A scalar variable var can be referenced as
described above by using :var.
Tip
If you want to access the value of the variable, use :var in your code. If you want to assign a value to the
variable, use var in your code.
Recommendation
Even though the := operator is still available, SAP recommends that you use only the = operator in defining
scalar variables.
Example
CREATE PROCEDURE proc (OUT z INT) LANGUAGE SQLSCRIPT READS SQL DATA
AS
BEGIN
DECLARE a int;
DECLARE b int = 0;
DECLARE c int DEFAULT 0;
This example shows various ways of making declarations and assignments.
Note
Before the SAP HANA SPS 08 release, scalar UDF assignment to a scalar variable was not supported. If you
wanted to get the result value from a scalar UDF and consume it in a procedure, the scalar UDF had to be
used in a SELECT statement, even though this was expensive.
Now you can assign a scalar UDF with one output, or more than one output, to scalar variables, as depicted
in the following code examples.
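Hedged sketches of both cases; the UDF names and signatures are assumptions:

```sql
DECLARE i INT;
DECLARE a INT;
DECLARE b INT;
-- scalar UDF with a single output
i = udf_square(5);
-- scalar UDF with more than one output
(a, b) = udf_div_mod(17, 5);
```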
The SELECT INTO statement is widely used for assigning a result set to a set of scalar variables. Since the
statement does not accept an empty result set, it used to be necessary to define exit handlers for that
case. The introduction of DEFAULT values makes it possible to handle empty result sets, so there is no
longer any need to write exit handlers that assign default values to the target variables when the result set
is empty.
Syntax
Code Syntax
Example
DO BEGIN
DECLARE A_COPY INT;
DECLARE B_COPY VARCHAR(10);
CREATE ROW TABLE T1 (A INT NOT NULL, B VARCHAR(10));
SELECT A, B INTO A_COPY, B_COPY DEFAULT -2+1, NULL FROM T1;
--(A_COPY,B_COPY) = (-1,?), use default value
EXEC 'SELECT A FROM T1' INTO A_COPY DEFAULT 2;
--(A_COPY) = (2), exec into statement with default value
INSERT INTO T1 VALUES (0, 'sample0');
SELECT A, B INTO A_COPY, B_COPY DEFAULT 5, NULL FROM T1;
--(A_COPY,B_COPY) = (0,'sample0'), executed as-is
END;
Related Information
Local table variables are, as the name suggests, variables with a reference to tabular data structure. This data
structure originates from an SQL Query.
Global session variables can be used in SQLScript to share a scalar value between procedures and functions
that are running in the same session. The value of a global session variable is not visible from another session.
To set the value of a global session variable, you use the following syntax:

SET <key> = <value>
While <key> can only be a constant string or a scalar variable, <value> can be any expression, scalar
variable, or function that returns a value convertible to a string. Both have a maximum length of 5000
characters. The session variable cannot be explicitly typed and is of type string. If <value> is not of type
string, the value is implicitly converted to string.
The next examples illustrate how you can set the value of a session variable in a procedure:
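A sketch of such a procedure; the procedure name and parameter are assumptions, the key 'MY_VAR' is
taken from this section:

```sql
CREATE PROCEDURE set_my_var (IN new_value VARCHAR(100))
LANGUAGE SQLSCRIPT AS
BEGIN
  -- any value convertible to string can be assigned
  SET 'MY_VAR' = :new_value;
END;
```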
To retrieve the session variable, the function SESSION_CONTEXT (<key>) can be used.
For more information on SESSION_CONTEXT, see SESSION_CONTEXT in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
For example, the following function retrieves the value of session variable 'MY_VAR'
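A sketch of such a function; the function name is an assumption:

```sql
CREATE FUNCTION get_my_var ()
RETURNS res VARCHAR(5000)
LANGUAGE SQLSCRIPT AS
BEGIN
  res = SESSION_CONTEXT('MY_VAR');
END;
```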
SET <key> = <value> cannot be used in functions and procedures flagged as READ ONLY (scalar
and table functions are implicitly READ ONLY).
Note
The maximum number of session variables can be configured with the configuration parameter
max_session_variables under the section session (min=1, max=5000). The default is 1024.
Note
Session variables are null by default and can be reset to null using UNSET <key>. For more information on
UNSET, see UNSET in the SAP HANA SQL and System Views Reference.
SQLScript supports local variable declaration in a nested block. Local variables are only visible in the scope of
the block in which they are defined. It is also possible to define local variables inside LOOP / WHILE /FOR / IF-
ELSE control structures.
call nested_block(?)
--> OUT:[2]
From this result you can see that the innermost nested block value of 3 has not been passed to the val
variable. Now let's redefine the procedure without the innermost DECLARE statement:
Now when you call this modified procedure the result is:
call nested_block(?)
--> OUT:[3]
From this result you can see that the innermost nested block has used the variable declared in the second level
nested block.
Conditionals
CREATE PROCEDURE nested_block_if(IN inval INT, OUT val INT) LANGUAGE SQLSCRIPT
READS SQL DATA AS
BEGIN
DECLARE a INT = 1;
DECLARE v INT = 0;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 /(1-:inval);
IF :a = 1 THEN
DECLARE a INT = 2;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 /(2-:inval);
IF :a = 2 THEN
DECLARE a INT = 3;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 / (3-:inval);
END IF;
v = 1 / (4-:inval);
END IF;
v = 1 / (5-:inval);
END;
call nested_block_if(1, ?)
-->OUT:[1]
call nested_block_if(2, ?)
-->OUT:[2]
call nested_block_if(3, ?)
-->OUT:[3]
call nested_block_if(4, ?)
--> OUT:[2]
call nested_block_if(5, ?)
--> OUT:[1]
While Loop
For Loop
Loop
Note
The example below uses tables and values created in the For Loop example above.
8.5.1 Conditionals
Syntax
IF <bool_expr1>
THEN
Syntax Elements
Note
Specifies the comparison value. This can be based on either scalar literals or scalar variables.
Description
The IF statement consists of a Boolean expression <bool_expr1>. If this expression evaluates to true, the
statements <then_stmts1> in the mandatory THEN block are executed. The IF statement ends with END IF.
The remaining parts are optional.
If the Boolean expression <bool_expr1> does not evaluate to true, the ELSE-branch is evaluated. The
statements <else_stmts3> are executed without further checks. No ELSE-branches or ELSEIF-branches are
allowed after an else branch.
This statement can be used to simulate the switch-case statement known from many programming languages.
The predicate x [NOT] BETWEEN lower AND upper can also be used within the expression <bool_expr1>. It
works just like [ NOT ] ( x >= lower AND x <= upper). For more information, see Example 4.
Examples
Example 1
You use the IF statement to implement the functionality of the UPSERT statement in SAP HANA database.
Example 2
You use the IF statement to check if variable :found is NULL.
Example 3
It is also possible to use a scalar UDF in the condition, as shown in the following example.
Related Information
Syntax
WHILE <condition> DO
<proc_stmts>
END WHILE
Syntax Elements
The while loop executes the statements <proc_stmts> in the body of the loop as long as the Boolean
expression at the beginning <condition> of the loop evaluates to true.
The predicate x [NOT] BETWEEN lower AND upper can also be used within the expression of the
<condition>. It works just like [ NOT ] ( x >= lower AND x <= upper). For more information, see
Example 3.
Example 1
You use WHILE to increment the :v_index1 and :v_index2 variables using nested loops.
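A sketch of the nested WHILE loops; the loop bounds are assumptions:

```sql
DO BEGIN
  DECLARE v_index1 INT = 0;
  DECLARE v_index2 INT;
  WHILE :v_index1 < 3 DO
    v_index2 = 0;
    WHILE :v_index2 < 2 DO
      v_index2 = :v_index2 + 1;
    END WHILE;
    v_index1 = :v_index1 + 1;
  END WHILE;
END;
```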
Example 2
You can also use scalar UDF for the while condition as follows.
Example 3
Caution
Syntax:
Syntax elements:
REVERSE
Description:
The FOR loop iterates over a range of numeric values and binds the current value to the variable
<loop-var> in ascending order. Iteration starts with the value of <start_value> and is incremented by one
until <loop-var> is greater than <end_value>.
If <start_value> is larger than <end_value>, <proc_stmts> in the loop will not be evaluated.
Example 1
You use nested FOR loops to call a procedure that traces the current values of the loop variables appending
them to a table.
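A sketch of the nested FOR loops; the trace procedure and the loop bounds are assumptions:

```sql
DO BEGIN
  FOR i IN 1 .. 3 DO
    FOR j IN 1 .. 2 DO
      -- assumed procedure that appends the current loop values to a table
      CALL trace_loop(:i, :j);
    END FOR;
  END FOR;
END;
```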
Example 2
Syntax:
BREAK
CONTINUE
Syntax elements:
BREAK
CONTINUE
Specifies that a loop should stop processing the current iteration, and should immediately start processing the
next.
Description:
Example:
You defined the following loop sequence. If the loop value :x is less than 3 the iterations will be skipped. If :x is
5 then the loop will terminate.
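A sketch of that loop sequence:

```sql
DO BEGIN
  DECLARE x INT = 0;
  WHILE 1 = 1 DO
    x = :x + 1;
    IF :x < 3 THEN
      CONTINUE;  -- skip the rest of this iteration
    END IF;
    IF :x = 5 THEN
      BREAK;     -- terminate the loop
    END IF;
    -- statements here are reached only for x = 3 and x = 4
  END WHILE;
END;
```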
8.6 Cursors
Cursors are used to fetch single rows from the result set returned by a query. When a cursor is declared, it is
bound to the query. It is possible to parameterize the cursor query.
Syntax:
Syntax elements:
Description:
Cursors can be defined either after the signature of the procedure and before the procedure’s body or at the
beginning of a block with the DECLARE token. The cursor is defined with a name, optionally a list of parameters,
and an SQL SELECT statement. The cursor provides the functionality to iterate through a query result row-by-
row. Updating cursors is not supported.
Avoid using cursors when it is possible to express the same logic with SQL, because cursors cannot be
optimized in the same way SQL can.
Example:
You create a cursor c_cursor1 to iterate over results from a SELECT on the books table. The cursor passes
one parameter v_isbn to the SELECT statement.
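A sketch of the declaration; the column names of the books table are assumptions:

```sql
CREATE PROCEDURE cursor_example LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  DECLARE CURSOR c_cursor1 (v_isbn VARCHAR(20)) FOR
    SELECT isbn, title, price FROM books WHERE isbn = :v_isbn;
  OPEN c_cursor1('978-3-86894-012-1');
  CLOSE c_cursor1;
END;
```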
Syntax:
OPEN <cursor_name>[(<argument_list>)]
Syntax elements:
Specifies one or more arguments to be passed to the select statement of the cursor.
Description:
Evaluates the query bound to a cursor and opens the cursor, so that the result can be retrieved. If the cursor
definition contains parameters, the actual values for each of these parameters should be provided when the
cursor is opened.
This statement prepares the cursor, so that the results for the rows of a query can be fetched.
Example:
You open the cursor c_cursor1 and pass a string '978-3-86894-012-1' as a parameter.
OPEN c_cursor1('978-3-86894-012-1');
Syntax:
CLOSE <cursor_name>
Syntax elements:
Description:
Closes a previously opened cursor and releases all associated state and resources. It is important to close all
cursors that were previously opened.
Example:
CLOSE c_cursor1;
Syntax:
Syntax elements:
Specifies the name of the cursor where the result will be obtained.
Specifies the variables where the row result from the cursor will be stored.
Description:
Fetches a single row in the result set of a query and moves the cursor to the next row. It is assumed that the
cursor was declared and opened before. You can use the cursor attributes to check if the cursor points to a
valid row.
You fetch a row from the cursor c_cursor1 and store the results in the variables shown.
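A sketch of the FETCH; the scalar variables are assumptions and must be declared beforehand:

```sql
FETCH c_cursor1 INTO v_isbn, v_title, v_price;
```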
Related Information
A cursor provides a number of methods to examine its current state. For a cursor bound to variable
c_cursor1, the attributes summarized in the table below are available.
Cursor Attributes
Attribute Description
c_cursor1::ROWCOUNT Returns the number of rows that the cursor fetched so far.
This value is available after the first FETCH operation. Before the first FETCH operation, the number is 0.
Example:
The example below shows a complete procedure using the attributes of the cursor c_cursor1 to check if
fetching a set of results is possible.
Related Information
Syntax:
Syntax elements:
Specifies one or more arguments to be passed to the select statement of the cursor.
To access the row result attributes in the body of the loop, you use the displayed syntax.
Description:
Opens a previously declared cursor and iterates over each row in the result set of the query bound to the
cursor. The statements in the body of the procedure are executed for each row in the result set. After the last
row from the cursor has been processed, the loop is exited and the cursor is closed.
As this loop method takes care of opening and closing cursors, resource leaks can be avoided.
Consequently, this loop is preferred to opening and closing a cursor explicitly and using other loop-variants.
Within the loop body, the attributes of the row that the cursor currently iterates over can be accessed like an
attribute of the cursor. Assuming that <row_var> is a_row and the iterated data contains a column test, then
the value of this column can be accessed using a_row.test.
Example:
The example below demonstrates how to use a FOR-loop to loop over the results from c_cursor1.
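A sketch of such a loop; the cursor argument and the procedure called in the body are assumptions:

```sql
FOR a_row AS c_cursor1('978-3-86894-012-1') DO
  -- row attributes are accessed through the loop variable
  CALL ins_msg_proc('title: ' || a_row.title);
END FOR;
```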
Related Information
Syntax
Description
When you iterate over each row of a result set, you can use the updatable cursor to change a record directly on
the row, to which the cursor is currently pointing. The updatable cursor is a standard SQL feature (ISO/IEC
9075-2:2011).
Restrictions
● The cursor has to be declared with a SELECT statement that has the FOR UPDATE clause, in order to
prevent concurrent writes on the tables (without FOR UPDATE, the cursor is not updatable).
● The updatable cursor may be used only for UPDATE and DELETE operations.
● Using an updatable cursor in a single query instead of SQLScript is prohibited.
● Only persistent tables (both ROW and COLUMN tables) can be updated with an updatable cursor.
● UPDATE or DELETE operations performed on a table by means of an updatable cursor are allowed only
once per row.
Note
Updating the same row multiple times is possible, if several cursors selecting the same table are declared
within a single transaction.
Examples
Sample Code
DO BEGIN
DECLARE CURSOR cur FOR SELECT * FROM employees FOR UPDATE;
FOR r AS cur DO
IF r.employee_id < 10000 THEN
UPDATE employees SET employee_id = employee_id + 10000
WHERE CURRENT OF cur;
ELSE
DELETE FROM employees WHERE CURRENT OF cur;
END IF;
END FOR;
END;
The following example updates or deletes multiple tables (currently, only COLUMN tables are supported)
by means of an updatable cursor.
Note
In this case, you have to specify the columns of the tables to be locked by using the FOR UPDATE OF
clause within the SELECT statement of the cursor. Keep in mind that DML execution by means of an
updatable cursor is allowed only once per row.
DO BEGIN
DECLARE CURSOR cur FOR SELECT employees.employee_name,
departments.department_name
FROM employees, departments WHERE employees.department_id =
departments.department_id
FOR UPDATE OF employees.employee_id, departments.department_id;
FOR r AS cur DO
IF r.department_name = 'Development' THEN
UPDATE employees SET employee_id = employee_id + 10000,
department_id = department_id + 100
WHERE CURRENT OF cur;
UPDATE departments SET department_id = department_id + 100
WHERE CURRENT OF cur;
ELSEIF r.department_name = 'HR' THEN
DELETE FROM employees WHERE CURRENT OF cur;
DELETE FROM departments WHERE CURRENT OF cur;
END IF;
END FOR;
END;
Syntax:
Description:
The autonomous transaction is independent from the main procedure. Changes made and committed by an
autonomous transaction can be stored in persistency regardless of commit/rollback of the main procedure
transaction. The end of the autonomous transaction block has an implicit commit.
The examples show how commit and rollback work inside the autonomous transaction block. The first
updates (1) are committed, while the updates made in step (2) are completely rolled back. The last
updates (3) are committed by the implicit commit at the end of the autonomous block.
CREATE PROCEDURE PROC1( IN p INT , OUT outtab TABLE (A INT)) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE errCode INT;
DECLARE errMsg VARCHAR(5000);
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN AUTONOMOUS TRANSACTION
errCode= ::SQL_ERROR_CODE;
errMsg= ::SQL_ERROR_MESSAGE ;
INSERT INTO ERR_TABLE (PARAMETER,SQL_ERROR_CODE, SQL_ERROR_MESSAGE)
VALUES ( :p, :errCode, :errMsg);
END;
outtab = SELECT 1/:p as A FROM DUMMY; -- DIVIDE BY ZERO Error if p=0
END
In the example above, an autonomous transaction is used to keep the error code in the ERR_TABLE stored in
persistency.
If the exception handler block were not an autonomous transaction, then every insert would be rolled back
because they were all made in the main transaction. In this case the result of the ERR_TABLE is as shown in the
following example.
P |SQL_ERROR_CODE| SQL_ERROR_MESSAGE
--------------------------------------------
0 | 304 | division by zero undefined: at function /()
The LOG_TABLE table contains 'MESSAGE', even though the inner autonomous transaction rolled back.
Note
You have to be cautious if you access a table both before and inside an autonomous transaction started in a
nested procedure (e.g. TRUNCATE, update the same row), because this can lead to a deadlock situation.
One solution to avoid this is to commit the changes before entering the autonomous transaction in the
nested procedure.
The COMMIT command commits the current transaction, and all changes made before the COMMIT
command are written to persistence.
The ROLLBACK command rolls back the current transaction and undoes all changes since the last COMMIT.
Example 1:
In this example, the B_TAB table has one row before the PROC1 procedure is executed:
V ID
0 1
After you execute the PROC1 procedure, the B_TAB table is updated as follows:
V ID
3 1
This means only the first update in the procedure affected the B_TAB table. The second update does not affect
the B_TAB table because it was rolled back.
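A sketch of PROC1 consistent with the behavior described above; the value of the rolled-back second
update is an assumption:

```sql
CREATE PROCEDURE PROC1 LANGUAGE SQLSCRIPT AS
BEGIN
  UPDATE B_TAB SET V = 3 WHERE ID = 1;
  COMMIT;                               -- first update is persisted
  UPDATE B_TAB SET V = 9 WHERE ID = 1;  -- assumed second update
  ROLLBACK;                             -- second update is reverted
END;
```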
The following graphic provides more detail about the transactional behavior. With the first COMMIT command,
transaction tx1 is committed and the update on the B_TAB table is written to persistence. As a result of the
COMMIT, a new transaction starts, tx2.
By triggering ROLLBACK, all changes done in transaction tx2 are reverted. In Example 1, the second update is
reverted. Additionally after the rollback is performed, a new transaction starts, tx3.
Example 2:
In Example 2, the PROC1 procedure calls the PROC2 procedure. The COMMIT in PROC2 commits all changes done
in the tx1 transaction (see the following graphic). This includes the first update statement in the PROC1
procedure as well as the update statement in the PROC2 procedure. With COMMIT a new transaction starts
implicitly, tx2.
Therefore the ROLLBACK command in PROC1 only affects the previous update statement; all other updates
were committed with the tx1 transaction.
● If you used DSQL in the past to execute these commands (for example, EXEC 'COMMIT',
EXEC 'ROLLBACK'), SAP recommends that you replace all occurrences with the native commands
COMMIT/ROLLBACK because they are more secure.
● The COMMIT/ROLLBACK commands are not supported in Scalar UDF or in Table UDF.
Dynamic SQL allows you to construct an SQL statement during the execution time of a procedure. While
dynamic SQL allows you to use variables where they may not be supported in SQLScript and provides more
flexibility when creating SQL statements, it does have some disadvantages at run time:
Note
You should avoid dynamic SQL wherever possible as it may have a negative impact on security or
performance.
Syntax:
Description:
EXEC executes the SQL statement <sql_statement> passed in a string argument. EXEC does not return a
result set if <sql_statement> is a SELECT statement; you have to use EXECUTE IMMEDIATE for that
purpose.
If the query returns a single row, you can assign the value of each column to a scalar variable by using the INTO
clause.
INTO <var_name_list>
<var_name_list> ::= <var_name>[{, <var_name>}...]
<var_name> ::= <identifier>
Sample Code
END;
The EXEC INTO statement does not accept empty result sets, so you need to define exit handlers in case of an
empty result set or use DEFAULT values.
The following example illustrates how to use default values with the EXEC statement:
Sample Code
DO BEGIN
DECLARE A_COPY INT;
DECLARE B_COPY VARCHAR(10);
CREATE ROW TABLE T1 (A INT NOT NULL, B VARCHAR(10));
SELECT A, B INTO A_COPY, B_COPY DEFAULT -2+1, NULL FROM T1;
--(A_COPY,B_COPY) = (-1,?), use default value
EXEC 'SELECT A FROM T1' INTO A_COPY DEFAULT 2;
--(A_COPY) = (2), exec into statement with default value
INSERT INTO T1 VALUES (0, 'sample0');
SELECT A, B INTO A_COPY, B_COPY DEFAULT 5, NULL FROM T1;
--(A_COPY,B_COPY) = (0,'sample0'), executed as-is
END;
USING <expression_list>
<expression> can be a simple expression, such as a character, a date, or a number, or a scalar variable.
Sample Code
END;
Syntax:
Description:
EXECUTE IMMEDIATE executes the SQL statement passed in a string argument. The results of queries
executed with EXECUTE IMMEDIATE are appended to the procedures result iterator.
You can also use the INTO and USING clauses to pass scalar values in or out. With the INTO clause, the
result set is not appended to the procedure result iterator. For more information, see the EXEC statement
documentation.
Example:
You use dynamic SQL to delete the contents of the table tab, insert a value and, finally, to retrieve all results in
the table.
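A sketch of that sequence; the table name tab is taken from the description, the inserted value is an
assumption:

```sql
DO BEGIN
  EXECUTE IMMEDIATE 'DELETE FROM tab';
  EXECUTE IMMEDIATE 'INSERT INTO tab VALUES (1)';
  -- the result set of the SELECT is appended to the result iterator
  EXECUTE IMMEDIATE 'SELECT * FROM tab';
END;
```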
Related Information
This feature introduces additional support for parameterized dynamic SQL. It is possible to use scalar
variables, as well as table variables, in USING and INTO clauses and in CALL-statement parameters with
USING and INTO clauses. You can use the INTO and USING clauses to pass scalar or tabular values in or
out. With the INTO clause, the result set is not appended to the procedure result iterator.
Syntax
Description
EXEC executes the SQL statement <sql_statement> passed as a string argument. EXEC does not return a
result set if <sql_statement> is a SELECT statement; you have to use EXECUTE IMMEDIATE for that
purpose.
If the query returns result sets or output parameters, you can assign the values to scalar or table variables with
the INTO clause.
When the SQL statement is a SELECT statement and there are table variables listed in the INTO clause, the
result sets are assigned to the table variables sequentially. If scalar variables are listed in the INTO clause for a
SELECT statement, it works like <select_into_stmt> and assigns the value of each column of the first row
to a scalar variable when a single row is returned from a single result set. When the SQL statement is a CALL
statement, output parameters represented as ':<var_name>' in the SQL statement are assigned to the
variables in the INTO clause that have the same names.
Examples
Sample Code
INTO Example 1
INTO Example 2
Sample Code
INTO Example 3
Note
You can also bind scalar or table values with the USING clause.
When <sql_statement> uses ':<var_name>' as a parameter, only variable references are allowed in the
USING clause, and variables with the same name are bound to the parameter ':<var_name>'. However, when
<sql_statement> uses '?' as a parameter (unnamed parameter binding), any expression is allowed in the
USING clause and values are mapped to the parameters sequentially. Unnamed parameter binding is
supported only for input parameters.
Sample Code
USING Example 1
DO BEGIN
DECLARE tv TABLE (col1 INT) = SELECT * FROM mytab;
DECLARE a INT = 123;
DECLARE tv2 TABLE (col1 INT);
EXEC 'select col1 + :a as col1 from :tv' INTO tv2 USING :a, :tv;
SELECT * FROM :tv2;
END;
Sample Code
USING Example 2
Sample Code
USING Example 3
DO BEGIN
DECLARE tv TABLE (col1 INT) = SELECT * FROM mytab;
DECLARE a INT = 123;
EXEC 'call myproc(:a, :tv)' USING :a, :tv;
END;
Limitations
The parameter '?' and the variable reference ':<var_name>' cannot be used at the same time in an SQL
statement.
8.9.4 APPLY_FILTER
Syntax
<variable_name> = APPLY_FILTER(<table_or_table_variable>,
<filter_variable_name>);
Syntax Elements
You can use APPLY_FILTER with persistent tables and table variables.
<table_name> ::= <identifier>
Note
The following constructs are not supported in the filter string <filter_variable_name>:
Description
The APPLY_FILTER function applies a dynamic filter on a table or table variable. Logically, it can be considered
a partial dynamic SQL statement. The advantage of the function is that you can assign its result to a table
variable without blocking SQL inlining. Nevertheless, all other disadvantages of full dynamic SQL also apply to
APPLY_FILTER.
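A minimal sketch of how APPLY_FILTER might be wrapped in a procedure; the table mytab and its integer column A are assumed purely for illustration:

```sql
CREATE PROCEDURE filter_dyn (IN filter_str NVARCHAR(512),
                             OUT result TABLE (A INT))
AS
BEGIN
  -- Apply the dynamic filter string to the persistent table mytab;
  -- the result can be consumed like any other table variable
  result = APPLY_FILTER(mytab, :filter_str);
END;
```

The caller could then pass a filter condition such as ' A > 10 ' as the argument, which is applied as if it were a WHERE clause on mytab.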
Examples
Exception handling is a method for handling exception and completion conditions in an SQLScript procedure.
The DECLARE EXIT HANDLER parameter allows you to define an exit handler to process exception conditions
in your procedure or function.
For example, the following exit handler catches all SQLEXCEPTION and returns the information that an
exception was thrown:
DECLARE EXIT HANDLER FOR SQLEXCEPTION SELECT 'EXCEPTION was thrown' AS ERROR
FROM dummy;
There are two system variables ::SQL_ERROR_CODE and ::SQL_ERROR_MESSAGE that can be used to get the
error code and the error message, as shown in the next example:
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
outtab = SELECT 1/:in_var as I FROM dummy;
END;
::SQL_ERROR_CODE ::SQL_ERROR_MESSAGE
304                Division by zero undefined: the right-hand value of the division cannot be zero at function /() (please check lines: 6)
Besides defining an exit handler for an arbitrary SQLEXCEPTION, you can also define it for a specific error code
number by using the keyword SQL_ERROR_CODE followed by an SQL error code number.
For example, if only the “division-by-zero” error should be handled by the exit handler, the code looks as
follows:
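A sketch of such a handler, following the structure of the MYPROC example above (SQL error code 304 denotes division by zero):

```sql
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE (I INTEGER)) AS
BEGIN
  -- Catch only SQL error code 304 (division by zero); other errors are not handled
  DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 304
    SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM dummy;
  outtab = SELECT 1/:in_var AS I FROM dummy;
END;
```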
Please note that only the SQL (code strings starting with ERR_SQL_*) and SQLScript (code strings starting
with ERR_SQLSCRIPT_*) error codes are supported in the exit handler. You can use the system view
M_ERROR_CODES to get more information about the error codes.
Note
It is now possible to define an exit handler for the statement FOR UPDATE NOWAIT with the error code 146.
For more information, see Supported Error Codes [page 136].
Instead of using an error code the exit handler can be also defined for a condition.
For more information about declaring a condition, see DECLARE CONDITION [page 131].
If you want to do more in the exit handler, you have to use a block with BEGIN…END, for instance to prepare
some additional information and insert the error into a table:
END;
tab = SELECT 1/:in_var as I FROM dummy;
In the example above, in case of an unhandled exception the transaction will be rolled back. Thus the new
row in the table LOG_TABLE will be gone as well. To avoid this, you can use an autonomous transaction. For
more information, see Autonomous Transaction [page 118].
Declaring a CONDITION variable allows you to name SQL error codes or even to define a user-defined
condition.
These variables can be used in EXIT HANDLER declarations as well as in SIGNAL and RESIGNAL statements.
However, in SIGNAL and RESIGNAL statements only user-defined conditions are allowed.
Using condition variables for SQL error codes makes the procedure/function code more readable. For
example, instead of using the SQL error code 304, which signals a division-by-zero error, you can declare a
meaningful condition for it:
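A sketch of such a declaration and its use in an exit handler:

```sql
-- Name the SQL error code 304 with a readable condition
DECLARE division_by_zero CONDITION FOR SQL_ERROR_CODE 304;
DECLARE EXIT HANDLER FOR division_by_zero
  SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM dummy;
```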
Besides declaring a condition for an already existing SQL error code, you can also declare a user-defined
condition. Either define it with or without a user-defined error code.
If you need a user-defined condition for an invalid procedure input, you have to declare it as in the following
example:
Optionally, you can also associate a user-defined error code, for example 10000:
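Both variants might be declared as follows; the condition name invalid_input is taken from the example above:

```sql
-- User-defined condition without an associated error code
DECLARE invalid_input CONDITION;
-- User-defined condition associated with the error code 10000
DECLARE invalid_input CONDITION FOR SQL_ERROR_CODE 10000;
```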
Note
Note that user-defined error codes must be within the range of 10000 to 19999.
How to signal and resignal a user-defined condition is described in the section SIGNAL and RESIGNAL
[page 132].
The SIGNAL statement is used to explicitly raise a user-defined exception from within your procedure or
function.
The error value returned by the SIGNAL statement is either an SQL_ERROR_CODE, or a user_defined_condition
that was previously defined with DECLARE CONDITION [page 131]. The used error code must be within the
user-defined range of 10000 to 19999.
For example, to signal an SQL_ERROR_CODE 10000, proceed as follows:
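The statement itself is a single line; a sketch:

```sql
-- Raise a user-defined error with code 10000
SIGNAL SQL_ERROR_CODE 10000;
```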
To raise a user-defined condition, for example invalid_input, as declared in the previous section (see DECLARE
CONDITION [page 131]), use the following command:
SIGNAL invalid_input;
However, none of these user-defined exceptions has an error message text so far. That means that the value of
the system variable ::SQL_ERROR_MESSAGE is empty, whereas the value of ::SQL_ERROR_CODE is 10000.
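The empty message can be avoided by attaching a message text with SET MESSAGE_TEXT when signaling. A sketch of both forms, with the message text taken from the error output below:

```sql
-- Signal by error code with a message text
SIGNAL SQL_ERROR_CODE 10000 SET MESSAGE_TEXT = 'Invalid input arguments';
-- Signal by user-defined condition with a message text
SIGNAL invalid_input SET MESSAGE_TEXT = 'Invalid input arguments';
```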
In both cases you get the following information in case the user-defined exception is thrown:
[10000]: user-defined error: "SYSTEM"."MY": line 4 col 2 (at pos 96): [10000]
(range 3) user-defined error exception: Invalid input arguments
In the following example, the procedure signals an error in case the input argument of start_date is greater
than the input argument of end_date:
END;
If the procedures are called with invalid input arguments, you receive the following error message:
For more information on how to handle the exception and continue with procedure execution, see Nested Block
Exceptions in Exception Handling Examples [page 134].
The RESIGNAL statement is used to pass on the exception that is handled in the exit handler.
Besides passing on the original exception by simply using RESIGNAL, you can also change some information
before passing it on. Note that the RESIGNAL statement can only be used in an exit handler.
Using RESIGNAL statement without changing the related information of an exception is done as follows:
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
RESIGNAL;
In case of <in_var> = 0 the raised error would be the original SQL error code and message text.
You can change the error message of an SQL error by using SET MESSAGE_TEXT:
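A sketch of such a handler, using the message text that appears in the error output below:

```sql
DECLARE EXIT HANDLER FOR SQLEXCEPTION
  RESIGNAL SET MESSAGE_TEXT = 'for the input parameter in_var = 0 exception was raised';
```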
The original SQL error message is then replaced by the new one:
[304]: division by zero undefined: [304] "SYSTEM"."MY": line 4 col 10 (at pos
131): [304] (range 3) division by zero undefined exception: for the input
parameter in_var = 0 exception was raised
You can get the original message via the system variable ::SQL_ERROR_MESSAGE. This is useful, if you still
want to keep the original message, but would like to add additional information:
A general exception can be handled with an exception handler that is declared at the beginning of a block and
catches exceptions raised explicitly or implicitly by the statements of that block.
You can declare an exception handler that catches exceptions with specific error code numbers.
Exceptions can be declared by using a CONDITION variable. The CONDITION can optionally be specified with an
error code number.
Signal an Exception
The SIGNAL statement can be used to explicitly raise an exception from within your procedures.
Note
The error code used must be within the user-defined range of 10000 to 19999.
Resignal an Exception
The RESIGNAL statement raises an exception on the action statement in the exception handler. If an error
code is not specified, RESIGNAL throws the caught exception.
The following is a list of the error codes supported by the exit handler.
8.11 ARRAY
An array is an indexed collection of elements of a single data type. The following sections explore the various
ways to define and use arrays in SQLScript.
You can declare a variable of type ARRAY by using the keyword ARRAY.
You can declare an array <variable_name> with the element type <sql_type>. The following SQL types are
supported:
<sql_type> ::=
DATE | TIME| TIMESTAMP | SECONDDATE | TINYINT | SMALLINT | INTEGER | BIGINT |
DECIMAL | SMALLDECIMAL | REAL | DOUBLE | VARCHAR | NVARCHAR | VARBINARY | CLOB |
NCLOB |BLOB
Note that only unbounded arrays are supported with a maximum cardinality of 2^31. You cannot define a static
size for an array.
You can use the array constructor to directly assign a set of values to the array.
The array constructor returns an array containing elements specified in the list of value expressions. The
following example illustrates an array constructor that contains the numbers 1, 2 and 3:
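A sketch of such a constructor:

```sql
-- The array id is initialized with the elements 1, 2 and 3
DECLARE id INTEGER ARRAY = ARRAY(1, 2, 3);
```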
Besides using scalar constants you can also use scalar variables or parameters instead, as shown in the next
example.
Note
The <array_index> indicates the index of the element in the array to be modified, where <array_index>
can have any value from 1 to 2^31. For example, the following statement stores the value 10 in the second
element of the array id:
id[2] = 10;
Please note that all unset elements of the array are NULL. In the given example id[1] is then NULL.
Instead of using a constant scalar value it is also possible to use a scalar variable of type INTEGER as
<array_index>. In the next example, variable I of type INTEGER is used as an index.
DECLARE i INT;
DECLARE arr NVARCHAR(15) ARRAY;
FOR i IN 1..10 DO
arr[:i] = 'ARRAY_INDEX ' || :i;
END FOR;
SQL expressions and scalar user-defined functions (scalar UDFs) that return a number can also be used as an
index, for example a scalar UDF that adds two values and returns the result.
The value of an array element can be accessed with the index <array_index>, where <array_index> can be
any value from 1 to 2^31. The syntax is:
For example, the following copies the value of the second element of array arr to variable var. Since the array
elements are of type NVARCHAR(15) the variable var has to have the same type:
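A sketch of the described assignment:

```sql
-- var must have the same type as the array elements
DECLARE var NVARCHAR(15);
-- Copy the second element of arr into var; note the ':' for read access
var = :arr[2];
```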
Please note that you have to use ‘:’ before the array variable if you read from the variable.
Instead of assigning the array element to a scalar variable it is possible to directly use the array element in the
SQL expression as well. For example, using the value of an array element as an index for another array.
DO
BEGIN
DECLARE arr TINYINT ARRAY = ARRAY(1,2,3);
DECLARE index_array INTEGER ARRAY = ARRAY(1,2);
DECLARE value TINYINT;
arr[:index_array[1]] = :arr[:index_array[2]];
value = :arr[:index_array[1]];
select :value from dummy;
END;
8.11.4 UNNEST
The UNNEST function converts one or more arrays into a table. The result table includes a row for each
element of the specified array. The result of the UNNEST function needs to be assigned to a table variable. The
syntax is:
For example, the following statements convert the array id of type INTEGER and the array name of type
VARCHAR(10) into a table and assign it to the tabular output parameter rst:
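A sketch of such a procedure; the array contents are chosen to match the result table shown further below:

```sql
CREATE PROCEDURE unnest_example (OUT rst TABLE ("ARR_ID" INT, "ARR_NAME" VARCHAR(10)))
AS
BEGIN
  DECLARE id INTEGER ARRAY = ARRAY(1, 2);
  DECLARE name VARCHAR(10) ARRAY = ARRAY('name1', 'name2', 'name3');
  -- Convert both arrays into a two-column table; id is shorter than name,
  -- so the missing cell is filled with NULL
  rst = UNNEST(:id, :name);
END;
```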
For multiple arrays, the number of rows will be equal to the largest cardinality among the cardinalities of the
arrays. In the returned table, the cells that are not corresponding to any elements of the arrays are filled with
NULL values. The example above would result in the following tabular output of rst:
:ARR_ID :ARR_NAME
-------------------
1 name1
2 name2
? name3
Furthermore, the returned columns of the table can also be explicitly named by using the AS clause. In the
following example, the column names for :ARR_ID and :ARR_NAME are changed to ID and NAME.
ID NAME
-------------------
1 name1
2 name2
? name3
As an additional option an ordinal column can be specified by using the WITH ORDINALITY clause.
The ordinal column will then be appended to the returned table. An alias for the ordinal column needs to be
explicitly specified. The next example illustrates the usage. SEQ is used as an alias for the ordinal column:
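A sketch, assuming an integer array amount with the elements 10 and 20:

```sql
DECLARE amount INTEGER ARRAY = ARRAY(10, 20);
-- Append an ordinal column named SEQ to the unnested table
rst = UNNEST(:amount) WITH ORDINALITY AS ("AMOUNT", "SEQ");
```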
AMOUNT SEQ
----------------
10 1
20 2
Note
The UNNEST function cannot be referenced directly in a FROM clause of a SELECT statement.
8.11.5 ARRAY_AGG
The element type of the array must match the type of the column.
Optionally, the ORDER BY clause can be used to determine the order of the elements in the array. If it is not
specified, the order of the array elements is non-deterministic. In the following example, all elements of array id
are sorted descending by column B.
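A sketch of such an aggregation, assuming a table variable tab with columns A and B:

```sql
-- Aggregate column A of table variable tab into the array id,
-- ordering the elements descending by column B
id = ARRAY_AGG(:tab.A ORDER BY B DESC);
```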
Additionally it is also possible to define where NULL values should appear in the result set. By default NULL
values are returned first for ascending ordering, and last for descending ordering. You can override this
behavior using NULLS FIRST or NULLS LAST to explicitly specify NULL value ordering. The next example
shows how the default behavior for the descending ordering can be overwritten by using NULLS FIRST:
Note
ARRAY_AGG function does not support using value expressions instead of table variables.
8.11.6 TRIM_ARRAY
The TRIM_ARRAY function removes elements from the end of an array. TRIM_ARRAY returns a new array with a
<trim_quantity> number of elements removed from the end of the array <array_variable>.
TRIM_ARRAY(:<array_variable>, <trim_quantity>)
<array_variable> ::= <identifier>
<trim_quantity> ::= <unsigned_integer>
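A sketch that would yield the ID column shown below:

```sql
DO BEGIN
  DECLARE arr INTEGER ARRAY = ARRAY(1, 2, 3, 4);
  DECLARE res INTEGER ARRAY;
  -- Remove the last two elements; res then contains (1, 2)
  res = TRIM_ARRAY(:arr, 2);
  rst = UNNEST(:res) AS ("ID");
  SELECT * FROM :rst;
END;
```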
ID
---
1
2
8.11.7 CARDINALITY
The CARDINALITY function returns the highest index of a set element in the array <array_variable>, that
is, it returns N (>= 0) if the N-th element is the set element with the largest index.
CARDINALITY(:<array_variable>)
The result is n=0 because there are no elements in the array. In the next example, the cardinality is 20, as the
20th element is set; this implicitly sets the elements 1-19 to NULL:
END;
The CARDINALITY function can also directly be used everywhere where expressions are supported, for
example in a condition:
The CONCAT function concatenates two arrays. It returns the new array that contains a concatenation of
<array_variable_left> and <array_variable_right>. Both || and the CONCAT function can be used
for concatenation:
The index-based cell access allows random read and write access to each cell of a table variable.
<table_variable>.<column_name>[<index>]
For example, writing to a certain cell of a table variable is illustrated in the following example. Here we simply
change the value in the second row of column A.
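A sketch of such a write access, assuming a table variable tab with an integer column A:

```sql
-- Write the value 5 into row 2 of column A
tab.A[2] = 5;
```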
Reading from a certain cell of a table variable is done in a similar way. Note that for read access, the ':' is
needed in front of the table variable.
The same rules apply for <index> as for the array index: <index> can have any value from 1 to 2^31, and SQL
expressions and scalar user-defined functions (scalar UDFs) that return a number can also be used as an
index. Instead of a constant scalar value, it is also possible to use a scalar variable of type INTEGER as
<index>.
Restrictions:
To determine whether a table or table variable is empty, you can use the predicate IS_EMPTY:
You can use IS_EMPTY in conditions like in IF-statements or WHILE-loops. For instance, in the next example
IS_EMPTY is used in an IF-statement:
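A sketch of such a check on a table variable tab; the error handling shown is only illustrative:

```sql
IF IS_EMPTY(:tab) THEN
  -- Signal a user-defined error when the input table is empty
  SIGNAL SQL_ERROR_CODE 10000 SET MESSAGE_TEXT = 'input table is empty';
END IF;
```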
Note
To get the number of records of a table or a table variable, you can use the operator RECORD_COUNT:
RECORD_COUNT takes <table_name> or <table_variable> as its argument and returns the number of records
as type BIGINT.
You can use RECORD_COUNT in all places where expressions are supported such as IF-statements, loops or
scalar assignments. In the following example it is used in a loop:
END FOR;
END
Note
For all position expressions, the valid values are in the interval from 1 to 2^31-1.
You can insert a new data record at a specific position in a table variable with the following syntax:
All existing data records at positions starting from the given index onwards are moved to the next position. If
the index is greater than the original table size, the records between the inserted record and the original last
record are initialized with NULL values.
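A sketch of such a positional insert, assuming a two-column table variable tab:

```sql
-- Insert the record (3, 'name3') at position 2 of table variable tab;
-- the records previously at positions 2 and later move one position back
:tab.INSERT((3, 'name3'), 2);
```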
IF IS_EMPTY(:IT) THEN
RETURN;
END IF;
If you omit the position, the data record will be appended at the end.
Sample Code
Note
The values for the omitted columns are initialized with NULL values.
You can insert the content of one table variable into another table variable with one single operation without
using SQL.
Code Syntax
:<target_table_var>[.(<column_list>)].INSERT(:<source_table_var>[,
<position>])
If no position is specified, the values are appended at the end. Positions start from 1; NULL and all values
smaller than 1 are invalid. If no column list is specified, all columns of the table are insertion targets.
Sample Code
Usage Example
:tab_a.insert(:tab_b);
:tab_a.(col1, COL2).insert(:tab_b);
:tab_a.INSERT(:tab_b, 5);
:tab_a.("a","b").insert(:tab_b, :index_to_insert);
If SOURCE_TAB has columns (X, A, B, C) and TARGET_TAB has columns (A, B, C, D),
then :target_tab.insert(:source_tab) will insert X into A, A into B, B into C and C into D.
If another order is desired, the column sequence has to be specified in the column list for the TARGET_TAB.
For example, :TARGET_TAB.(D, A, B, C).insert(:SOURCE_TAB) will insert X into D, A into A, B into B and C
into C.
The types of the columns have to match, otherwise it is not possible to insert data into the column. For
example, a column of type DECIMAL cannot be inserted in an INTEGER column and vice versa.
Sample Code
CALL P(?)
K V
--------
C 3.890
B 2.045
B 2.067
A 1.123
You can modify a data record at a specific position. There are two equivalent syntax options.
Note
Note
Sample Code
Note
You can also set values at a position outside the original table size. Just like with INSERT, the records
between the original last record and the newly inserted records are initialized with NULL values.
:<table_variable>.DELETE(<index>)
Sample Code
:<table_variable>.DELETE(<from_index>..<to_index>)
If the starting index is greater than the table size, no operation is performed. If the end index is smaller than the
starting index, an error occurs. If the end index is greater than the table size, all records from the starting index
to the end of the table are deleted.
Sample Code
Note
:<table_variable>.DELETE(<array_of_integers>)
The provided array expression contains indexes pointing to the records that are to be deleted from the table
variable. If the array contains an invalid index (for example, zero), an error occurs.
Sample Code
Note
This feature offers an efficient way to search by key value pairs in table variables.
Syntax
The size of the column list and the value list must be the same; columns and values are matched by their
position in the list. The <start_position> is optional; the default is 1 (first position), which is equivalent to
scanning all data.
The search function itself can be used in further expressions, but not directly in SQL statements.
The position of the first matching record is returned (or NULL, if no record matches). This result can be used in
conjunction with other table variable operators (DELETE, UPDATE).
Example
Sample Code
A 1 V11
E 5 V12
B 6 V13
E 7 V14
M 3 V15
A 1 V11
E 5 V12
B 6 V13
E 7 V14
M 3 V15
I 3 X
A 1 V11
E 5 V12
B 6 V13
E 7 V14
I 3 X
If your SQLScript procedure needs to execute dynamic SQL statements that are partly derived from untrusted
input (for example, a user interface), there is a danger of an SQL injection attack. The following functions can
be used to prevent it:
Example:
The following values of input parameters can manipulate the dynamic SQL statement in an unintended way:
This cannot happen if you validate and/or process the input values:
Syntax IS_SQL_INJECTION_SAFE
IS_SQL_INJECTION_SAFE(<value>[, <max_tokens>])
Syntax Elements
String to be checked.
Description
Checks for possible SQL injection in a parameter which is to be used as a SQL identifier. Returns 1 if no possible
SQL injection is found, otherwise 0.
The following code example shows that the function returns 0 if the number of tokens in the argument is
different from the expected number of a single token (default value).
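A sketch of such a call; the argument value is illustrative:

```sql
-- 'tab le' splits into two tokens, but a single token is expected: returns 0
SELECT IS_SQL_INJECTION_SAFE('tab le') AS safe FROM dummy;
```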
safe
-------
0
The following code example shows that the function returns 1 if the number of tokens in the argument matches
the expected number of 3 tokens.
safe
-------
1
Syntax ESCAPE_SINGLE_QUOTES
ESCAPE_SINGLE_QUOTES(<value>)
Description
Escapes single quotes (apostrophes) in the given string <value>, ensuring a valid SQL string literal is used in
dynamic SQL statements to prevent SQL injections. Returns the input string with escaped single quotes.
Example
The following code example shows how the function escapes a single quote. The one single quote is escaped
with another single quote when passed to the function. The function then escapes the parameter content
Str'ing to Str''ing, which is returned from the SELECT.
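The call might look as follows:

```sql
-- The SQL literal 'Str''ing' passes the value Str'ing to the function,
-- which returns Str''ing
SELECT ESCAPE_SINGLE_QUOTES('Str''ing') AS string_literal FROM dummy;
```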
string_literal
---------------
Str''ing
ESCAPE_DOUBLE_QUOTES(<value>)
Description
Escapes double quotes in the given string <value>, ensuring a valid SQL identifier is used in dynamic SQL
statements to prevent SQL injections. Returns the input string with escaped double quotes.
Example
The following code example shows that the function escapes the double quotes.
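The call might look as follows:

```sql
-- The double quote in TAB"LE is doubled, yielding TAB""LE
SELECT ESCAPE_DOUBLE_QUOTES('TAB"LE') AS table_name FROM dummy;
```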
table_name
--------------
TAB""LE
So far, implicit parallelization has been applied to table variable assignments as well as read-only procedure
calls that are independent from each other. DML statements and read-write procedure calls had to be executed
sequentially. From now on, it is possible to parallelize the execution of independent DML statements and read-
write procedure calls by using parallel execution blocks:
For example, in the following procedure several UPDATE statements on different tables are parallelized:
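A sketch of such a parallel block; the table names tab1, tab2 and tab3 are placeholders:

```sql
CREATE PROCEDURE parallel_update AS
BEGIN
  -- The three UPDATE statements touch different tables and are
  -- therefore executed in parallel
  BEGIN PARALLEL EXECUTION
    UPDATE tab1 SET val = val + 1;
    UPDATE tab2 SET val = val + 1;
    UPDATE tab3 SET val = val + 1;
  END;
END;
```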
Note
Only DML statements on column store tables are supported within the parallel execution block.
In the next example several records from a table variable are inserted into different tables in parallel.
Sample Code
You can also parallelize several calls to read-write procedures. In the following example, several procedures
performing independent INSERT operations are executed in parallel.
Sample Code
call cproc;
Only the following statements are allowed in read-write procedures, which can be called within a parallel
block:
● DML
● Imperative logic
● Autonomous transaction
● Implicit SELECT and SELECT INTO scalar variable
Recommendation
SAP recommends that you use SQL rather than Calculation Engine Plan Operators with SQLScript.
The execution of Calculation Engine Plan Operators is currently bound to processing within the calculation
engine and does not allow the use of alternative execution engines, such as L native execution. As most
Calculation Engine Plan Operators are converted internally and treated as SQL operations, the conversion
requires multiple layers of optimizations. This can be avoided by using SQL directly. Depending on your
system configuration and the version you use, mixing Calculation Engine Plan Operators and SQL can lead to
significant performance penalties compared to a plain SQL implementation.
Calculation engine plan operators encapsulate data-transformation functions and can be used in the definition
of a procedure or a table user-defined function. They constitute a no longer recommended alternative to using
SQL statements. Their logic is directly implemented in the calculation engine, which is the execution
environment of SQLScript.
● Data Source Access operators that bind a column table or a column view to a table variable.
● Relational operators that allow a user to bypass the SQL processor during evaluation and to directly
interact with the calculation engine.
● Special extensions that implement functions.
The data source access operators bind the column table or column view of a data source to a table variable for
reference by other built-in operators or statements in a SQLScript procedure.
9.1.1 CE_COLUMN_TABLE
Syntax:
CE_COLUMN_TABLE(<table_name> [<attributes>])
Syntax Elements:
Description:
The CE_COLUMN_TABLE operator provides access to an existing column table. It takes the name of the table
and returns its content bound to a variable. Optionally a list of attribute names can be provided to restrict the
output to the given attributes.
Note that many of the calculation engine operators provide a projection list for restricting the attributes
returned in the output. In the case of relational operators, the attributes may be renamed in the projection list.
The functions that provide data source access provide no renaming of attributes but just a simple projection.
Note
Calculation engine plan operators that reference identifiers must be enclosed with double-quotes and
capitalized, ensuring that the identifier's name is consistent with its internal representation.
If the identifiers have been declared without double-quotes in the CREATE TABLE statement (which is the
normal method), they are internally converted to upper-case letters. Identifiers in calculation engine plan
operators must match the internal representation, that is they must be upper case as well.
In contrast, if identifiers have been declared with double-quotes in the CREATE TABLE statement, they are
stored in a case-sensitive manner. Again, the identifiers in operators must match the internal
representation.
9.1.2 CE_JOIN_VIEW
Syntax:
CE_JOIN_VIEW(<column_view_name>[{,<attributes>,}...])
Syntax elements:
Specifies the name of the required columns from the column view.
The CE_JOIN_VIEW operator returns results for an existing join view (also known as an Attribute View). It takes
the name of the join view and an optional list of its attributes as parameters.
9.1.3 CE_OLAP_VIEW
Syntax:
CE_OLAP_VIEW(<olap_view_name>, '['<attributes>']')
Syntax elements:
Note
● count("column")
● sum("column")
● min("column")
● max("column")
● use sum("column") / count("column") to compute the average
The CE_OLAP_VIEW operator returns results for an existing OLAP view (also known as an Analytical View). It
takes the name of the OLAP view and an optional list of key figures and dimensions as parameters. The OLAP
cube that is described by the OLAP view is grouped by the given dimensions and the key figures are aggregated
using the default aggregation of the OLAP view.
9.1.4 CE_CALC_VIEW
Syntax:
CE_CALC_VIEW(<calc_view_name>, [<attributes>])
Syntax elements:
Specifies the name of the required attributes from the calculation view.
Description:
The CE_CALC_VIEW operator returns results for an existing calculation view. It takes the name of the
calculation view and optionally a projection list of attribute names to restrict the output to the given attributes.
The calculation engine plan operators presented in this section provide the functionality of relational operators
that are directly executed in the calculation engine. This allows you to exploit the specific semantics of the
calculation engine and to tune the code of a procedure if required.
9.2.1 CE_JOIN
Syntax:
Syntax elements:
Specifies a list of join attributes. Since CE_JOIN requires equal attribute names, one attribute name per pair of
join attributes is sufficient. The list must have at least one element.
Specifies a projection list for the attributes that should be in the resulting table.
Note
If the optional projection list is present, it must at least contain the join attributes.
Description:
The CE_JOIN operator calculates a natural (inner) join of the given pair of tables on a list of join attributes. For
each pair of join attributes, only one attribute will be in the result. Optionally, a projection list of attribute names
can be given to restrict the output to the given attributes. Finally, the plan operator requires each pair of join
attributes to have identical attribute names. In case of join attributes having different names, one of them must
be renamed prior to the join.
9.2.2 CE_LEFT_OUTER_JOIN
Calculate the left outer join. Besides the function name, the syntax is the same as for CE_JOIN.
9.2.3 CE_RIGHT_OUTER_JOIN
Calculate the right outer join. Besides the function name, the syntax is the same as for CE_JOIN.
Note
Syntax:
Syntax elements:
Specifies a list of attributes that should be in the resulting table. The list must have at least one element. The
attributes can be renamed using the SQL keyword AS, and expressions can be evaluated using the CE_CALC
function.
Specifies an optional filter where Boolean expressions are allowed. See CE_CALC [page 187] for the filter
expression syntax.
Description:
Restricts the columns of the table variable <var_table> to those mentioned in the projection list. Optionally,
you can also rename columns, compute expressions, or apply a filter.
With this operator, the <projection_list> is applied first, including column renaming and computation of
expressions. As the last step, the filter is applied.
Caution
Be aware that <filter> in CE_PROJECTION can be vulnerable to SQL injection because it behaves like
dynamic SQL. Avoid use cases where the value of <filter> is passed into the procedure from outside by the
user, for example:
create procedure proc (in filter nvarchar (20), out output ttype)
begin
tablevar = CE_COLUMN_TABLE("TABLE");
output = CE_PROJECTION(:tablevar,
["A", "B"], '"B" = ' || :filter);
end;
It enables the user to pass any expression and to query more than was intended, for example: '02 OR B =
01'.
Syntax:
Syntax elements:
Specifies the expression to be evaluated. Expressions are analyzed using the following grammar:
Where terminals in the grammar are quoted, for example 'token' (denoted with id in the grammar), they behave
like SQL identifiers, with the exception that unquoted identifiers are converted into lower-case. Numeric
constants are written basically in the same way as in the C programming language, and string constants are
enclosed in single quotes, for example, 'a string'. Inside a string, single quotes are escaped by another single
quote.
An example expression valid in this grammar is: "col1" < ("col2" + "col3"). For a full list of expression
functions, see the following table.
Description:
CE_CALC is used inside other relational operators. It evaluates an expression and is usually then bound to a
new column. An important use case is evaluating expressions in the CE_PROJECTION operator. The CE_CALC
function takes two arguments:
Expression Functions
Name Description Syntax
midstr     Returns a part of the string starting at arg2, arg3 bytes long. arg2 is counted from 1 (not 0). 2          string midstr(string, int, int)
leftstr    Returns arg2 bytes from the left of arg1. If arg1 is shorter than the value of arg2, the complete string is returned. 1          string leftstr(string, int)
rightstr   Returns arg2 bytes from the right of arg1. If arg1 is shorter than the value of arg2, the complete string is returned. 1          string rightstr(string, int)
instr      Returns the position of the first occurrence of the second string within the first string (>= 1), or 0 if the second string is not contained in the first. 1          int instr(string, string)
● trim(s) = ltrim(rtrim(s))
● trim(s1, s2) = ltrim(rtrim(s1, s2),
s2)
Mathematical Functions
The math functions described here generally operate on floating point values; their inputs are automatically
converted to double, and the output is also a double. These functions have the same functionality as in the
C programming language.
● double log(double)
● double exp(double)
● double log10(double)
● double sin(double)
● double cos(double)
● double tan(double)
● double asin(double)
● double acos(double)
● double atan(double)
● double sinh(double)
● double cosh(double)
● double floor(double)
● double ceil(double)
Further Functions
(1) Due to calendar variations with dates earlier than 1582, the use of the date data type is deprecated; use
the daydate data type instead.
Note
date is based on the proleptic Gregorian calendar. daydate is based on the Gregorian calendar which is
also the calendar used by SAP HANA SQL.
(2) These Calculation Engine string functions operate on single-byte characters. To use these functions with
multi-byte character strings, see the section Using String Functions with Multi-byte Character Encoding
below. Note that this limitation does not exist for the SQL functions of the SAP HANA database, which
support Unicode-encoded strings natively.
To allow the use of the string functions of the Calculation Engine with multi-byte character encoding, you can
use the charpos and chars functions. An example of this usage for the single-byte character function midstr
follows below:
Related Information
Syntax:
Syntax elements:
Note
Specifies a list of aggregates. For example, [SUM ("A"), MAX("B")] specifies that in the result, column "A"
has to be aggregated using the SQL aggregate SUM and for column B, the maximum value should be given.
● count("column")
● sum("column")
● min("column")
● max("column")
● use sum("column") / count("column") to compute the average
Specifies an optional list of group-by attributes. For instance, ["C"] specifies that the output should be
grouped by column C. Note that the resulting schema has a column named C in which every attribute value
from the input table appears exactly once. If this list is absent the entire input table will be treated as a single
group, and the aggregate function is applied to all tuples of the table.
Specifies the name of the column attribute for the results to be grouped by.
CE_AGGREGATION implicitly defines a projection: All columns that are not in the list of aggregates, or in the
group-by list, are not part of the result.
Description:
The result schema is derived from the list of aggregates, followed by the group-by attributes. The order of the
returned columns is defined by the order of columns defined in these lists. The attribute names are:
● For the aggregates, the default is the name of the attribute that is aggregated. For instance, in the example
above ([SUM("A"), MAX("B")]), the first column is called A and the second is B. The attributes can be
renamed if the default is not appropriate.
● For the group-by attributes, the attribute names are unchanged. They cannot be renamed using
CE_AGGREGATION.
Note
Note that count(*) can be achieved by doing an aggregation on any integer column; if no group-by
attributes are provided, this counts all non-null values.
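Putting the elements above together, a minimal CE_AGGREGATION call might be sketched as follows; the table variable :tablevar and the output variable are illustrative assumptions:

out = CE_AGGREGATION(:tablevar, [SUM("A"), MAX("B")], ["C"]);

The result then has three columns: A (the sums), B (the maxima), and the group-by column C.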
9.2.7 CE_UNION_ALL
Syntax:
Syntax elements:
Description:
The CE_UNION_ALL function is semantically equivalent to the SQL UNION ALL statement. It computes the union
of two tables, which must have identical schemas. The CE_UNION_ALL function preserves duplicates, so the
result is a table that contains all the rows from both input tables.
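A minimal sketch, assuming two table variables :tab1 and :tab2 with identical schemas:

out = CE_UNION_ALL(:tab1, :tab2);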
Syntax:
Syntax elements:
Specifies a list of attributes that should be in the resulting table. The list must have at least one element. The
attributes can be renamed using the SQL keyword AS.
Description:
For each input table variable, the specified columns are concatenated. Optionally, columns can be renamed. All
input tables must have the same cardinality.
Caution
The vertical union is sensitive to the order of its input. SQL statements and many calculation engine plan
operators may reorder their input or return their results in a different order across executions. This can lead to
unexpected results.
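A minimal sketch of a vertical union, assuming table variables :firsttab and :secondtab of equal cardinality; the column names and renamings are illustrative:

out = CE_VERTICAL_UNION(:firsttab, ["A" AS "LEFT_COL"], :secondtab, ["B" AS "RIGHT_COL"]);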
9.3.2 CE_CONVERSION
Syntax:
Syntax elements:
Specifies the parameters for the conversion. The CE_CONVERSION operator is highly configurable via a list of
key-value pairs. For the exact conversion parameters permissible, see the Conversion parameters table.
Description:
Applies a unit conversion to input table <var_table> and returns the converted values. Result columns can
optionally be renamed. The following syntax depicts valid combinations. Supported keys with their allowed
domain of values are:
Conversion parameters
'source_unit_column' (value: a column in the input table; type: column name; mandatory: N; default: None)
The name of the column containing the source unit in the input table.
'target_unit_column' (value: a column in the input table; type: column name; mandatory: N; default: None)
The name of the column containing the target unit in the input table.
'reference_date_column' (value: a column in the input table; type: column name; mandatory: N; default: None)
The default reference date for any kind of conversion.
9.3.3 TRACE
Syntax:
TRACE(<var_input>)
Syntax elements:
The TRACE operator is used to debug SQLScript procedures. It traces the tabular data passed as its argument
into a local temporary table and returns its input unmodified. The names of the temporary tables can be
retrieved from the SYS.SQLSCRIPT_TRACE monitoring view. See SQLSCRIPT_TRACE below.
Example:
out = TRACE(:input);
Note
This operator should not be used in production code as it will cause significant runtime overhead.
Additionally, the naming conventions used to store the tracing information may change. Thus, this operator
should only be used during development for debugging purposes.
Related Information
SQLSCRIPT_TRACE
To eliminate the dependency on a procedure or function already existing when you create a new procedure
that consumes it, you can use headers in their place.
When creating a procedure, all nested procedures that belong to that procedure must exist beforehand. If
procedure P1 calls P2 internally, then P2 must have been created before P1. Otherwise, the creation of P1 fails
with the error message "P2 does not exist". With large application logic and no export or delivery unit available,
it can be difficult to determine the order in which the objects need to be created.
To avoid this kind of dependency problem, SAP introduces HEADERS. HEADERS allow you to create a minimum
set of metadata information that contains only the interface of the procedure or function.
AS HEADER ONLY
You create a header for a procedure by using the HEADER ONLY keyword, as in the following example:
With this statement you are creating a procedure <proc_name> with the given signature
<parameter_clause>. The procedure <proc_name> has no body definition and thus has no dependent base
objects. Container properties (for example, security mode, default_schema, and so on) cannot be defined
with the header definition. These are included in the body definition.
The following statement creates the procedure TEST_PROC with a scalar input INVAR and a tabular output
OUTTAB:
CREATE PROCEDURE TEST_PROC (IN INVAR NVARCHAR(10), OUT OUTTAB TABLE(no INT)) AS
HEADER ONLY
By checking the is_header_only field in the system view PROCEDURES, you can verify whether only a header
is defined for a procedure.
If you want to check functions, you need to look into the system view FUNCTIONS.
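The check described above could be sketched as follows; the qualified view name SYS.PROCEDURES and the column name IS_HEADER_ONLY are assumptions about the view's layout:

SELECT PROCEDURE_NAME, IS_HEADER_ONLY FROM SYS.PROCEDURES
WHERE PROCEDURE_NAME = 'TEST_PROC';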
Once a header of a procedure or function is defined, other procedures or functions can refer to it in their
procedure body. Procedures containing these headers can be compiled as shown in the following example:
CREATE PROCEDURE OUTERPROC (OUT OUTTAB TABLE (NO INT)) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE s INT;
s = 1;
CALL TEST_PROC (:s, outtab);
END;
To turn the header definition into a valid procedure or function, you must replace the header with the full
container definition. Use the ALTER statement to replace the header definition of a procedure, as follows:
For a function header, the task is similar, as shown in the following example:
For example, if you want to replace the header definition of TEST_PROC that was defined already, then the
ALTER statement might look as follows:
ALTER PROCEDURE TEST_PROC (IN INVAR NVARCHAR(10), OUT OUTTAB TABLE(no INT))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA
AS
BEGIN
DECLARE tvar TABLE (no INT, name nvarchar(10));
tvar = SELECT * FROM TAB WHERE name = :invar;
outtab = SELECT no FROM :tvar;
END
You cannot change the signature with the ALTER statement. If the name of the procedure or the function or the
input and output variables do not match, you will receive an error.
Note
The ALTER PROCEDURE and the ALTER FUNCTION statements are supported only for a procedure or a
function respectively, that contain a header definition.
SQLScript supports the spatial data type ST_GEOMETRY and SQL spatial functions to access and manipulate
spatial data. In addition, SQLScript also supports the objective style function calls needed for some SQL spatial
functions.
The following example illustrates a small scenario for using spatial data type and function in SQLScript.
The function get_distance calculates the distance between the two given parameters <first> and
<second> of type ST_GEOMETRY by using the spatial function ST_DISTANCE.
The ‘:’ in front of the variable <first> is needed because you are reading from the variable.
The function get_distance itself is called by the procedure nested_call. The procedure returns the
distance and the text representation of the ST_GEOMETRY variable <first>.
Out(1) Out(2)
----------------------------------------------------------------------
8,602325267042627 POINT(7 48)
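The scenario described above might be sketched as follows; the exact signatures, the point coordinates, and the use of ST_ASWKT for the text representation are assumptions based on the description and the output shown:

CREATE FUNCTION get_distance (IN first ST_GEOMETRY, IN second ST_GEOMETRY)
RETURNS dist DOUBLE AS
BEGIN
    -- objective-style call; ':' is needed because we read from the variable
    dist = :first.ST_DISTANCE(:second);
END;

CREATE PROCEDURE nested_call (OUT dist DOUBLE, OUT wkt NVARCHAR(100)) AS
BEGIN
    DECLARE first ST_GEOMETRY = ST_GEOMFROMTEXT('POINT(7 48)', 0);
    DECLARE second ST_GEOMETRY = ST_GEOMFROMTEXT('POINT(2 41)', 0);
    dist = get_distance(:first, :second);
    wkt = :first.ST_ASWKT();
END;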
Note that the optional SRID (Spatial Reference Identifier) parameter in SQL spatial functions is mandatory if
the function is used within SQLScript. If you do not specify the SRID, you receive an error as demonstrated with
the function ST_GEOMFROMTEXT in the following example. Here SRID 0 is used to specify the default spatial
reference system.
DO
BEGIN
If you do not use the same SRID for the ST_GEOMETRY variables <line1> and <line2>, at the latest the UNNEST
will return an error, because the values in one column are not allowed to have different SRIDs.
In addition, there is a consistency check for output table variables to ensure that all elements of a spatial
column have the same SRID.
Note
The following spatial functions are not supported in SQLScript:
● ST_CLUSTERID
● ST_CLUSTERCENTEROID
● ST_CLUSTERENVELOPE
● ST_CLUSTERCONVEXHULL
● ST_AsSVG
The construction of objects with the NEW keyword is also not supported in SQLScript. Instead you can use
ST_GEOMFROMTEXT(‘POINT(1 1)’, srid).
For more information on SQL spatial functions and their usage, see SAP HANA Spatial Reference available on
the SAP HANA Platform.
System variables are built-in variables in SQLScript that provide you with information about the current
context.
12.1 ::CURRENT_OBJECT_NAME and ::CURRENT_OBJECT_SCHEMA
To identify the name of the current running procedure or function you can use the following two system
variables:
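For example, a function that returns both values could be sketched like this; the body and the parameter types are assumptions, only the function name RETURN_NAME is taken from the result shown below:

CREATE FUNCTION RETURN_NAME ()
RETURNS TABLE (SCHEMA_NAME NVARCHAR(256), NAME NVARCHAR(256)) AS
BEGIN
    RETURN SELECT ::CURRENT_OBJECT_SCHEMA AS SCHEMA_NAME,
                  ::CURRENT_OBJECT_NAME   AS NAME
           FROM DUMMY;
END;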
The result of that function is then the name and the schema name of the function:
SCHEMA_NAME NAME
----------------------------------------
MY_SCHEMA RETURN_NAME
The next example shows that you can also pass the two system variables as arguments to a procedure or
function call.
Note
Note that in anonymous blocks the value of both system variables is NULL.
The two system variables always return the schema name and the name of the procedure or function.
Creating a synonym on top of the procedure or function and calling it through the synonym will still return the
original name, as shown in the next example.
We create a synonym for the RETURN_NAME function from above and query it through the synonym:
SCHEMA_NAME NAME
------------------------------------------------------
MY_SCHEMA RETURN_NAME
12.2 ::ROWCOUNT
The system variable ::ROWCOUNT stores the number of rows affected by the previously executed DML
statement. The counts of earlier DML statements are not accumulated.
The next example shows how you can use ::ROWCOUNT in a procedure. Assume we have the following
table T:
Now we want to update table T and want to return the number of updated rows:
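Such a procedure might be sketched as follows; the columns of table T and the update predicate are assumptions, chosen so that two rows match:

CREATE PROCEDURE PROC_UPDATE (OUT updated_rows INT) AS
BEGIN
    UPDATE T SET VAL = VAL + 1 WHERE ID <= 2;
    updated_rows = ::ROWCOUNT;
END;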
UPDATED_ROWS
-------------------------
2
In the next example we change the procedure to contain two update statements and, at the end, again
retrieve the row count:
Calling the procedure shows that the number of updated rows is now 1. That is because the last update
statement only updated one row.
UPDATED_ROWS
-------------------------
1
If you want the number of all updated rows, you have to retrieve the row count after
each update statement and accumulate the values:
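The accumulating variant could be sketched as follows, under the same assumptions about table T:

CREATE PROCEDURE PROC_UPDATE_ACC (OUT updated_rows INT) AS
BEGIN
    DECLARE rows1, rows2 INT;
    UPDATE T SET VAL = VAL + 1 WHERE ID <= 2;
    rows1 = ::ROWCOUNT;   -- rows affected by the first update
    UPDATE T SET VAL = VAL + 1 WHERE ID = 3;
    rows2 = ::ROWCOUNT;   -- rows affected by the second update
    updated_rows = :rows1 + :rows2;
END;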
Calling this procedure again now returns 3 updated rows:
UPDATED_ROWS
-------------------------
3
SQLScript procedures, functions and triggers can return the line number of the current statement
via ::CURRENT_LINE_NUMBER.
Syntax
::CURRENT_LINE_NUMBER
Example
Sample Code
Sample Code
Sample Code
1 do begin
2 declare a int = ::CURRENT_LINE_NUMBER;
3 select :a, ::CURRENT_LINE_NUMBER + 1 from dummy;
4 end;
5 -- Returns [2, 3 + 1]
In some scenarios you may need to let certain processes wait for a while (for example, when executing
repetitive tasks). Implementing such waiting manually may lead to "busy waiting" and to the CPU performing
unnecessary work during the waiting time. To avoid this, SQLScript offers a built-in library
SYS.SQLSCRIPT_SYNC containing the procedures SLEEP_SECONDS and WAKEUP_CONNECTION.
Procedure SLEEP_SECONDS
This procedure puts the current process on hold. It has one input parameter of type DOUBLE which specifies
the waiting time in seconds. The maximum precision is one millisecond (0.001), but the real waiting time may
be slightly longer (about 1-2 ms) than the given time.
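A minimal usage sketch; the invocation style follows the library examples elsewhere in this reference and is an assumption:

DO BEGIN
    USING SQLSCRIPT_SYNC AS SYNCLIB;
    SYNCLIB:SLEEP_SECONDS(10);  -- put the current process on hold for 10 seconds
END;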
Note
● If you pass 0 or NULL to SLEEP_SECONDS, the SQLScript executor does nothing (and no log is
written).
● If you pass a negative number, you get an error.
Procedure WAKEUP_CONNECTION
This procedure resumes a waiting process. It has one input parameter of type INTEGER which specifies the ID
of a waiting connection. If this connection is waiting because the procedure SLEEP_SECONDS has been called,
the sleep is terminated and the process continues. If the given connection does not exist or is not waiting
because of SLEEP_SECONDS, an error is raised.
If the user calling WAKEUP_CONNECTION is not a session admin and is different from the user of the waiting
connection, an error is raised as well.
Note
● The waiting process is also terminated, if the session is canceled (with ALTER SYSTEM CANCEL
SESSION or ALTER SYSTEM DISCONNECT SESSION).
● A session admin can wake up any sleeping connection.
Limitations
The library cannot be used in functions (neither scalar nor tabular) or in calculation views.
Examples
Sample Code
Monitor
Sample Code
The SQLSCRIPT_STRING library offers a handy and simple way of manipulating strings. You can split strings
at given delimiters or regular expressions, format or rearrange strings, and convert table variables into
strings.
Syntax
Code Syntax
SPLIT / SPLIT_REGEXPR
The SPLIT(_REGEXPR) function returns multiple variables depending on the given parameters.
SPLIT_TO_TABLE / SPLIT_REGEXPR_TO_TABLE
The SPLIT_TO_TABLE(_REGEXPR) function returns a single-column table with table type (WORD NVARCHAR(5000)).
Sample Code
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE a1, a2, a3 INT;
(a1, a2, a3) = LIB:SPLIT('10, 20, 30', ', '); --(10, 20, 30)
END;
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE first_name, last_name STRING;
DECLARE area_code, first_num, last_num INT;
first_name = LIB:SPLIT('John Sutherland', ','); --('John Sutherland')
(first_name, last_name) = LIB:SPLIT('John Sutherland', ' '); --
('John','Sutherland')
first_name = LIB:SPLIT('Brian', ' '); --('Brian')
(first_name, last_name) = LIB:SPLIT('Brian', ' '); -- throw SQL_FEW_VALUES
(first_name, last_name) = LIB:SPLIT('Michael Forsyth Jr', ' ');--throw
SQL_MANY_VALUES
(first_name, last_name) = LIB:SPLIT('Michael Forsyth Jr', ' ', 1); --
('Michael', 'Forsyth Jr')
(area_code, first_num, last_num) = LIB:SPLIT_REGEXPR('02)2143-5300', '\(|
\)|-'); --(02, 2143, 5300)
END;
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE arr INT ARRAY;
DECLARE arr2 STRING ARRAY;
DECLARE tv, tv2 TABLE(RESULT NVARCHAR(5000));
The SPLIT_TO_TABLE function currently does not support implicit table variable declaration.
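Given the declarations above, the usage might be sketched as follows; the input string and delimiter are illustrative:

DO BEGIN
    USING SQLSCRIPT_STRING AS LIB;
    DECLARE tv TABLE (RESULT NVARCHAR(5000));
    tv = LIB:SPLIT_TO_TABLE('10,20,30', ',');  -- one row per token
    SELECT * FROM :tv;
END;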
FORMAT String
The FORMAT functions support Python-style format strings.
Code Syntax
Type Meaning
'c' Character
'F' Fixed point. Uses NAN for nan and INF for inf in the result.
'g' General format. The number is first formatted with type 'e' and precision p-1; if the resulting exponent exp satisfies -4 <= exp < p, the number is formatted with type 'f' and precision p-1-exp instead.
Example
Type Example
FORMAT
Returns a single formatted string using a given format string and additional arguments. Two types of additional
arguments are supported: scalar variables or a single array. With the first argument type, only scalar
variables are accepted, and their number and types must match the format string. With the second argument
type, only one array is allowed, and it must have the proper size and element type.
FORMAT_TO_TABLE/FORMAT_TO_ARRAY
Returns a table or an array with N formatted strings using a given table variable. FORMAT STRING is applied
row by row.
Sample Code
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE your_name STRING = LIB:FORMAT('{} {}', 'John', 'Sutherland');
--'John Sutherland'
DECLARE name_age STRING = LIB:FORMAT('{1} {0}', 30, 'Sutherland');
--'Sutherland 30'
DECLARE pi_str STRING = LIB:FORMAT('PI: {:06.2f}', 3.141592653589793);
--'PI: 003.14'
DECLARE ts STRING = LIB:FORMAT('Today is {}', TO_VARCHAR (current_timestamp,
'YYYY/MM/DD')); --'Today is 2017/10/18'
DECLARE scores double ARRAY = ARRAY(1.4, 2.1, 40.3);
DECLARE score_str STRING = LIB:FORMAT('{}-{}-{}', :scores);
--'1.4-2.1-40.3'
END;
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE arr NVARCHAR(5000) ARRAY;
declare tv table(result NVARCHAR(5000));
--tt: [('John', 'Sutherland', 1988), ('Edward','Stark',1960)]
DECLARE tt TABLE (first_name NVARCHAR(100), last_name NVARCHAR(100),
birth_year INT);
tt.first_name[1] = 'John';
tt.last_name[1] = 'Sutherland';
tt.birth_year[1] = 1988;
tt.first_name[2] = 'Edward';
tt.last_name[2] = 'Stark';
tt.birth_year[2] = 1960;
TABLE_SUMMARY converts a table variable into a single formatted string. It serializes the table into a human-
friendly format, similar to the current result sets in the client. Since the table is serialized as a single string, the
result is fetched during the PROCEDURE execution, not at the client-side fetch time. The parameter
MAX_RECORDS limits the number of rows to be serialized. If the size of the formatted string is larger than
NVARCHAR(8388607), only the limited size of the string is returned.
By means of the SQLScript FORMAT functions, the values in the table are formatted as follows:
Sample Code
DO
BEGIN
USING SQLSCRIPT_STRING AS STRING;
USING SQLSCRIPT_PRINT AS PRINT;
T1 = SELECT * FROM SAMPLE1;
PRINT:PRINT_LINE(STRING:TABLE_SUMMARY(:T1, 3));
END;
------------------------
NAME,AGE
John Bailey,28
Kevin Lawrence,56
Leonard Poole,31
Syntax
Code Syntax
Description
The PRINT library makes it possible to print strings or even whole tables. It is especially useful when used
together with the STRING library. The PRINT library procedures produce a server-side result from the
parameters and store it in an internal buffer. All stored strings are printed in the client only after the end of
the PROCEDURE execution. In case of nested execution, the PRINT results are delivered to the client after the
end of the outermost CALL execution. The traditional result-set based results are not mixed up with PRINT
results.
The PRINT library procedures can be executed in parallel. The overall PRINT result is flushed at once, rather
than being written to a separate stream for each request. SQLScript ensures the order of PRINT results based
on the declaration order in the PROCEDURE body, not on the order of execution.
Note
PRINT_LINE
This library procedure returns a string as a PRINT result. The procedure accepts NVARCHAR values as input,
but most other values are also possible, as long as implicit conversion is possible (for example, INTEGER to
NVARCHAR). Hence, most non-NVARCHAR values can be used as parameters, since they are supported
by SQLScript implicit conversion. Users can freely introduce string manipulation by using a
concatenation operator (||), TO_NVARCHAR() value formatting, or the
SQLSCRIPT_STRING built-in library.
PRINT_TABLE
This library procedure takes a table variable and returns a PRINT result. PRINT_TABLE() parses a table variable
into a single string and sends the string to the client. The parameter MAX_RECORDS limits the number of rows
to be printed. PRINT_TABLE() is primarily used together with TABLE_SUMMARY of the STRING library.
Sample Code
DO
BEGIN
USING SQLSCRIPT_PRINT as LIB;
LIB:PRINT_LINE('HELLO WORLD');
LIB:PRINT_LINE('LINE2');
LIB:PRINT_LINE('LINE3');
END;
DO
BEGIN
USING SQLSCRIPT_PRINT as LIB1;
USING SQLSCRIPT_STRING as LIB2;
LIB1:PRINT_LINE('HELLO WORLD');
LIB1:PRINT_LINE('Here is SAMPLE1');
T1 = SELECT * FROM SAMPLE1;
LIB1:PRINT_LINE(LIB2:TABLE_SUMMARY(:T1));
LIB1:PRINT_LINE('Here is SAMPLE2');
T2 = SELECT * FROM SAMPLE2;
LIB1:PRINT_TABLE(:T2);
LIB1:PRINT_LINE('End of PRINT');
END;
All scalar variables used in queries of procedures, functions, or anonymous blocks are represented either as
query parameters or as constant values during query compilation. Which option is chosen is a decision
of the optimizer.
Example
The following procedure uses two scalar variables (var1 and var2) in the WHERE-clause of a nested query.
Sample Code
CREATE PROCEDURE PROC (IN var1 INT, IN var2 INT, OUT tab mytab)
AS
BEGIN
tab = SELECT * FROM MYTAB WHERE MYCOL >:var1
OR MYCOL =:var2;
END;
Sample Code
will prepare the nested query of the table variable tab by using query parameters for the scalar parameters:
Sample Code
Before the query is executed, the parameter values will be bound to the query parameters.
Calling the procedure without query parameters and using constant values directly
Sample Code
will lead to the following query string that uses the parameter values directly:
Sample Code
A potential disadvantage is the chance of not getting the optimal query plan, because
optimizations using parameter values cannot be performed during compilation time. Using constant
values, on the other hand, always leads to preparing a new query plan and therefore to different query plan
cache entries for the different parameter values. This comes along with additional time spent on query
preparation and potential cache-flooding effects in fast-changing parameter value scenarios.
To control the parameterization behavior of scalar parameters explicitly, you can use the functions
BIND_AS_PARAMETER and BIND_AS_VALUE. The decision of the optimizer and the general configuration are
overridden when you use these functions.
Syntax
Using BIND_AS_PARAMETER will always use a query parameter to represent a <scalar_variable> during query
preparation.
Using BIND_AS_VALUE will always use a value to represent a <scalar_variable> during query preparation.
The following example represents the same procedure from above but now using the functions
BIND_AS_PARAMETER and BIND_AS_VALUE instead of referring to the scalar parameters directly:
Sample Code
CREATE PROCEDURE PROC (IN var1 INT, IN var2 INT, OUT tab mytab)
AS
BEGIN
tab = SELECT * FROM MYTAB WHERE MYCOL > BIND_AS_PARAMETER(:var1)
OR MYCOL = BIND_AS_VALUE(:var2);
END;
Sample Code
and bind the values (1 for var1 and 2 for var2), the following query string will be prepared
Sample Code
The same query string will be prepared even if you call this procedure with constant values because the
functions override the decisions of the optimizer.
15.1 M_ACTIVE_PROCEDURES
The view M_ACTIVE_PROCEDURES monitors all internally executed statements starting from a procedure call.
That also includes remotely executed statements.
M_ACTIVE_PROCEDURES is also helpful for analyzing long-running procedures and for determining their
current status. You can run the following query from another session to find out more about the status of a
procedure, like MY_SCHEMA.MY_PROC in the example:
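Such a query could be sketched as follows; the selected column names are assumptions about the layout of the view:

SELECT PROCEDURE_NAME, STATEMENT_STRING, STATEMENT_STATUS
FROM M_ACTIVE_PROCEDURES
WHERE SCHEMA_NAME = 'MY_SCHEMA'
  AND PROCEDURE_NAME = 'MY_PROC';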
Level Description
To prevent flooding the memory with irrelevant data, the number of records is limited. If the record count
exceeds the given threshold, the first record is deleted irrespective of its status. The limit can be adjusted with
the INI parameter execution_monitoring_limit, for example execution_monitoring_limit = 100000.
Limitations:
With NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION, you can specify how many calls are retained after
execution and RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT defines how long the result should be kept in
M_ACTIVE_PROCEDURES. The following options are possible:
● Both parameters are set: M_ACTIVE_PROCEDURES keeps the specified numbers of records for the
specified amount of time
● Only NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION is set: M_ACTIVE_PROCEDURES keeps the
specified number for the default amount of time ( = 3600 seconds)
● Only RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT is set: M_ACTIVE_PROCEDURES keeps the default
number of records ( = 100) for the specified amount of time
● Nothing is set: no records are kept.
Note
The Query Export is an enhancement of the EXPORT statement. It allows exporting queries, that is, the
database objects used in a query, together with the query string and parameters. The query can be either
standalone or executed as part of a SQLScript procedure.
Prerequisites
To execute the query export as a developer, you need the EXPORT system privilege.
Procedure
With <export_format> you define whether the export should use a BINARY format or a CSV format.
Currently the only format supported for the SQLScript query export is CSV. If you choose BINARY, you get a
warning message and the export is performed in CSV.
The server path where the export files are stored is specified as <path>.
For more information about <export_option_list>, see EXPORT in the SAP HANA SQL and System Views
Reference on the SAP Help Portal.
Apart from SELECT statements, you can export the following statement types as well:
With the <sqlscript_location_list> you can define, in a comma-separated list, several queries that you want to
export. For each query you have to specify the name of the procedure with <procedure_name> to indicate
where the query is located. <procedure_name> can be omitted if it is the same procedure as the one in
<procedure_call_statement>.
You also need to specify the line information, <line_number>, and the column information, <column_number>.
The line number must correspond to the first line of the statement. If the column number is omitted, all
statements (usually there is just one) on this line are exported. Otherwise the column must match the first
character of the statement.
The line and column information is usually contained in the comments of the queries generated by SQLScript
and can be taken over from there. For example, the monitoring view M_ACTIVE_PROCEDURES or the
statement statistic in PlanViz shows the executed queries together with the comment.
If you want to export both queries of the table variables tab and temp, then the <sqlscript_location> looks as follows:
and
For the query of the table variable temp we also specified the column number, because there are two table
variable assignments on one line and we only wanted the first query.
To export these queries, the export needs to execute the procedure call that triggers the execution of the
procedure containing the queries. Therefore the procedure call has to be specified as well by using
<procedure_call_statement>:
EXPORT ALL AS CSV INTO '/tmp' ON (proc_one LINE 15), ( proc_two LINE 27 COLUMN
4) FOR CALL PROC_ONE (...);
If you want to export a query that is executed multiple times, you can use <pass_number> to specify which
execution should be exported. If <pass_number> is omitted, only the first execution of the query is exported. If
you need to export multiple passes, but not all of them, you need to specify the same location multiple times
with the corresponding pass numbers.
Given the above example, we want to export the query on line 34 but only the snapshot of the 2nd and 30th
loop iteration. The export statement is then the following, considering that PROC_LOOP is a procedure call:
If you want to export the snapshots of all iterations you need to use PASS ALL:
EXPORT ALL AS CSV INTO '/tmp' ON (myschema.proc_loop LINE 34 PASS ALL) FOR CALL
PROC_LOOP(...);
Overall, the SQLScript Query Export creates one subdirectory for each exported query under the given path
<path>, with the following name pattern: <schema_name>-<procedure_name>-<line_number>-
<column_number>-<pass_number>. For example, the directories of the first export statement mentioned
above would be the following:
|_ /tmp
The exported SQLScript query is stored in a file named Query.sql and all related base objects of that query are
stored in the directories index and export, as it is done for a typical catalog export.
You can import the exported objects, including temporary tables and their data, with the IMPORT statement.
For more information about IMPORT, see IMPORT in the SAP HANA SQL and System Views Reference on the
SAP Help Portal.
Note
Note
Query export is not supported on distributed systems. Only single-node systems are supported.
The derived table type of a tabular variable should always match the declared type of the corresponding
variable, both in the type code and in the length or precision/scale information. This is particularly important
for signature variables, as they can be considered the contract a caller will follow. The derived type code is
implicitly converted if this conversion is possible without loss of information (see the SQL guide for further
details on which data type conversions are supported).
If the derived type is larger (for example, BIGINT) than the expected type (for example, INTEGER), this can
lead to errors, as shown in the following example.
The procedure PROC_TYPE_MISMATCH has a defined tabular output variable RESULT with a single column of
type VARCHAR with a length of 2. The derived type from the table variable assignment has a single column of
type VARCHAR with a length of 10.
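A sketch of such a procedure; the exact body producing the VARCHAR(10) column is an assumption:

CREATE PROCEDURE PROC_TYPE_MISMATCH (OUT RESULT TABLE (A VARCHAR(2))) AS
BEGIN
    -- derived type of column A is VARCHAR(10), declared type is VARCHAR(2)
    RESULT = SELECT CAST('AB' AS VARCHAR(10)) AS A FROM DUMMY;
END;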
Calling this procedure will work fine as long as the difference in length does not matter; for example, calling
the procedure from any SQL client will not cause any issues. However, using the result for further processing
can lead to an error, as shown in the following example:
Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
The configuration parameters have three different levels to reveal differences between expected and derived
types if the derived type is larger than the expected type:
warn: Print a warning in case of a type mismatch (default behavior). Message: general warning: Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
strict: Return an error in case of a potential type error. Message: return type mismatch: Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
Note
With the SQLScript debugger you can investigate functional issues. The debugger is available in the SAP
WebIDE for SAP HANA (WebIDE) and in ABAP in Eclipse (ADT Debugger). The following sections give an
overview of the available functionality and indicate in which IDE each feature is supported. For a detailed
description of how to use the SQLScript debugger, see the documentation of SAP WebIDE for SAP HANA and
ABAP in Eclipse available at the SAP HANA Help Portal.
A conditional breakpoint can be used to stop the debugger at the breakpoint line only when certain conditions
are met. This is especially useful when a breakpoint is set within a loop.
Each breakpoint can have only one condition. The condition expression can contain any SQL function. A
condition must either contain an expression that evaluates to true or false, or it can contain a single variable or a
complex expression without restrictions on the return type.
When setting a conditional breakpoint, the debugger will check all conditions for potential syntax errors. It
checks for:
At execution time, the debugger checks and evaluates the conditions of the conditional breakpoints with
the given variables and their values. If the value of a variable in a condition is not accessible, and therefore the
condition cannot be evaluated, the debugger sends a warning and breaks at the breakpoint anyway.
Note
The debugger will also break and send a warning if an expression accesses a variable that is
not yet accessible at this point (NULL value).
15.4.2 Watchpoints
Watchpoints give you the possibility to watch the values of variables or complex expressions and break the
debugger, if certain conditions are met.
For each watchpoint you can define an arbitrary number of conditions. A condition can either contain an
expression that evaluates to true or false, or contain a single variable or complex expression without restrictions on
the return type.
When setting a watchpoint, the debugger will check all conditions for potential syntax errors. It checks for:
At execution time, the debugger checks and evaluates the conditions of the watchpoints with the given
variables and their values. A watchpoint is skipped if the value of a variable in a condition is not accessible.
But if the return type of the condition is wrong, the debugger sends a warning to the user and
breaks at the watchpoint anyway.
Note
If a variable value changes to NULL, the debugger will not break since it cannot evaluate the expression
anymore.
You can activate the Exception Mode to make the debugger break if an error occurs during the execution of a
procedure or a function. User-defined exceptions are also handled.
The debugger stops on the line where the exception is thrown and allows access to the current values of all
local variables, the call stack, and short information about the error. After that, the execution can continue,
and you can step into the exception handler or into further exceptions (for example, on a CALL statement).
Save Table allows you to store the result set of a table variable into a persistent table in a predefined schema in
a debugging session.
Syntax
Syntax Elements
<statement_name> ::= <string_literal>
Specifies the name of a specific execution plan in the output table for a given SQL statement.
<explain_plan_entry> ::= <call_statement> | SQL PLAN CACHE ENTRY <plan_id>
<plan_id> ::= <integer_literal>
<plan_id> specifies the identifier of the entry in the SQL plan cache to be explained. Refer to the
M_SQL_PLAN_CACHE monitoring view to find the <plan_id> for the desired cache entry.
<call_statement> specifies the procedure call to explain the plan for. For more information,
see the CALL statement.
Note
The EXPLAIN PLAN [SET STATEMENT_NAME = <statement_name>] FOR SQL PLAN CACHE ENTRY
<plan_id> command can only be run by users with the OPTIMIZER_ADMIN privilege.
Description
EXPLAIN PLAN provides information about the compiled plan of a given procedure. It inserts each piece of
information into a system global temporary table named EXPLAIN_CALL_PLANS. The result is visible only
within the session where the EXPLAIN PLAN call is executed.
EXPLAIN PLAN generates the plan information by using the given SQLScript Engine plan structure. It traverses
the plan structure and records the information corresponding to each SQLScript Engine operator.
When another procedure is invoked inside a procedure, EXPLAIN PLAN inserts the results of the
invoked procedure (callee) under the invoke operator (caller), although the actually invoked procedure is a sub-
plan that is not located under the invoke operator.
Another case is the else operator. EXPLAIN PLAN generates a dummy else operator to represent alternative
operators in the condition operator.
You can retrieve the result by selecting from the table EXPLAIN_CALL_PLANS.
For EXPLAIN PLAN FOR <select query>, the HDB client deletes the temporary table automatically; for EXPLAIN
PLAN FOR <call statement> this is not yet supported. To delete rows in the table, execute a DELETE statement on
the EXPLAIN_CALL_PLANS table or close the current session.
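As an illustration, explaining a procedure call and inspecting the result might look like the following sketch (the schema, procedure, and statement names are assumptions):

```sql
-- Record the compiled plan of a procedure call under a statement name
EXPLAIN PLAN SET STATEMENT_NAME = 'call_demo' FOR CALL myschema.myproc(10, ?);

-- The result is visible only within the current session
SELECT * FROM EXPLAIN_CALL_PLANS WHERE statement_name = 'call_demo';

-- Automatic cleanup is not yet supported for CALL; delete rows manually
DELETE FROM EXPLAIN_CALL_PLANS;
```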
Note
Client integration is not available yet. You need to use the SQL statement above to retrieve the plan
information.
The SQLScript Code Analyzer consists of two built-in procedures that scan CREATE FUNCTION and CREATE
PROCEDURE statements and search for patterns indicating problems in code quality, security or performance.
Interface
The view SQLSCRIPT_ANALYZER_RULES, which lists the available rules, is defined in the following way:
RULE_NAMESPACE VARCHAR(16)
RULE_NAME VARCHAR(64)
CATEGORY VARCHAR(16)
SHORT_DESCRIPTION VARCHAR(256)
LONG_DESCRIPTION NVARCHAR(5000)
RECOMMENDATION NVARCHAR(5000)
Procedure ANALYZE_SQLSCRIPT_DEFINITION
The procedure ANALYZE_SQLSCRIPT_DEFINITION can be used to analyze the source code of a single
procedure or a single function that has not been created yet. If the source code references objects that do not
yet exist, the procedure or function cannot be analyzed.
Sample Code
) AS BUILTIN
Parameter Description
RULES Rules to be used for the analysis. Available rules can be retrieved from the view SQLSCRIPT_ANALYZER_RULES.
Procedure ANALYZE_SQLSCRIPT_OBJECTS
The procedure ANALYZE_SQLSCRIPT_OBJECTS can be used to analyze the source code of multiple already
existing procedures or functions.
Sample Code
Parameter Description
RULES Rules that should be used for the analysis. Available rules can be retrieved from the view SQLSCRIPT_ANALYZER_RULES.
OBJECT_DEFINITIONS Contains the names and definitions of all objects that were analyzed, including those without any findings.
Rules
UNNECESSARY_VARIABLE
For each variable, the rule tests whether it is used by any output parameter of the procedure or whether it
influences the outcome of the procedure. Statements relevant for the outcome can be DML statements, implicit
result sets, or conditions of control statements.
UNUSED_VARIABLE_VALUE
If a value assigned to a variable is not used in any other statement, the assignment can be removed. In the case
of default assignments in DECLARE statements, this means the default value is never used.
Parameters of type string should always be checked for SQL injection safety if they are used in dynamic SQL.
This rule checks whether the function is_sql_injection_safe is called for every parameter of that type.
If the condition is more complex (for example, more than one variable is checked in one condition), a warning
is displayed, because it is then only possible to check whether some execution of the dynamic SQL has passed
the SQL injection check.
SINGLE_SPACE_LITERAL
This rule searches for string literals consisting of only one space. If ABAP VARCHAR MODE is used, such string
literals are treated as empty strings. In this case, CHAR(32) can be used instead of ' '.
COMMIT_OR_ROLLBACK_IN_DYNAMIC_SQL
This rule detects dynamic SQL that uses the COMMIT or ROLLBACK statements. It is recommended to use
COMMIT and ROLLBACK directly in SQLScript, thus eliminating the need of dynamic SQL.
● It can only check dynamic SQL that uses a constant string (for example, EXEC 'COMMIT';). It cannot detect
dynamic SQL that evaluates any expression (for example, EXEC 'COM' || 'MIT';)
● It can only detect simple strings containing COMMIT or ROLLBACK and whitespaces, as well as simple
comments. More complex strings might not be detected by this rule.
USE_OF_SELECT_IN_SCALAR_UDF
This rule detects and reports SELECT statements in scalar UDFs. SELECT statements in scalar UDFs can affect
performance. If table operations are really needed, procedures or table UDFs should be used instead.
Sample Code
USE_OF_SELECT_IN_SCALAR_UDF
DO BEGIN
  tab = SELECT RULE_NAMESPACE, RULE_NAME, CATEGORY
        FROM SQLSCRIPT_ANALYZER_RULES
        WHERE RULE_NAME = 'USE_OF_SELECT_IN_SCALAR_UDF';
  CALL ANALYZE_SQLSCRIPT_DEFINITION('
    CREATE FUNCTION f1(a INT) RETURNS b INT AS
    BEGIN
      DECLARE x INT;
      SELECT COUNT(*) INTO x FROM _sys_repo.active_object;
      IF :a > :x THEN
        SELECT COUNT(*) INTO b FROM _sys_repo.inactive_object;
      ELSE
        b = 100;
      END IF;
    END;', :tab, res);
  SELECT * FROM :res;
END;
USE_OF_UNASSIGNED_SCALAR_VARIABLE
The rule detects variables which are used but were never assigned explicitly. Those variables still have their
default value when used, which might be undefined. It is recommended to assign a default value (that can be
NULL) to be sure that you get the intended value when you read from the variable. If this rule returns a warning
or an error, check whether you have assigned a value to the wrong variable in your code. Always rerun this rule
after changing the code, since it is possible that multiple errors trigger only a single message and an error still
persists.
For every DECLARE statement this rule returns one of the following:
● <nothing>: if the variable is always assigned before use or not used. Everything is correct.
● Variable <variable> may be unassigned: if there is at least one branch, where the variable is unassigned
when used, even if the variable is assigned in other branches.
● Variable <variable> is used but was never assigned explicitly: if the variable will never have a value assigned
when used.
DML_STATEMENTS_IN_LOOPS
The rule detects the following DML statements inside loops: INSERT, UPDATE, DELETE, REPLACE/UPSERT.
Sometimes it is possible to rewrite the loop and use a single DML statement instead to improve performance.
In the following example a table is updated in a loop. This code can be rewritten to update the table with a
single DML statement.
Sample Code
DO BEGIN
tab = select rule_namespace, rule_name, category from
sqlscript_analyzer_rules;
call analyze_sqlscript_definition('
// Optimized version
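Because the sample above is truncated, here is a hedged sketch of the rewrite this rule suggests (the table and column names are assumptions):

```sql
-- Row-by-row update inside a loop (flagged by DML_STATEMENTS_IN_LOOPS)
CREATE PROCEDURE raise_prices_loop AS
BEGIN
  DECLARE i INT;
  FOR i IN 1..100 DO
    UPDATE prices SET price = price * 1.1 WHERE id = :i;
  END FOR;
END;

-- Optimized version: one set-based DML statement does the same work
CREATE PROCEDURE raise_prices_set AS
BEGIN
  UPDATE prices SET price = price * 1.1 WHERE id BETWEEN 1 AND 100;
END;
```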
USE_OF_CE_FUNCTIONS
The rule checks whether Calculation Engine Plan Operators (CE Functions) are used. Since they make
optimization more difficult and lead to performance problems, they should be avoided. For more information
and how to replace them using only plain SQL, see Calculation Engine Plan Operators [page 179].
Examples
Sample Code
DO BEGIN
tab = SELECT rule_namespace, rule_name, category FROM
SQLSCRIPT_ANALYZER_RULES; -- selects all rules
CALL ANALYZE_SQLSCRIPT_DEFINITION('
CREATE PROCEDURE UNCHECKED_DYNAMIC_SQL(IN query NVARCHAR(500)) AS
BEGIN
DECLARE query2 NVARCHAR(500) = ''SELECT '' || query || '' from
tab'';
EXEC :query2;
query2 = :query2; --unused variable value
END', :tab, res);
SELECT * FROM :res;
END;
DO BEGIN
tab = SELECT rule_namespace, rule_name, category FROM
SQLSCRIPT_ANALYZER_RULES;
to_scan = SELECT schema_name, procedure_name object_name, definition
FROM sys.procedures
WHERE procedure_type = 'SQLSCRIPT2' AND schema_name
IN('MY_SCHEMA','OTHER_SCHEMA')
ORDER BY procedure_name;
CALL analyze_sqlscript_objects(:to_scan, :tab, objects, findings);
SELECT t1.schema_name, t1.object_name, t2.*, t1.object_definition
FROM :findings t2
JOIN :objects t1
ON t1.object_definition_id = t2.object_definition_id;
END;
SQLScript Plan Profiler is a new performance analysis tool designed mainly from the perspective of stored
procedures and functions. When SQLScript Plan Profiler is enabled, a single tabular result per call statement is
generated. The result table contains start time, end time, CPU time, wait time, thread ID, and some other
details for each predefined operation. The predefined operations can be anything that is considered significant
for analyzing the engine performance of stored procedures and functions, covering both compilation and
execution time. The tabular results are displayed in the new monitoring view M_SQLSCRIPT_PLAN_PROFILES
in HANA.
There are two ways to start the profiler and to check the results.
ALTER SYSTEM
You can use the ALTER SYSTEM command with the following syntax:
Code Syntax
● START
When the START command is executed, the profiler checks if the exact same filter has already been
applied and if so, the command is ignored. You can check the status of enabled profilers in the monitoring
view M_SQLSCRIPT_PLAN_PROFILERS. Results are available only after the procedure execution has
finished. If you apply a filter by procedure name, only the outermost procedure calls are returned.
Sample Code
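The full sample is not reproduced here; hedged examples of the START command, mirroring the filter forms used with CLEAR, could look like this:

```sql
-- Start profiling, filtered by procedure name
ALTER SYSTEM START SQLSCRIPT PLAN PROFILER FOR PROCEDURE S1.P1;

-- Start profiling, filtered by session
ALTER SYSTEM START SQLSCRIPT PLAN PROFILER FOR SESSION 222222;

-- Check which profilers are currently enabled
SELECT * FROM M_SQLSCRIPT_PLAN_PROFILERS;
```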
● STOP
When the STOP command is executed, the profiler disables all started commands, if they are included in
the filter condition (no exact filter match is needed). The STOP command does not affect the results that
are already profiled.
● CLEAR
The CLEAR command is independent of the status of profilers (running or stopped). The CLEAR command
clears profiled results based on the PROCEDURE_CONNECTION_ID, PROCEDURE_SCHEMA_NAME, and
PROCEDURE_NAME in M_SQLSCRIPT_PLAN_PROFILER_RESULTS. If the results are not cleared, the
oldest data will be automatically deleted when the maximum capacity is reached.
ALTER SYSTEM CLEAR SQLSCRIPT PLAN PROFILER FOR SESSION 222222; -- deletes records with PROCEDURE_CONNECTION_ID = 222222
ALTER SYSTEM CLEAR SQLSCRIPT PLAN PROFILER FOR PROCEDURE S1.P1; -- deletes records with PROCEDURE_SCHEMA_NAME = S1 and PROCEDURE_NAME = P1
ALTER SYSTEM CLEAR SQLSCRIPT PLAN PROFILER; -- deletes all records
Note
The <filter> does not check the validity or existence of <session id> or <procedure_id>.
SQL Hint
You can use the SQL HINT command to start the profiler with the following syntax:
Code Syntax
You can check the status of the profiler by using the following command:
Sample Code
Example
So far this document has introduced the syntax and semantics of SQLScript. This knowledge is sufficient for
mapping functional requirements to SQLScript procedures. However, besides functional correctness, non-
functional characteristics of a program play an important role for user acceptance. For instance, one of the
most important non-functional characteristics is performance.
The following optimizations all apply to statements in SQLScript. The optimizations presented here cover how
dataflow exploits parallelism in the SAP HANA database.
● Reduce Complexity of SQL Statements: Break up a complex SQL statement into many simpler ones. This
makes a SQLScript procedure easier to comprehend.
● Identify Common Sub-Expressions: If you split a complex query into logical sub queries it can help the
optimizer to identify common sub expressions and to derive more efficient execution plans.
● Multi-Level-Aggregation: In the special case of multi-level aggregations, SQLScript can exploit results at a
finer grouping for computing coarser aggregations and return the different granularities of groups in
distinct table variables. This could save the client the effort of reexamining the query result.
● Understand the Costs of Statements: Employ the explain plan facility to investigate the performance
impact of different SQL queries.
● Exploit Underlying Engine: SQLScript can exploit the specific capabilities of the OLAP- and JOIN-Engine by
relying on views modeled appropriately.
● Reduce Dependencies: As SQLScript is translated into a dataflow graph, and independent paths in this
graph can be executed in parallel, reducing dependencies enables better parallelism, and thus better
performance.
● Avoid Mixing Calculation Engine Plan Operators and SQL Queries: Mixing calculation engine plan operators
and SQL may lead to missed opportunities to apply optimizations as calculation engine plan operators and
SQL statements are optimized independently.
● Avoid Using Cursors: Check if use of cursors can be replaced by (a flow of) SQL statements for better
opportunities for optimization and exploiting parallel execution.
● Avoid Using Dynamic SQL: Executing dynamic SQL is slow because compile time checks and query
optimization must be done for every invocation of the procedure. Another related problem is security
because constructing SQL statements without proper checks of the variables used may harm security.
Variables in SQLScript enable you to arbitrarily break up a complex SQL statement into many simpler ones.
This makes a SQLScript procedure easier to comprehend. To illustrate this point, consider the following query:
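The original listing is not reproduced here; a sketch of such a split into two statements linked by table variables, with assumed table and column names, could look like this:

```sql
-- Publishers with more than 100 books
big_pub_ids = SELECT publisher AS pid
              FROM books
              GROUP BY publisher
              HAVING COUNT(isbn) > 100;

-- All books of those publishers, joined back via the table variable
big_pub_books = SELECT title, price
                FROM :big_pub_ids, books
                WHERE pid = publisher;
```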
Writing this query as a single SQL statement requires either the definition of a temporary view (using WITH), or
the multiple repetition of a sub query. The two statements above break the complex query into two simpler
SQL statements that are linked via table variables. This query is much easier to comprehend because the
names of the table variables convey the meaning of the query and they also break the complex query into
smaller logical pieces.
The SQLScript compiler will combine these statements into a single query or identify the common sub-
expression using the table variables as hints. The resulting application program is easier to understand without
sacrificing performance.
The query examined in the previous sub section contained common sub-expressions. Such common sub-
expressions might introduce expensive repeated computation that should be avoided. For query optimizers it is
very complicated to detect common sub-expressions in SQL queries. If you break up a complex query into
logical sub queries it can help the optimizer to identify common sub-expressions and to derive more efficient
execution plans. If in doubt, you should employ the EXPLAIN plan facility for SQL statements to investigate how
the HDB treats a particular statement.
Computing multi-level aggregation can be achieved by using grouping sets. The advantage of this approach is
that multiple levels of grouping can be computed in a single SQL statement.
To retrieve the different levels of aggregation, the client typically has to examine the result repeatedly, for
example by filtering by NULL on the grouping attributes.
In the special case of multi-level aggregations, SQLScript can exploit results at a finer grouping for computing
coarser aggregations and return the different granularities of groups in distinct table variables. This could save
the client the effort of re-examining the query result. Consider the above multi-level aggregation expressed in
SQLScript.
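A hedged sketch of such a multi-level aggregation in SQLScript (table and column names are assumptions): the finer grouping is computed once and then reused for the coarser level, with each granularity returned in its own table variable.

```sql
-- Finer granularity: per publisher and year
books_pub_year = SELECT publisher, year, SUM(price) AS sum_price
                 FROM books
                 GROUP BY publisher, year;

-- Coarser granularity, derived from the finer result
books_pub = SELECT publisher, SUM(sum_price) AS sum_price
            FROM :books_pub_year
            GROUP BY publisher;
```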
It is important to keep in mind that even though the SAP HANA database is an in-memory database engine and
that the operations are fast, each operation has its associated costs and some are much more costly than
others.
As an example, calculating a UNION ALL of two result sets is cheaper than calculating a UNION of the same
result sets because of the duplicate elimination the UNION operation performs. The calculation engine plan
operator CE_UNION_ALL (and also UNION ALL) basically stacks the two input tables over each other by using
references without moving any data within the memory. Duplicate elimination as part of UNION, in contrast,
requires either sorting or hashing the data to realize the duplicate removal, and thus a materialization of data.
Various examples similar to these exist. Therefore it is important to be aware of such issues and, if possible, to
avoid these costly operations.
You can get the query plan from the view SYS.QUERY_PLANS. The view is shared by all users. Here is an
example of reading a query plan from the view.
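The original example is not reproduced here; a sketch of the general pattern (the statement name and query are assumptions) could look like this:

```sql
-- Record the plan under a statement name ...
EXPLAIN PLAN SET STATEMENT_NAME = 'plan_demo' FOR
  SELECT publisher, SUM(price) FROM books GROUP BY publisher;

-- ... and read it back from the shared view
SELECT * FROM SYS.QUERY_PLANS WHERE statement_name = 'plan_demo';
```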
Sometimes alternative formulations of the same query can lead to faster response times. Consequently,
reformulating performance-critical queries and examining their plans may lead to better performance.
The SAP HANA database provides a library of application-level functions which handle frequent tasks, e.g.
currency conversions. These functions can be expensive to execute, so it makes sense to reduce the input as
much as possible prior to calling the function.
SQLScript can exploit the specific capabilities of the built-in functions or SQL statements. For instance, if your
data model is a star schema, it makes sense to model the data as an Analytic view. This allows the SAP HANA
database to exploit the star schema when computing joins producing much better performance.
Similarly, if the application involves complex joins, it might make sense to model the data either as an Attribute
view or a Graphical Calculation view. Again, this conveys additional information on the structure of the data
which is exploited by the SAP HANA database for computing joins. When deciding to use Graphical Calculation
views involving complex joins refer to SAP note 1857202 for details on how, and under which conditions, you
may benefit from SQL Engine processing with Graphical Calculation views.
Finally, note that not assigning the result of an SQL query to a table variable will return the result of this query
directly to the client as a result set. In some cases the result of the query can be streamed (or pipelined) to the
client. This can be very effective as this result does not need to be materialized on the server before it is
returned to the client.
One of the most important methods for speeding up processing in the SAP HANA database is a massive
parallelization of executing queries. In particular, parallelization is exploited at multiple levels of granularity: For
example, the requests of different users can be processed in parallel, and also single relational operators within
a query are executed on multiple cores in parallel. It is also possible to execute different statements of a single
SQLScript in parallel if these statements are independent of each other. Remember that SQLScript is
translated into a dataflow graph, and independent paths in this graph can be executed in parallel.
From an SQLScript developer perspective, we can support the database engine in its attempt to parallelize
execution by avoiding unnecessary dependencies between separate SQL statements, and also by using
declarative constructs if possible. The former means avoiding variable references, and the latter means
avoiding imperative features, for example cursors.
Best Practices: Avoid Mixing Calculation Engine Plan Operators and SQL Queries
The semantics of relational operations as used in SQL queries and of calculation engine operations are different.
In the calculation engine, operations are instantiated by the query that is executed on top of the generated
data flow graph.
Therefore, the query can significantly change the semantics of the data flow graph. For example, consider a
calculation view containing an aggregation node (CE_AGGREGATION) defined on publisher and year, which is
queried using only the attribute publisher (but not year). The grouping on year would then be removed from the
aggregation. Evidently this reduces the granularity of the grouping, and thus changes the semantics of the model.
In a nested SQL query containing a grouping on publisher and year, on the other hand, this aggregation level
would not be changed if an enclosing query only queries on publisher.
Because of the different semantics outlined above, the optimization of a mixed data flow using both types of
operations is currently limited. Hence, one should avoid mixing both types of operations in one procedure.
While the use of cursors is sometimes required, they imply row-at-a-time processing. As a consequence,
opportunities for optimization by the SQL engine are missed. You should therefore consider replacing cursor
loops with SQL statements, as follows:
Read-Only Access
Computing this aggregate in the SQL engine may result in parallel execution on multiple CPUs inside the SQL
executor.
Similar to updates and deletes, computing this statement in the SQL engine reduces the calls through the
runtime stack of the SAP HANA database, and potentially benefits from internal optimizations like buffering or
parallel execution.
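As an illustration (the procedure and table names are assumptions), a cursor-based aggregation and its set-based replacement might look like this:

```sql
-- Row-at-a-time: the SQL engine cannot parallelize this loop
CREATE PROCEDURE total_price_cursor(OUT total DECIMAL(20,2))
AS
BEGIN
  DECLARE CURSOR c_books FOR SELECT price FROM books;
  total = 0;
  FOR r AS c_books DO
    total = :total + r.price;
  END FOR;
END;

-- Set-based: a single aggregate the SQL engine can parallelize
CREATE PROCEDURE total_price_set(OUT total DECIMAL(20,2))
AS
BEGIN
  SELECT SUM(price) INTO total FROM books;
END;
```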
Dynamic SQL is a powerful way to express application logic, as it allows SQL statements to be constructed at
execution time of a procedure. However, executing dynamic SQL is slow, because compile-time checks and
query optimization must be done for every invocation of the procedure. So when there is an alternative to
dynamic SQL using variables, it should be used instead.
Another related problem is security because constructing SQL statements without proper checks of the
variables used can create a security vulnerability, for example, SQL injection. Using variables in SQL
statements prevents these problems because type checks are performed at compile time and parameters
cannot inject arbitrary SQL code.
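A hedged sketch of both approaches (the procedure and table names are assumptions):

```sql
-- Dynamic SQL: recompiled on every invocation; unchecked input can inject SQL
CREATE PROCEDURE get_price_dynamic(IN in_title NVARCHAR(100))
AS
BEGIN
  EXEC 'SELECT price FROM books WHERE title = ''' || :in_title || '''';
END;

-- Preferred: the variable is used directly, so types are checked at
-- compile time and no SQL code can be injected via the parameter
CREATE PROCEDURE get_price_static(IN in_title NVARCHAR(100),
                                  OUT out_price DECIMAL(10,2))
AS
BEGIN
  SELECT price INTO out_price FROM books WHERE title = :in_title;
END;
```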
This section contains information about creating applications with SQLScript for SAP HANA.
In this section we briefly summarize the concepts employed by the SAP HANA database for handling
temporary data.
Table Variables are used to conceptually represent tabular data in the data flow of a SQLScript procedure. This
data may or may not be materialized into internal tables during execution. This depends on the optimizations
applied to the SQLScript procedure. Their main use is to structure SQLScript logic.
Temporary Tables are tables that exist within the lifetime of a session. One connection can have
multiple sessions. In most cases, disconnecting and reestablishing a connection is used to terminate a session.
The schema of global temporary tables is visible for multiple sessions. However, the data stored in this table is
private to each session. In contrast, for local temporary tables neither the schema nor the data is visible
outside the present session. In most aspects, temporary tables behave like regular column tables.
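For illustration (table and column names are assumptions), the two variants are created as follows:

```sql
-- Schema visible to all sessions; data private to each session
CREATE GLOBAL TEMPORARY TABLE my_global_tmp (id INT, val NVARCHAR(20));

-- Neither schema nor data visible outside the current session
-- (local temporary table names start with #)
CREATE LOCAL TEMPORARY TABLE #my_local_tmp (id INT, val NVARCHAR(20));
```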
Persistent data structures, such as sequences, are sometimes needed only within a procedure call. However,
sequences are always globally defined and visible (assuming the correct privileges). For temporary usage, even
in the presence of concurrent invocations of a procedure, you can invent a naming schema to avoid clashes
between such sequences. A sequence following this schema can then be created using dynamic SQL.
Ranking can be performed using a Self-Join that counts the number of items that would get the same or lower
rank. This idea is implemented in the sales statistical example below.
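The idea can be sketched as follows (the books table and its columns are assumptions): for each row, count how many rows would receive the same or a better rank.

```sql
-- Rank books by price: a row's rank is the number of rows whose
-- price is greater than or equal to its own
SELECT b1.title, b1.price,
       (SELECT COUNT(*) FROM books b2
        WHERE b2.price >= b1.price) AS book_rank
FROM books b1
ORDER BY book_rank;
```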
In this document we have discussed the syntax for creating SQLScript procedures and calling them. Besides
the SQL command console for invoking a procedure, calls to SQLScript can also be embedded into client code.
In this section we present examples of how this can be done.
The best way to call SQLScript from ABAP is to create a procedure proxy, which can be natively called from
ABAP by using the built-in command CALL DATABASE PROCEDURE.
The SQLScript procedure is created as usual in the SAP HANA studio with the SAP HANA Modeler. After
this, a procedure proxy can be created using the ABAP Development Tools for Eclipse. In the procedure proxy,
the type mapping between ABAP and SAP HANA data types can be adjusted. The procedure proxy is transported
normally with the ABAP transport system, while the SAP HANA procedure may be transported within a delivery
unit as a TLOGO object.
Calling the procedure in ABAP is very simple. The example below shows calling a procedure with two inputs
(one scalar, one table) and one (table) output parameter:
Using the connection clause of the CALL DATABASE PROCEDURE command, it is also possible to call a
database procedure using a secondary database connection. Please consult the ABAP help for detailed
instructions on how to use the CALL DATABASE PROCEDURE command and for the exceptions that may be raised.
It is also possible to create procedure proxies with an ABAP API programmatically. Please consult the
documentation of the class CL_DBPROC_PROXY_FACTORY for more information on this topic.
Using ADBC
*&---------------------------------------------------------------------*
*& Report ZRS_NATIVE_SQLSCRIPT_CALL
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
report zrs_native_sqlscript_call.
parameters:
con_name type dbcon-con_name default 'DEFAULT'.
Output:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.CallableStatement;
import java.sql.ResultSet;
…
import java.sql.SQLException;

CallableStatement cSt = null;
String sql = "call SqlScriptDocumentation.getSalesBooks(?,?,?,?)";
ResultSet rs = null;
Connection conn = getDBConnection(); // establish connection to database using
jdbc
try {
cSt = conn.prepareCall(sql);
if (cSt == null) {
System.out.println("error preparing call: " + sql);
return;
}
cSt.setFloat(1, 1.5f);
cSt.setString(2, "'EUR'");
cSt.setString(3, "books");
int res = cSt.executeUpdate();
System.out.println("result: " + res);
do {
rs = cSt.getResultSet();
while (rs != null && rs.next()) {
System.out.println("row: " + rs.getString(1) + ", " +
rs.getDouble(2) + ", " + rs.getString(3));
}
} while (cSt.getMoreResults());
} catch (Exception se) {
se.printStackTrace();
} finally {
if (rs != null)
rs.close();
if (cSt != null)
cSt.close();
}
Given procedure:
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Data.Common;
using ADODB;
using System.Data.SqlClient;
namespace NetODBC
{
class Program
{
static void Main(string[] args)
{
try
{
DbConnection conn;
DbProviderFactory _DbProviderFactoryObject;
String connStr = "DRIVER={HDBODBC32};UID=SYSTEM;PWD=<password>;
SERVERNODE=<host>:<port>;DATABASE=SYSTEM";
String ProviderName = "System.Data.Odbc";
_DbProviderFactoryObject =
DbProviderFactories.GetFactory(ProviderName);
conn = _DbProviderFactoryObject.CreateConnection();
conn.ConnectionString = connStr;
conn.Open();
System.Console.WriteLine("Connect to HANA database
successfully");
DbCommand cmd = conn.CreateCommand();
//call Stored Procedure
cmd = conn.CreateCommand();
cmd.CommandText = "call test_pro1(?,?)";
DbParameter inParam = cmd.CreateParameter();
inParam.Direction = ParameterDirection.Input;
inParam.Value = "asc";
cmd.Parameters.Add(inParam);
DbParameter outParam = cmd.CreateParameter();
outParam.Direction = ParameterDirection.Output;
outParam.ParameterName = "a";
outParam.DbType = DbType.Int32;
cmd.Parameters.Add(outParam);
DbDataReader reader = cmd.ExecuteReader();
System.Console.WriteLine("Output parameter = " + outParam.Value);
reader.Read();
String row1 = reader.GetString(0);
System.Console.WriteLine("row1=" + row1);
}
catch(Exception e)
{
System.Console.WriteLine("Operation failed");
System.Console.WriteLine(e.Message);
}
The examples used throughout this manual make use of various predefined code blocks. These code snippets
are presented below.
18.1.1 ins_msg_proc
This code is used in the examples of this reference manual to store outputs, so that you can see the way the
examples work. It simply stores text along with a time stamp of the entry.
Before you can use this procedure, you must create the following table.
To view the contents of the message_box, you select the messages in the table.
For information about the capabilities available for your license and installation scenario, refer to the Feature
Scope Description (FSD) for your specific SAP HANA version on the SAP HANA Platform webpage.
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
● Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:
● The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
● SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
● Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering a SAP-hosted Web site. By using such
links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this
information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.
Gender-Related Language
We try not to use gender-specific word forms and formulations. As appropriate for context and readability, SAP may use masculine word forms to refer to all genders.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.