JDBC V2 Connector
Informatica Cloud Data Integration JDBC V2 Connector
October 2023
© Copyright Informatica LLC 1993, 2023
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial
computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such,
the use, duplication, disclosure, modification, and adaptation is subject to the restrictions and license terms set forth in the applicable Government contract, and, to the
extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License.
Informatica, the Informatica logo, Informatica Cloud, and PowerCenter are trademarks or registered trademarks of Informatica LLC in the United States and many
jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company
and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties. Required third party notices are included with the product.
DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation
is subject to change at any time without notice.
NOTICES
This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES
OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH
OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, report them to us at
[email protected].
Informatica products are warranted according to the terms and conditions of the agreements under which they are provided. INFORMATICA PROVIDES THE
INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT.
Preface
Use JDBC V2 Connector to learn how to use the JDBC V2 connection to read from or write to Aurora
PostgreSQL and other databases that support the Type 4 JDBC driver by using Data Integration. Learn to
create a connection, develop and run mappings, mapping tasks, dynamic mapping tasks, and data transfer
tasks in Data Integration.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at [email protected].
Informatica Intelligent Cloud Services Communities
Use the Informatica Intelligent Cloud Services Community to discuss and resolve technical issues and to learn from other Informatica users:
https://network.informatica.com/community/informatica-network/products/cloud-integration
Developers can learn more and share tips at the Cloud Developer community:
https://network.informatica.com/community/informatica-network/products/cloud-integration/cloud-developers
Informatica Intelligent Cloud Services Marketplace
Visit the Informatica Marketplace to try and buy Data Integration Connectors, templates, and mapplets:
https://marketplace.informatica.com/
Informatica Knowledge Base
Use the Informatica Knowledge Base to find product resources such as how-to articles, best practices, and answers to frequently asked questions.
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or ideas about the Knowledge Base, contact the Informatica Knowledge Base team at [email protected].
Informatica Intelligent Cloud Services Trust Center
Subscribe to the Informatica Intelligent Cloud Services Trust Center to receive upgrade, maintenance, and
incident notifications. The Informatica Intelligent Cloud Services Status page displays the production status
of all the Informatica cloud products. All maintenance updates are posted to this page, and during an outage,
it will have the most current information. To ensure you are notified of updates and outages, you can
subscribe to receive updates for a single component or all Informatica Intelligent Cloud Services
components. Subscribing to all components is the best way to be certain you never miss an update.
To subscribe, on the Informatica Intelligent Cloud Services Status page, click SUBSCRIBE TO UPDATES. You
can choose to receive notifications sent as emails, SMS text messages, webhooks, RSS feeds, or any
combination of the four.
Informatica Global Customer Support
To find online support resources on the Informatica Network, click Contact Support in the Informatica
Intelligent Cloud Services Help menu to go to the Cloud Support page. The Cloud Support page includes
system status information and community discussions. Log in to Informatica Network and click Need Help to
find additional resources and to contact Informatica Global Customer Support through email.
The telephone numbers for Informatica Global Customer Support are available from the Informatica web site
at https://www.informatica.com/services-and-training/support-services/contact-us.html.
Chapter 1
Introduction to JDBC V2 Connector
You can use JDBC V2 Connector to connect to Aurora PostgreSQL and databases that support the JDBC
Type 4 driver with Data Integration.
When you use JDBC V2 Connector, you can create a JDBC V2 connection and use the connection in Data
Integration mappings and tasks. You can switch mappings to advanced mode to include transformations and
functions that enable advanced functionality.
In advanced mode, the mapping can run on Amazon Web Services, Google Cloud Platform, the Microsoft
Azure environment, Azure Kubernetes Service (AKS), or Elastic Kubernetes Service (EKS).
When you use JDBC V2 Connector, you can include the following Data Integration assets:
• Mapping
• Mapping task
• Dynamic mapping task
• Data transfer task
For more information about configuring assets and transformations, see Mappings, Transformations, and
Tasks in the Data Integration documentation.
Prerequisite tasks for using JDBC V2 Connector
Before you use JDBC V2 objects as sources or targets in tasks, you must install the Type 4 JDBC driver on
the Secure Agent machine.
1. Download the latest Type 4 JDBC driver version that your database supports from the third-party vendor
site.
If you want to use JDBC V2 Connector to connect to Aurora PostgreSQL, download the Aurora
PostgreSQL driver. Informatica has certified Aurora PostgreSQL driver 42.2.6 for JDBC V2 Connector.
2. Install the Type 4 JDBC driver for the database on the Secure Agent machine and perform the following
tasks:
a. Navigate to the following directory on the Secure Agent machine: <Secure Agent installation directory>/ext/connectors/thirdparty/
b. Create a folder and add the driver based on the type of mapping that you want to configure, as shown in the layout sketch after these steps:
• For mappings that do not run in advanced mode, add the driver to the following folder: informatica.jdbc_v2/common
• For mappings in advanced mode, add the driver to both of the following folders: informatica.jdbc_v2/common and informatica.jdbc_v2/spark
3. Restart the Secure Agent after you copy the driver.
If you update the driver on the Secure Agent machine while a mapping in advanced mode is running, you must restart the Secure Agent.
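For example, with the certified Aurora PostgreSQL driver (the jar file name here is illustrative), the resulting layout on the Secure Agent machine looks like this:
<Secure Agent installation directory>/ext/connectors/thirdparty/
    informatica.jdbc_v2/common/postgresql-42.2.6.jar
    informatica.jdbc_v2/spark/postgresql-42.2.6.jar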
To use the serverless runtime environment, perform the following prerequisite tasks on Amazon S3:
1. Create the following structure for the serverless agent configuration in AWS: <Supplementary file location>/serverless_agent_config
2. Add the JDBC driver files in the Amazon S3 bucket in the following location in your AWS account:
<Supplementary file location>/serverless_agent_config/common
3. For mappings in advanced mode, additionally add the JDBC driver files in the following location in the
Amazon S3 bucket:
<Supplementary file location>/serverless_agent_config/spark
4. Copy the following code snippet to a text editor based on the mappings that you want to run in a
serverless environment:
• For mappings that do not run in advanced mode, add the following code snippet:
version: 1
agent:
  dataIntegrationServer:
    autoDeploy:
      jdbcv2:
        common:
          - fileCopy:
              sourcePath: common/<Driver_filename>
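As a minimal filled-in sketch, assuming the certified Aurora PostgreSQL driver jar (the file name is illustrative):
version: 1
agent:
  dataIntegrationServer:
    autoDeploy:
      jdbcv2:
        common:
          - fileCopy:
              sourcePath: common/postgresql-42.2.6.jar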
For more information about serverless runtime environment properties, see Administrator in the Data
Integration documentation.
To access an SSL-enabled database in a mapping in advanced mode, specify the location of the SSL certificate through the advanced session properties of the mapping task:
1. In the advanced session properties of the mapping task, select advanced.custom.property as the session property name.
2. In the session property value, specify the following value:
Spark.NeedUserCredentialFileForAdapter=true&:Spark.UserCredentialDirOnDIS=<Location of the SSL certificate on the Secure Agent machine>
• Spark.NeedUserCredentialFileForAdapter. When you set this property to true, the Secure Agent copies
the SSL certificate from the Secure Agent machine to the advanced cluster.
• Spark.UserCredentialDirOnDIS. When you set this property to the location of the SSL certificates, the Secure Agent uses the specified location to get the SSL certificate. This property is optional. If you do not specify this property, the Secure Agent uses the following default location: /infa/user/credentials
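For example, the completed session property might look like the following, assuming the certificates are in the default location:
advanced.custom.property = Spark.NeedUserCredentialFileForAdapter=true&:Spark.UserCredentialDirOnDIS=/infa/user/credentials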
JDBC V2 connections
Create a JDBC V2 connection to access data from Aurora PostgreSQL or any database that supports the
Type 4 JDBC driver.
You can use a JDBC V2 connection to specify sources, lookup objects, and targets in a mapping. When you
create a JDBC V2 connection, you enter properties specific to the database to which you want to connect.
You can also parameterize a JDBC V2 connection with values specified in a parameter file when you
configure a mapping in advanced mode.
You can create a JDBC V2 connection on the Connections page in the Administrator service or in the Data
Integration Mapping Designer. The connection is available to the entire organization.
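For example, a connection string for Aurora PostgreSQL follows the standard PostgreSQL JDBC URL format, where the host, port, and database name are placeholders:
jdbc:postgresql://<host>:5432/<database>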
The following table describes the JDBC V2 connection properties:
Runtime Environment
    The name of the runtime environment where you want to run tasks. Select a Secure Agent or serverless runtime environment.
Additional Security Properties
    Masks sensitive and confidential data of the connection string that you don't want to display in the session log. Specify the part of the connection string that you want to mask.
    When you create a connection, the string you enter in this field appends to the string that you specified in the Connection String field.
Enable Auto Commit
    Specifies whether the driver supports connections that automatically commit data to the database when you run an SQL statement.
    When disabled, the driver does not support connections that automatically commit data even if auto-commit mode is enabled in the JDBC driver.
    Default is disabled.
Support Mixed-Case Identifiers
    Indicates whether the database supports case-sensitive identifiers.
    When enabled, the Secure Agent encloses all identifiers within the character selected for the SQL Identifier Character property.
    Default is disabled.
SQL Identifier Character
    Type of character that the database uses to enclose delimited identifiers in SQL queries. The available characters depend on the database type.
    Select None if the database uses regular identifiers. When the Secure Agent generates SQL queries, it does not place delimited characters around any identifiers.
    Select a character if the database uses delimited identifiers. When the Secure Agent generates SQL queries, it encloses delimited identifiers within this character.
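For example, if you select the double quote character for a database that uses mixed-case identifiers, the Secure Agent generates queries that enclose each identifier, as in this sketch with hypothetical table and column names:
SELECT "OrderId", "CustomerName"
FROM "Sales"."Orders"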
Mappings and mapping tasks with JDBC V2 Connector
Add the Source and Target transformations in the mapping canvas and configure the JDBC V2 source and target properties. In advanced mode, the Mapping Designer updates the mapping canvas to include transformations and functions that enable advanced functionality.
JDBC V2 sources in mappings
Specify the name and description of the JDBC V2 source. Configure the source and advanced source properties for the JDBC V2 object.
You can parameterize the source and advanced source properties for the JDBC source object in a mapping.
You can also parameterize the JDBC V2 source object with values specified in a parameter file when you
configure a mapping in advanced mode.
The following table describes the JDBC V2 source properties that you can configure in a Source transformation:
Source Type
    Source type. Select one of the following source types:
    - Single Object. Select to specify a single JDBC V2 object.
    - Multiple Objects. Select to specify multiple JDBC V2 objects. Use the advanced relationship option to define the relationship for the objects that you want to join.
    - Parameter. Select to specify a parameter name. You can configure the source object in a mapping task associated with a mapping that uses this Source transformation.
    - Query. Select to define a valid and supported SQL query in the Source transformation.
Filter
    A simple filter or an advanced filter condition to remove rows at the source. You can improve efficiency by filtering early in the data flow.
    Specify a simple filter by including a field name, operator, and value. Configure an advanced filter to define a more complex filter condition that can include multiple conditions joined by the AND or OR logical operators.
    You can choose from the following filter conditions, illustrated in the sketch after this table:
    - Not parameterized. Use a basic filter to specify the object, field, operator, and value to select specific records.
    - Completely parameterized. Use a parameter to represent the filter condition.
    - Advanced. Use an advanced filter to define a more complex filter condition.
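For example, with hypothetical field names, a simple filter selects on a single field, while an advanced filter combines several conditions:
Simple:   ORDER_STATUS = 'OPEN'
Advanced: ORDER_AMOUNT > 1000 AND (REGION = 'EMEA' OR REGION = 'APAC')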
The following table describes the JDBC V2 advanced source properties that you can configure in a Source transformation:
Pre SQL
    The SQL query that the Secure Agent runs before reading data from the source.
Post SQL
    The SQL query that the Secure Agent runs after reading data from the source.
Fetch Size
    The number of rows that the Secure Agent fetches from the database in a single call.
Table Name
    Overrides the table name used in the metadata import with the table name that you specify.
SQL Override
    The SQL statement to override the default query and the object name that is used to read data from the JDBC V2 source.
    If you join tables that are from different schemas, you must manually add the schema names, as shown in the sketch below.
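As an illustration with hypothetical schema, table, and column names, an SQL override that joins two tables from different schemas qualifies each table with its schema name:
SELECT p.patient_id, p.name, v.visit_date
FROM clinical.patients p
JOIN billing.visits v ON p.patient_id = v.patient_id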
JDBC V2 targets in mappings
Specify the name and description of the JDBC V2 target. Configure the target and advanced target properties for the JDBC V2 object in a Target transformation.
You can parameterize the target and advanced target properties for the JDBC V2 target object in a mapping. You can also parameterize the JDBC V2 target object with values specified in a parameter file when you configure a mapping in advanced mode.
The following table describes the JDBC V2 target properties that you can configure in a Target transformation:
Object
    Name of the target object. You can select an existing object.
Create New at Runtime
    Creates a target table at runtime based on the table type and the path you specify.
    To create a target table at runtime, provide the following parameters:
    - Object name. Specify the name for the target table.
    - Path. Specify the target table name and schema in the following format: <Schema-Name>/<TableType>, where the mandatory value for TableType is TABLE.
Update Columns
    The temporary key column that identifies rows in the target table to update, upsert, or delete data.
Data Driven Condition
    Flags rows for an insert, update, delete, or reject operation based on the data driven expression you specify.
    Appears only when the operation type is Data Driven.
    Note: To configure a data driven expression in a mapping, specify an expression that uses the IIF function, as shown in the sketch after this table.
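A minimal sketch of a data driven expression, assuming a hypothetical ORDER_STATUS field. DD_INSERT, DD_UPDATE, DD_DELETE, and DD_REJECT are the data driven constants that flag each row for the corresponding operation:
IIF(ORDER_STATUS = 'CANCELLED', DD_DELETE, IIF(ORDER_STATUS = 'NEW', DD_INSERT, DD_UPDATE))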
The following table describes the JDBC V2 advanced target properties that you can configure in a Target transformation:
Pre SQL
    The SQL statement to run before writing data to the target.
Post SQL
    The SQL statement to run after writing data to the target.
Truncate Target
    Truncates the target table before inserting records to the target.
Reject Truncated/Overflow Rows
    Writes truncated and overflow data to the reject file. If you select this option, the Data Integration Service sends all truncated rows and any overflow rows to the reject file.
Table Name
    Overrides the table name used in the metadata import with the table name that you specify.
Update Mode
    Determines how the Secure Agent writes data to the target. Select one of the following modes:
    - Update As Update. Updates all rows flagged for update if the entries exist in the target.
    - Update Else Insert. Updates all rows flagged for update if the entries exist in the target. If the entries do not exist, the agent inserts the entries.
Forward Rejected Rows
    Determines whether the transformation passes rejected rows to the next transformation or drops rejected rows. By default, the agent forwards rejected rows to the next transformation.
Note: Before you create a target table at runtime in PostgreSQL or Azure SQL Database, select the appropriate database type in the JDBC V2 connection.
Consider the following rules and guidelines when you create a target at runtime:
• Do not edit the metadata in the target object. If you edit the metadata, the changes are not reflected at
runtime.
• When a target table exists, the Secure Agent uses the same target table.
• When you configure an override of the schema name and table while creating a target at runtime, the
Secure Agent creates an empty table in the default database.
JDBC V2 lookups in mappings
You can parameterize the JDBC V2 lookup object with values specified in a parameter file when you configure a mapping in advanced mode.
You can create the following lookups when you configure field mappings in a mapping task:
• Connected lookup
The following table describes the JDBC V2 lookup properties that you can configure in a Lookup transformation:
Parameter
    A parameter file where you define values that you want to update without having to edit the task. Select an existing parameter for the lookup object or click New Parameter to define a new parameter for the lookup object.
    The Parameter property appears only if you select parameter as the lookup type.
    If you want to overwrite the parameter at runtime, select the Allow parameter to be overridden at run time option. When the task runs, the Secure Agent uses the parameters from the file that you specify in the advanced session properties.
Multiple Matches
    Behavior when the lookup condition returns multiple matches. Select one of the following options:
    - Return any row
    - Return all rows
    - Report error
The following table describes the JDBC V2 lookup advanced properties that you can configure in a Lookup transformation:
Pre SQL
    The SQL query that the Secure Agent runs before reading data from the source.
Post SQL
    The SQL query that the Secure Agent runs after reading data from the source.
Fetch Size
    The number of rows that the Secure Agent fetches from the database in a single call.
Table Name
    Overrides the table name used in the metadata import with the table name that you specify.
SQL Override
    The SQL statement to override the default query and the object name that is used to read data from the JDBC V2 source.
Mapping in advanced mode example
Medical investigators at a hospital network want to analyze treatment performance data for specific inpatient hospital events. The management wants to use this analysis to provide better care for patients and help achieve health equity.
To avoid low performance and high cost, the hospital plans to port its entire data set from hospital branches across the globe to Aurora PostgreSQL within a short span of time. Create a mapping in advanced mode to read data from the hospital branches and write the data to the Aurora PostgreSQL target.
Partitioning
The advanced cluster distributes rows of data based on the partition key that you define. Before you configure a partition, ensure that you select the partition key on the Fields tab of the Source or Target transformation in a JDBC V2 mapping. If the imported table already has a primary key defined, the cluster application uses it as the default partition key. Before you import, ensure that the table has only one partition key.
After you import, you can change the partition key on the Fields tab. From the Options list, select Edit Metadata and then select the partition key. Do not define more than one partition key for the source or target.
The default number of partitions is 1. If you specify 1 partition for the target, the cluster application uses the number of partitions that you specified for the source as the target partition number.
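A minimal sketch with a hypothetical schema and table: the table is created with a primary key, which the cluster application picks up as the default partition key when you import the object:
CREATE TABLE sales.orders (
    order_id integer PRIMARY KEY,
    region varchar(10),
    amount numeric(10,2)
);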
The following table lists the Aurora PostgreSQL data types and indicates whether each type supports partitioning:
Data Type       Partitioning Supported
Smallint        Yes
Integer         Yes
Bigint          Yes
Decimal         Yes
Numeric         Yes
Real            Yes
Double          Yes
Smallserial     Yes
Serial          Yes
Bigserial       Yes
Char            -
Char(n)         -
Varchar         -
Varchar(n)      -
Text            -
Bytes           -
Boolean         -
Date            -
Time            -
Timestamp       -
Bit             -
Rules and guidelines for JDBC V2 objects in mappings
Consider the following rules and guidelines for JDBC V2 objects in mappings:
• If you change the schema name in the connection, the updated schema name does not reflect in the
user interface in the existing mapping object. However, the updated schema reflects at runtime.
• In a mapping that uses the Create Target option, you can't override the schema name and table
name.
Import objects
When you import a JDBC V2 object and search for special characters, the search results do not display
the objects.
Data types
• When you preview the data for time(4), time(5) and time(6) data types, the data is truncated beyond
precision 3.
• Do not specify a filter in a mapping for the Azure SQL Database source that contains the Datetime
data type.
• When you read or write data that contains time(4), time(5), and time(6) data types, the data is
truncated beyond precision 3.
• When you write data that contains the Timestamp data type to a PostgreSQL or Azure SQL Database
target using the Create New at Runtime option, the Timestamp data type value is appended with the
time zone.
• When you create a new Azure SQL or PostgreSQL Database target at runtime and the source data contains the Time data type, the Secure Agent writes the date and time values up to the microsecond.
SQL query
Ensure that the list of selected columns, the data types, and the order of the columns in the query match the columns, data types, and order in which they appear in the source object.
Rules and guidelines for JDBC V2 objects in mappings configured in advanced mode
Consider the following rules and guidelines for JDBC V2 objects in mappings configured in advanced mode:
• When you import multiple source objects, ensure that the table and column names do not contain
Unicode characters.
• Ensure that the source or target table names and field names do not contain special characters.
• When you import a JDBC V2 object and search for special characters, the search results do not
display the objects.
• The mapping fails when the target table name contains double quotes.
• If you change the schema name in the connection, the updated schema name does not reflect in the
user interface in the existing mapping object. However, the updated schema reflects at runtime.
• When you join multiple Azure SQL Database tables, ensure that the table names are different across the schemas.
Data types
• When you preview the data for time(4), time(5) and time(6) data types, the data is truncated beyond
precision 3.
• Do not specify a filter in a mapping for the Azure SQL Database source that contains the Datetime
data type.
• When you read or write data that contains time(4), time(5), and time(6) data types, the data is
truncated beyond precision 3.
• When you write data that contains the Timestamp data type to a PostgreSQL or Azure SQL Database
target using the Create New at Runtime option, the Timestamp data type value is appended with the
time zone.
• When you create a new Azure SQL or PostgreSQL Database target at runtime and the source data contains the Time data type, the Secure Agent writes the date and time values up to the microsecond.
• When you read from or write to an Oracle table that contains the Number data type, even though the
precision does not exceed the maximum allowed value of 38, the mapping adds additional digits to
the data. To avoid issues with the Number data type precision, select the Apply Custom Schema
advanced property in the Source or Target transformations, based on the operation that you
configure.
Parameterization
• When you configure the field mapping and the source or target object is parameterized, set the field mapping to Automap. If you set the field mapping to Manual, the mapping does not run in advanced mode.
• You can't parameterize multiple JDBC V2 source objects.
Target operations
• The number of fields in the source and target must be identical when you configure the update mode
in the Target transformation.
• To perform an update, upsert, delete, or data driven operation on the target, you must specify a
column in the Update Columns field even though you define a primary key in the target table.
• When you specify a column in the Update Columns field, the Secure Agent inserts rows that contain
duplicate data and also updates or upserts data in the target table when both of the following
conditions are true:
- The source table contains columns that have duplicate data.
Object metadata
• To retain the same metadata in the target table after you perform an update, upsert, delete, or data
driven operation on the target, select the Truncate Target option in the Target transformation.
Partitioning
When you configure partitioning for a source or target object that contains an Oracle synonym and the synonym contains a primary key, select the partition key on the Fields tab of the Source or Target transformation based on the operation that you configure.
Data type reference
When Data Integration writes to a JDBC V2 target, the Secure Agent converts the transformation data type to the corresponding JDBC V2 data type.
JDBC V2 and transformation data types for Create New at Runtime option
The following table lists the transformation data types that Data Integration supports and the corresponding Aurora PostgreSQL data types when you use the Create New at Runtime option:
Transformation Data Type    Aurora PostgreSQL Data Type
double                      float
varbinary                   bytea
binary                      bytea
tinyint                     integer
time                        timestamp
smallint                    integer
numeric                     numeric
real                        float
float                       float
char                        varchar
nchar                       varchar
varchar                     varchar
nvarchar                    varchar
timestamp                   timestamp
boolean                     varchar
decimal                     numeric
bigint                      integer
integer                     integer
The following table lists the transformation data types that Data Integration supports and the corresponding Azure SQL Database data types when you use the Create New at Runtime option:
Transformation Data Type    Azure SQL Database Data Type
date                        datetime2(7)
binary                      varbinary
tinyint                     int
time                        datetime2(7)
smallint                    int
numeric                     decimal
char                        nvarchar
nchar                       nvarchar
varchar                     nvarchar
nvarchar                    nvarchar
real                        float
float                       float
timestamp                   datetime2(7)
boolean                     varchar
decimal                     decimal
bigint                      bigint
integer                     int
Rules and guidelines for data types
Consider the following rules when you import data types:
• When you import data from Aurora PostgreSQL that contains the Varchar data type and the data size is less than 100 MB, the original precision of the Varchar data type is retained. However, if the data size is more than 100 MB, the Secure Agent imports the Varchar data type with a default precision of 4000.
• Consider the following rules when you use the Create New at Runtime option to write to an Azure SQL Database target, illustrated in the sketch after this list:
- When the precision of the char(n), nchar(n), varchar(n), and nvarchar(n) data types is more than 4000, the Secure Agent writes these data types as ntext. If the precision is less than 4000, the Secure Agent writes these data types as nvarchar.
- When the precision of the binary(n) and varbinary(n) data types is more than 8000, the Secure Agent writes these data types as image. If the precision is less than 8000, the Secure Agent writes these data types as varbinary.
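A minimal sketch, with hypothetical table and column names, of what the Secure Agent would create for string fields on either side of the 4000-character threshold:
CREATE TABLE dbo.notes_target (
    note_id bigint,
    short_note nvarchar(2000),  -- precision under 4000 maps to nvarchar
    long_note ntext             -- precision over 4000 maps to ntext
);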