HTTP Adapter Sender Side


HTTP adapter, sender side:

We will generate the address and provide it to the sender system. It's like giving someone our address: whoever has our address can send us a courier.

This endpoint (/address) will be appended to the CPI tenant URL after deployment, and that URL will be shared with the sender system.

At runtime, the XSD holds (validates) the XML.

Message Mapping –

 Source – XSD
 Target – WSDL (web service)

The WSDL will have a number of operations, and you choose which operation you want to call.

HTTP will accept any data: XML, JSON, plain text, etc.

Only for SOAP web services do we use the SOAP adapter; in any other case we use HTTP.

You can download the WSDL definition.

The receiver/target system's server address needs to be given in the adapter configuration on the receiver side.

Headers and Properties can be used in any local integration process in an iflow.
Let me give you an example :
Suppose I have two integration flows (IF1, IF2), where IF2 is connected to IF1 via Process Direct.
Now, properties in IF1 cannot be accessed in IF2, because the scope of a property is within the iflow (IF1), whereas headers created in IF1 can be accessed in IF2 because a header has global scope.
----------------------------------------------------------------------------------------
Scope
The scope of a header is beyond the integration flow, while the scope of a property is only within the
integration flow
Header
Used to transfer information to the receiver system. Headers are part of a message and are propagated
to a receiver.
Property
Used for internal information within the integration flow. Properties are not transferred to a receiver,
but they last for the entire duration of an exchange
Header vs. Property in Content Modifier
If the information needs to be transferred to the receiver system, then 'Header' should be used in the Content Modifier. If the information is internal to the iflow, then 'Property' should be used.
-------------------------------------------------------------------------------------------
Header and Property are both named key-value pairs. However, based on the purpose, the decision
needs to be taken whether to use Header/Property. If the information needs to be transferred to the
receiver system, then ‘Header’ should be used in Content Modifier. If the information is internal to
Iflow, then ‘Property’ should be used. The property lasts for the entire duration of an exchange but it is
not transferred to a receiver.

For example, Property is best suited if we are trying to fetch Product Id from incoming payload and
store it temporarily to use it in a later step within the integration flow.
A header is like an HTTP header, where we can store small attributes and pass them along with the message. It is part of the message, and hence the scope is global in the sense that any step can see the header, along with the receiver. My doubt is: if we have multiple sub-processes in an iFlow, like a local integration process, can the header variables be seen by all of them?
2. A property is like a container that is internal to a process, and it can't be sent to the receiver. But what is the scope of a property in an iflow?

A property created in a local integration process can be used in the main integration process.
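To make the scope rule concrete, here is a minimal Groovy script sketch (Groovy being CPI's own scripting step type); the name ProductId is invented for illustration:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // a header travels with the message (to the receiver, and across Process Direct)
        message.setHeader("ProductId", "P-100")
        // a property lives only inside this iflow's exchange
        message.setProperty("ProductId", "P-100")
        def fromHeader   = message.getHeaders()["ProductId"]    // visible in IF2 as well
        def fromProperty = message.getProperties()["ProductId"] // gone once this iflow ends
        return message
    }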
---------------------------------------------------------------------------------------------------------------------------------

How is Content Enricher different from Request-Reply? Both are synchronous and wait for a response. Request-Reply doesn't have any algorithm, while the enricher has the Enrich and Combine aggregation algorithms.

Answer:

In the case of Request-Reply, the response from the external system replaces the current payload. This
means that if you need both the old payload and the response from the external system, you need to
store the old payload in e.g. an exchange property before making the external call.

In the case of Content Enricher, the response from the external system is merged into the current
payload, as per the step configuration. An example of this could be a lookup of e.g. customer details.
The returned information is added to a customer element in the current payload, thereby enriching it.
Content Enricher (synchronous):

Aggregation algorithms: Combine and Enrich

There are two modes: one is Combine mode, the other is Enrich mode. When you use Combine mode, the original message and the lookup message obtained from the external call are combined into a single enhanced payload.

Another difference: with Content Enricher, the connector is drawn from the receiver towards the Content Enricher step; with Request-Reply, the connector is drawn towards the receiver.

Request-Reply (synchronous): HTTP, OData, JDBC


Sometimes a value (for example, the last successful run) must survive beyond a single execution of a particular IFlow. For this purpose, we can use a variable.
We can also use the runtime property SAP_PollEnrichMessageFound to check a file-exists condition in a Looping Process Call step.
-------------------------------------------------------------------------------------------------------------------------------------------
Share Data Across Two Integration Flows with Data Store and Variables in CPI

If you store values in headers or properties in your iflow, those values will be gone when the iflow
finishes. If you need to store values longer than that, you can store them in a variable using the Write
Variables step. In other words, they're a way to persist data.

A global variable can be accessed by any iflow, but a local variable can only be accessed by the iflow that
wrote it.

You can see all the variables you've stored in the Operations view => Manage Stores => Variables.

Variables are similar to Data Stores, really. But variables store scalar values (i.e. one number, one
timestamp etc.) whereas Data Stores contain complete payloads (e.g. an XML or JSON document).

Please note that while there's a Write Variables step, there's no Read Variables step. To fetch the value of a variable into a property or header, you use a Content Modifier with the type set to either Local Variable or Global Variable.

Variables can be local or global. Local variables are only visible and accessible within one IFlow, while
global variables can be used by multiple IFlows.

There are scenarios where you might want to use a variable stored in one iFlow and call it from another
process in CPI or to query OData based on the last successful run from another IFlow. Write Variables is
one such functionality provided by SAP in CPI/HCI.
Scenario: Save the last Successful run date of my IFLOW1 and check the last successful run in my next
IFLOW2
Creation of a variable: in our IFlow, we have to include a 'Write Variables' shape to do this. Using a simple Camel expression, as shown below, we can store values in variables. Please note that we do not need to create the variable by going into the Operations view first.

For example, we may wish to store Last Successful run Date of a particular IFlow (integration flow). This
could be required because we use this date in another IFlow, or another execution of the same IFlow,
like fetching data based on this date.
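As a sketch (the variable name LastSuccessfulRun is invented), the Write Variables step can capture the current timestamp with a Camel simple expression, and the other IFlow reads it back with a Content Modifier as described above:

    Write Variables step:
      Name:  LastSuccessfulRun    (tick Global Scope if IFLOW2 must read it)
      Type:  Expression
      Value: ${date:now:yyyy-MM-dd'T'HH:mm:ss}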

Some may argue that we can just look into the tenant's Message Processing Logs to find the last execution date, right? And if that's the case, why do we even need to store this data? But not every execution is a successful one, and thus the last successful run date cannot be fetched directly from the Message Processing Logs. As another example, let's say that there are multiple paths of execution within an IFlow, and the path taken is decided on the basis of the input data.

Here are some things to keep in mind about variables in SAP CI:
 A variable expires after 400 days, starting when the integration flow that contains the variable is first
successfully processed. The retention period continues to extend after each successful processing.
 Using an expired or nonexistent variable in an integration flow will result in a runtime message
processing error.
 To share data across different steps of an integration flow, you can define a variable as local. However,
it's recommended to use Exchange properties instead of variables, as tenant space is limited and shared
with other tenant data

-------------------------------------------------------------------------------------------------------------------------------------------
- What is the difference between fixed value mapping and value mapping?
Fixed Value Mapping: Fixed value mapping is very useful when you have lots of conditions on one source field during mapping; to avoid IfThenElse logic or a UDF, we go for fixed value mapping.

Example: if your source field contains 01, 02, 03, ... 12 and you want the result as JAN, FEB, MAR, ... DEC.

Advantage: Using a fixed value table, you can see the result immediately in the mapping.

Disadvantage: If you have used the same fixed value mapping in several places in your mapping, then in case of any change to the fixed values you have to make the change in several places. So, in short, fixed value mapping is not good from a maintenance point of view.

Value Mapping: Value mapping works the same way as fixed value mapping, except that you only get the result at runtime. And you have to define the value mapping in the Integration Directory.

Advantage: If you have used the same value mapping in several places in your mapping, then in case of any change to the values you don't have to change your mapping; just make the change in the value mapping in the Integration Directory, that's it.

Disadvantage: you can't see the result immediately in the mapping. Results can be seen only at runtime.

--------------------------------------------------------

Fixed Value Mapping:

When you have a finite set of values which are static, i.e. not going to change over time, you use fixed value mapping.

You hard-code these values inside the message mapping, and in case of any change you need to modify the iflow and redeploy. This usually triggers regression testing from a change management perspective.

This requires technical resources to be available for any modification to the values.

Ex: Gender can be stored in a fixed value mapping.

Value Mapping:

When you have a large/small set of values which are going to change (addition or modification of existing values) over a period of time, you use value mapping.
There is no need to change the iflow; at runtime the values are read from the value mapping artifact.

Any change request for addition or modification of values can be done by a non-technical person as well, since it has a separate UI which is much simpler to use.

Ex: PayComponent types or disability type codes.

Fixed values –

The corresponding source and target values are specified in the mapping in the IR (right-click -> FixValues).

You can test this function in the mapping Test tab.

Value mapping –

This table is maintained in the ID (Integration Directory).

Only runtime testing is possible, as the values are picked up at runtime.

Splitter tips :

When a message is split (as configured in a Splitter step of an integration flow), the Camel headers listed
below are generated every time the runtime finishes splitting an Exchange. You have several options for
accessing these Camel headers at runtime. For example, suppose that you are configuring an integration
flow with a Splitter step before an SFTP receiver adapter. If you enter the string
split_${exchangeId}_Index${property.CamelSplitIndex} for File Name, the file name of the generated file on the
SFTP server contains the property CamelSplitIndex. This property contains the information on the
number of split Exchanges induced by the Splitter step.

 CamelSplitIndex: provides a counter for split items that increases for each Exchange that is split (starts from 0).
 CamelSplitSize: provides the total number of split items (if you are using stream-based splitting, this header is only provided for the last item, i.e. for the completed Exchange).
 CamelSplitComplete: indicates whether an Exchange is the last split.
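As the ${property....} notation above suggests, these values surface as exchange properties in CPI, so a minimal Groovy script sketch can read them as well:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def props = message.getProperties()
        def index = props["CamelSplitIndex"]      // 0-based position of this split
        def last  = props["CamelSplitComplete"]   // true for the final split message
        message.setHeader("SplitInfo", "part ${index}, last=${last}")
        return message
    }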

Grouping: the size of the groups into which the composite message is split. For example, if a message has 10 nodes and grouping is defined as 2, the message is split into 5 messages with 2 nodes each.

You need to take the following important constraints into account when configuring a splitter in
a local integration process:

 Combining a splitter and a gather step works the same way as in the main process, but you must close each splitter step with a gather step

There is always one message going into the local process and one message returning from the local
process to the main process.

 If a splitter is used in a local process in combination with Gather, the message returned to the main process is the message at the end of the local process
 If a splitter is used in a local process in combination with any other step except Gather (for example Send, Request-Reply), the message returned to the main process is the message before the splitter

 You cannot use a splitter without a child element in a local process. This will raise an error during deployment


 Splitters in local processes need a child element. You can use any flow step for this, but
most do not make sense in the context of the scenario. Some steps that would be useful are
Send, Request-Reply and Gather.

Externalized Parameters

Generally, some parameters will change in your iflow based on the environment. For example,
the host name of the target system will be different for dev and production. For such cases, we
must externalize the parameters.
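For example (the parameter name TargetHost is invented), an externalized value is referenced in the adapter configuration with the {{...}} token and then set per environment in the iflow's Configure dialog, without editing the iflow itself:

    Address: https://{{TargetHost}}/orders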

9. Router

The Router is similar to an if-then-else condition in programming. Only the first match will be executed,
so the order of the routes is important here.
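For example (field name and values invented), a route condition can be an XPath expression for XML payloads or a Camel simple expression on headers/properties:

    Route 1 (Expression Type XML):      //orderType = 'RUSH'
    Route 2 (Expression Type Non-XML):  ${header.orderType} = 'STANDARD'
    Default route: no condition; taken when nothing else matches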

Similarly, if we are using JMS as a sender adapter and have put up a condition to end retries after a certain retry count, then an Escalation End is the best choice.

 Difference between Poll Enrich vs. Content Enrich

 How will the Gather know whether the last message has been reached?
 How will the Gather know how many split messages there are?

Gather will look at two properties:

 CamelSplitComplete: it is always true or false; if it is false, there are still messages to come
 CamelSplitIndex

If CamelSplitComplete is true, then Gather knows that it has to combine the messages.

Join is used in combination with Gather when you have multiple branches – that's the thumb rule.
If you are using a multicast, whether sequential or parallel, we can use a Join and then a Gather step to combine all the messages.

Class -8

 Router – there will be multiple branches, but based on the conditions, whichever is satisfied, the message will go to that branch (one default route if none of the conditions is satisfied)
 Parallel multicast – messages will go to all the branches/multiple branches independently
 Sequential multicast – messages will go to multiple branches in sequential order; if branch 1 fails, it will not go to branch 2; one branch is dependent on the other
 SFTP: will pick only one file from the directory
 In the SFTP configuration, File Name: * (if you give a star, it will pick all the files in the folder)

Cloud Connector: to send any data to an on-premise application

JDBC – we pull the data from a database server (sender and receiver side)

SFTP/FTP – we pull the file from an SFTP server (sender and receiver side)

OData – we pull the data (sender and receiver side)

Never use Cloud Connector for:

 HTTPS – action from outside; data is pushed to CPI, e.g. through Postman
 IDoc
 SOAP
 Proxy

If CPI initiates a pull request, we use SFTP, JDBC, OData, etc. And if the source pushes data to CPI, we use HTTP, SOAP, IDoc, etc.: we generate a URL and share it with the source server team, and they will consume our CPI URL. Is my understanding right? – Yes

The setup will be done by the Basis team.

JMS is for internal communication only. We cannot use JMS for an external server.
Ways to connect from S/4HANA to SAP CPI:

 IDoc
 RFC
 Proxy
 SFTP file
 Steps for outbound configuration of IDoc in S/4HANA:


 Looping Process Call: in a single call we cannot pull the entire data set from SuccessFactors, so we go for a Looping Process Call. It will iterate (re-call) the same local integration process multiple times, based on the count and condition we provide in the Looping Process Call step. You can select the local integration process you want to call.
 If you are querying SAP SuccessFactors EC using the CompoundEmployee API, how would you query all the records if the page size is set to 200 and there are a thousand records in EC? (See the sketch below.)
 Looping Process Call: calls the local integration process as long as the condition specified is true.
 More about Looping Process Call:
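A sketch for the CompoundEmployee question above: with a page size of 200 and 1,000 records, the Looping Process Call runs the local integration process 5 times. The loop condition checks the adapter's more-records indicator; the property name below is the one commonly cited for the SuccessFactors adapter, so verify it on your tenant:

    Condition Expression:      ${property.SAP_SuccessFactorsHasMoreRecords} contains 'true'
    Max Number of Iterations:  99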
General Splitter vs. Iterating Splitter

Splitter: to split a composite message into individual messages, we go for a splitter.

 Iterating Splitter: breaks the message down into individual messages without the enveloping elements (without the header element)
 General Splitter: breaks the message down into individual messages keeping the enveloping elements (including the header element)
 Request-Reply (HTTP/SOAP/OData) – it sends a request and gets the response back, so there are two legs to the call: one for the request and one for the response
 Variable:
 Q: What is the difference between a local variable and a global variable?
A: A local variable can be accessed by the same iFlow only. A global variable can be accessed by different iFlows.

 Q: How to read a local variable or a global variable?

A: Use a Content Modifier to read it into either a header or a property.
 Q: How to write a variable?
A: In the iFlow, use the 'Write Variables' step; take the value from a header/property/XPath/expression.
 Q: Is it possible for a local and a global variable to have the same name?
A: Yes, since the scope is different between local and global.
 Q: How to do delta synchronization via timestamp?
A: Use a variable to remember the last processed timestamp, so that the next scheduled run will resume from the last processed timestamp onward.
 Q: What needs to be considered when designing delta synchronization via timestamp?
A: (1) Data should be sorted by timestamp.
(2) The timestamp should be unique (e.g. only a date without a time might not work).
(3) The right date field should be used for the delta synchronization.
(4) Only update the last processed timestamp in the last step, if all processing succeeded.
(5) The timer/scheduler interval. (See the query sketch below.)
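A sketch of the query side, assuming the variable has been read into a property named lastRun and the entity has a lastModifiedDateTime field (both names invented; the exact literal syntax depends on the OData version):

    Query option: $filter=lastModifiedDateTime gt datetime'${property.lastRun}'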

 Q: What if I need to revert back to an earlier timestamp?
A: Build into the same iFlow a manualRun/adhocRun flag to set a manual timestamp and override the value in the variable.
 Q: Should I use a global variable or a local variable?
A: Use global if another iFlow needs to access the same variable. Global can behave like local, but not the other way round.

Q: What ways can be used to delete a variable?
A: Manual deletion via the 'Manage Variables' page.

Q: What are other potential uses of a variable?

A: Accessing the same variable value in different branches of a Multicast (because a property will not work there).
 Q: At the iFlow's first run, the variable is not created yet, but some initial/default value is needed for processing. How to handle this chicken-and-egg situation?
A: Use a Content Modifier to read the variable and set a default value.
-------------------------------------------------------------------------------------------------------------------------------
 Data Store:
 Q: How to write to the data store?
A: Use the Data Store Write step.

Q: What is the difference between Visibility 'Global' and 'Integration Flow'?

A: 'Global' means any iFlow can access the data store; 'Integration Flow' means only the same iFlow that wrote an entry can read it back.

Q: In Data Store Write, is it mandatory to specify an Entry ID?

A: No.

Q: What happens if you write to the data store with the same entry ID twice?

A: By default it will fail with an error. If 'Overwrite Existing Message' is selected, it will replace/update the entry.

Q: Is it only the message body that is written to the data store?

A: The body is always written. If 'Include Message Headers' is selected, the headers will be written to the data store as well.
 Q: What kind of payload format can be written to the data store?
A: No restriction; XML/JSON/text are all fine.
 Q: For Data Store Get, what happens if no entry ID is specified?
A: It will fail; the entry ID is mandatory for Data Store Get.

Q: What are the main differences between Data Store Get and Data Store Select?

A: Get fetches a single entry; Select fetches multiple.
A: Get requires an entry ID; Select has no option to enter an entry ID.
A: Get supports different data formats; Select only supports XML.

Q: After a Get or Select from the data store, what are the ways to delete the entry?
A: Use 'Delete On Completion', or a Data Store Delete by entry ID.

Q: When writing a list of records to the data store, if some record's processing fails, can the data store operation be partially successful and partially failed?
A: Yes. In new iflows, 'Transaction Handling' is 'None' by default.

Q: How to select multiple entries from the data store, process each entry independently one by one, let the successful ones through, skip the failed ones, and write to different data stores depending on success or failure?
A: Use a combination of Data Store Select, Splitter, Router, an exception sub-process, the 'Transaction Handling' setting, and 1 source data store + 2 target data stores to achieve this. This will be shown in a course lesson.

Q: What data formats are supported by the Data Store sender adapter?

A: XML, non-XML, any format is OK.

Q: What is so special about the Data Store sender adapter, compared to Data Store Get & Select?

A: The Data Store sender adapter has an auto-retry feature.

Q: Why is the Data Store sender retry considered a 'smart' retry? Describe it, please.

A: It has an 'Exponential Backoff' retry option: each retry doubles the wait time (e.g. 1 min, 2 min, 4 min, 8 min, ...).
 How to check whether a file exists on the SFTP server in SAP CPI?
 In the connectivity test for the SFTP server, you need to select the authentication type (user credentials/public key); once you provide the details, you can tick the checkbox 'Check Directory Access' and provide the directory path to see all the files in that directory.
 We need to provide the connectivity details to connect to the SFTP server (Monitor -> Connectivity Tests -> SSH tab); once the connection is successful, we are able to see the directory details along with the files.


Public key authentication:

 If the authentication type is public key, go to Monitor -> Manage Keystore and click the dropdown next to Create, then choose SSH Key
 Once you create the SSH key, you need to download the public key and share it with the SFTP server team
 Once you download this public key and send it to the SFMC team, they will assign a user to this public key
 The SFTP team adds the user to this public key; this is not our responsibility, it is just shown for your understanding
 Once the SFTP connectivity test is done, you can use it in the iflow's SFTP adapter configuration

SFTP: using the SFTP adapter, we connect to the SFTP server.

You need to have the connectivity details to connect to the SFTP server.

Authentication:

 User Credentials
 Public Key

It is not about sender or receiver: if CPI is initiating the pull/push request, then we need to use the SAP Cloud Connector, as with SFTP, JDBC, OData, etc. In case the source server pushes the data to CPI, then no SAP CC is needed, as with HTTP, SOAP, IDoc, etc.: we just generate a URL and share it with the source server team, and they will consume our CPI URL, so in this case CC is not required.

Can you please differentiate Poll Enrich and Content Enricher?

Poll Enrich pulls a file from the SFTP server using the SFTP adapter; Content Enricher enriches the XML data from an OData service using the OData/SF adapter.

In real time, can we poll more than one file using Poll Enrich in a single run? No – Poll Enrich will pull only one file in real time, too.
Purchase order data / invoice / shipment notification / product / vendor
IDoc number: using the IDoc number, we can check the status

The ABAP team will sit with the functional team and prepare the IDocs; we will use these IDoc structures.
How to create an exception sub-process for an iflow in SAP CPI
SplitByValue: simply inserts context changes, by default after each value change

Remove Context: removes all the context changes and puts the values in the same context so that we can sort them, i.e., as a single record

From every record, it takes all the values and puts them in a single record

Sort: sets the order, e.g. descending

CreateIf: creates the target node based on a certain condition. It takes true or false as input; if true, it creates the target node, otherwise it suppresses it.

When mapping a field to a structure, we need to use Remove Context.

 Field – one value in one record

 Structure – all the values come under one record

Remove Context example: (screenshot omitted)

Collapse Context: from every record it takes only the first value (1 3 5 6 in the example).

Example 1: (screenshot omitted)

Second example: 1 3 5 6 come out in the same record; the output is the same for Remove Context and Collapse Context.
Split by Value:

Generally, after Remove Context we use Split by Value.

For each value: (screenshot omitted)

For value change: it compares each value with the previous one; when the value changes, it inserts a context change before that value.

The value does not change for 2, 2, so both are in the same context.
From 3 to 4 the value changes, so 3 is in a separate context.
The next 4 and 4 do not change, so both are in the same context.
The value changes to 1, so it is in a separate context.

Input: 2 2 3 4 4 1
Output: [2 2] [3] [4 4] [1] (brackets mark the contexts)
Split by Value – each record should contain one value

Generally:

Remove Context -> Split by Value -> Collapse Context -> exists -> createIf
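A small worked example of the first two steps of that chain on one queue (values invented; | marks a context change):

    Source queue:               [ A B | B C ]
    removeContexts:             [ A B B C ]        one flat context
    splitByValue (each value):  [ A | B | B | C ]  one value per context

collapseContexts goes the other way: it keeps only the first value of each context, so [ A B | B C ] becomes [ A B ] in a single context.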

1.createIf,

2.removeContexts,

3.replaceValue,

4.Exists,

5.SplitByValue,

6.collapseContexts,

7.useOneAsMany,

8.sort,

9.sortByKey,

10.mapwithDefault,

11.formatByExample

removeContexts

Removes all higher-level contexts of a source field. In this way, you can delete all hierarchy levels and
generate a list.

replaceValue

Replaces the value I with a value that you can define in the dialog for the function properties.

exists

O = true, if the source field assigned to inbound channel I exists in the XML instance. Otherwise, false.

SplitByValue

Inserts a context change for an element.

collapseContexts

Deletes all values from all contexts from the inbound queue except for the first value. Empty contexts (=
ResultList.SUPPRESS) are replaced by empty strings. Only one queue remains, which consists of contexts
that contain just one value each. Finally, all internal context changes are deleted, so that all values
belong to one and the same context.

useOneAsMany

Replicates a value of a field occurring once to pair it as a record with the values of a field occurring more
than once.

sort

Sorts all values of the multiply-occurring inbound field I within the existing or set context. The sorting
process is stable (the order of elements that are the same is not switched) and it sorts the values in
O(n*log(n)) steps. Using the function properties, you can specify whether values are to be sorted
numerically or lexicographically (case-sensitive or non case-sensitive) and in ascending or descending
order.

sortByKey

Like sort, but with two inbound parameters to sort (key/value) pairs. The sort process can be compared
to that of a table with two columns.

● Using the first parameter, you pass key values from the first column, which are used to sort the
table. If you have classified the key values as numeric in the function properties, they must not be equal
to the constant ResultList.SUPPRESS. See also: The ResultList Object

● Using the second parameter, you pass the values from the second column of the table.

If there is a discrepancy between the number of keys and values, the mapping runtime triggers an
exception. The function returns a queue with the values sorted according to the keys.

mapWithDefault
Replaces empty contexts in the inbound queue with a default value, which you specify in the function
properties.

formatByExample

This function has two inbound queues, which must both have the same number of values. To generate
the result queue, the function takes the values from the first queue and combines them with the context
changes from the second queue.

1. remove context:

You use removeContexts () to delete all the top contexts for an element. This removes all top hierarchy
levels, so that all elements of the target queue are assigned to a root element of the source queue.

Advanced user-defined functions can import either just one context into the input arrays, or complete
queues. Make your selection by selecting or deselecting the Save Entire Queue in Cache checkbox in the
function editor.

2. split by value:

The SplitByValue() function is the counterpart to removeContexts(): Instead of deleting a context, you
can insert a context change in the source value queue. You then receive this element for each inserted
context change instead of a top node element. However, for this to be possible, the top node source
field must be assigned a top node target field and minOccurs must be >0. You can insert a context
change in the queue after each value, after each change to the value, or after each tag without a value.
Variable – Persistent

Write Variables use cases:

 Last Successful Run

 Last Modified Date
 LastSyncDate

To read the variable value, use a Content Modifier and select the source type Local Variable (or Global Variable).

SAP:

 IDoc
 RFC
 Proxy – we use either the SOAP adapter or the XI adapter

In CPI:

Consumer proxy: also called receiver proxy (ABAP proxy at the receiver side)

Provider proxy: also called sender proxy (ABAP proxy at the sender side)
If the on-premise SAP application (IDoc/RFC/Proxy) is on the sender side, then we don't need to use the Cloud Connector.

SOAP adapter:

We will generate the WSDL and give it to the ABAPer/ABAP team.

The endpoint will be given to the Basis team so that they can create a logical port in SOAMANAGER.

XI adapter:

RFC destination of type G

SXMB_ADM (transaction code): maintain the IS_URL destination that points to CPI

IDoc – to transfer data asynchronously

SAP applications: ECC, S/4, CRM

PI – ALE support

 SM59 -> RFC destination
 Port
 Logical system
 Partner profile

CPI:

 STRUST (t-code) – import the SSL certificate

 RFC -> HTTP destination (HTTP destinations are of two types, G and H); we use a type G connection
 Port
 Logical system
 Partner profile

We need to import certificates at both ends, i.e., on the ECC side and on the CPI side.

CPI :

We will create two iflows

Common iflow – for every IDoc we use the same RFC port and the same RFC destination on the SAP side. As soon as the message arrives in CPI, we route it via value mapping: if the IDoc is MATMAS it should go to this receiver, if it is ORDERS it should go to that receiver. We put these conditions in the value mapping; we identify the receiver based on source agency, target agency, IDoc type and message type, and route the messages accordingly.
IDoc Sender -> CPI -> Value Mapping -> Process Direct

Process Direct Sender -> CPI -> SFTP/REST (actual receiver)

If the IDoc is on the sender side, the control record will have a Document Number field – the IDoc number. It is created in the SAP application, and we get it from the SAP application.

If the IDoc is on the receiver side, we generate it in the SAP application and it comes back in the IDoc response header property (SAP generates it).

Process Direct – used to call one iflow from another iflow. It is an internal adapter.

When to use which adapter:

1) Use the relevant adapter based on the target system's interfaces (they have IDoc, or they have proxy, or they have a web service exposed from SOAMANAGER).

2) If standard IDocs are available in SAP S/4 or ECC, either inbound or outbound, for the particular interface, you will design your IFlow with the IDoc adapter.

3) If no standard IDocs are available, they might take a custom proxy approach; then you should go with the XI adapter.

4) If there are services exposed as web services from SOAMANAGER (e.g. SuccessFactors Employee and Org Data replication), then you go with the SOAP adapter.
Yes, it's possible. If you are sending an IDoc from CPI to ECC, ask your ECC team to enable ALEAUD, which gives the response.

 IDoc – a combination of message type and IDoc type/basic type

 RFC
 Proxy – the structure is created in the ECC/S4 HANA system using proxy tools, and the WSDL is shared with the CPI team; through this WSDL they will send the data to CPI
 SFTP file

The IDoc adapter is SOAP-based.

 RFC adapter – if we are not able to connect, we will contact the Basis team
 XI adapter

If there are no messages in SM58, you have to check SMQ1 as well.


 SAP to 3rd party or CPI – outbound IDoc
 3rd party or CPI to SAP – inbound IDoc

Outbound Configuration

Step 1 : Define logical system (LS)

 Go to T-Code : BD54

 Assign this logical system to the client.

Go to T-Code : SALE => Logical Systems => Assign Logical System to Client

Step 2 : Define Http connection to External Server (Type G)

Step 3 : Define Port

In PI, we create a tRFC port. In CPI, we create an XML HTTP port.

 Go to T-Code : WE21

 Click node : XML HTTP

 Click F7 / Create

Step 4 : Add trust for CPI in SAP ERP (import the CPI certificate)

Step 5 : Test connection from SAP ERP to SAP CPI ( Check connection HTTP to External Server)

Go to T-Code : SM59
Choose connection type G : HTTP connection to external server

Click Test Connection

Inbound Configuration:

SAP ERP settings

Step 1 : Activate the service

 Go to T-Code : SICF

 Search service with path : /sap/bc/srt/idoc


 F8 to execute.

Step 2 : Register service

Go to T-Code : SRTIDOC

Choose Register Service. Press F8 to run with the defaults.

If the service is registered, we will receive a confirmation message.

Step 3 : Run Test Service to get Endpoint.

Go to T-Code : SICF

Search service /sap/bc/srt/idoc

Right click and choose Test Service

Note the URL that comes in your browser. This is the URL where your IDoc adapter in HCI needs to point.
The URL will be of format :

http://host:port/sap/bc/srt/idoc?sap-client=<clientnumber>

Step 4 : Create Logical System

 Go to T-Code : SALE

 Create Logical System

 Assign this logical system to the client.


Step 5 : Create partner profile and add Inbound parameter

Go to T-Code : WE20

Choose partner type LS from the tree view on the left side.

Choose Create, and enter the logical system from step 4 as the Partner No.

Step 6 : Test service from POSTMAN

Case 1 : No body or wrong XML data.

Go to T-Code : SOAMANAGER

Cloud connector

 SAP Cloud Connector configuration

 Log in to SAP Cloud Connector
 Choose Cloud To On-Premise on the left side
 Create
 Back-end type: ABAP System
 Next, refer to the Test Service above to take the internal host and port

 Add resources for this host; take them from the URL used when running Test Service in the step above, in this case /sap/bc/srt/idoc
 Finally, check on SAP BTP

CPI Configuration:

Step 1 : Create a Credential Name with the user/password used to create IDocs on SAP ERP. This user has to be created on SAP with a role that can create IDocs and post documents.

Go to SAP BTP -> Integration Suite application -> Monitor -> Security Material (under Manage Security)

Fill in the user/password. This name will be used later when configuring the integration flow.

Step 2 : Design integration flow

In this integration flow, use components :


How to track SAP CPI generated IDOC in SAP ERP

Add a Content Modifier component.

Add a header as below:

Name: SapMessageId
Source Type: Header
Source Value: SAP_MessageProcessingLogID

In this integration flow, use components :

Configuration for sender HTTPS

Configuration for sender SFTP ( Solution 2)


Configuration for receiver IDOC

 (1) Address: http://<virtual host on SCC><resource on SCC>?sap-client=xxx

 (2) Proxy Type: On-Premise
 (3) Authentication: Basic
 (4) Credential Name: the credential name configured in step 1 of the CPI configuration
 Difference: Process Direct allows you to invoke other integration flows, while Process Call is for calling sub-processes within the same integration flow.

Process Direct :

Use Case:

 Scenario: Let’s say you have multiple iFlows that need to validate incoming data using the same
validation logic.
o Implementation: You can create a dedicated "Data Validation" iFlow with the validation
logic and expose it via Process Direct. Other iFlows that need to perform validation can
call this iFlow using the Process Direct adapter, rather than replicating the same logic in
each iFlow.
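For instance (the address is invented): the "Data Validation" iFlow starts with a ProcessDirect sender adapter whose Address is /common/dataValidation, and each calling iFlow uses a ProcessDirect receiver adapter with the same address; the matching address string is the only thing that links the two iflows.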

Process Call: it's mainly used for breaking down a larger integration flow into smaller, reusable sub-processes. Process Call is used to call a sub-process within the same integration flow.

Use Case:

 Scenario: You have a complex iFlow that handles multiple tasks, such as data transformation,
validation, routing, and logging.
o Implementation: Use Process Call to divide the iFlow into sub-processes, each handling
a specific task. For example, one sub-process could handle transformation, another
validation, and another logging.

4) Routing concept: where and how should my messages flow?

Routing is a common pattern in integration; I suggest first learning these 3 routing patterns:

Router step: sends the message to different routes based on conditions, processes it using different logic, and optionally merges back into a single main route.

Multicast step: sends the same message to multiple routes to be processed differently, and optionally gathers back the results from all the different routes.

Splitter step: tackles the challenge of splitting a large payload into smaller ones before processing. You should understand the difference between the General and Iterating Splitter, and learn how to split by XPath, line break and token.

11) Message Mapping (graphical mapping) and the queue and context concept

If you need to support existing message mappings from standard pre-packaged content iflows, or need to migrate SAP PI/PO message mappings to SAP CPI, then there is no other choice but to understand the tricky mapping queue and context handling using node functions. It is difficult to work on complex message mappings without a good understanding of the message mapping queue and context concept.

15) Security concept in CPI

The first security concept is authentication. You should be aware of the different authentication mechanisms supported by SAP CPI, for example username/password, OAuth2 token, or client-certificate based authentication. The second concept is authorization, meaning what access is granted; for example, a developer role doesn't have admin access to manage CPI tenant settings or security artifacts. Other security concepts are message-level encryption (encrypt and decrypt) and signatures (sign and verify), which are commonly used in file-based transfers such as SFTP, but some also apply encryption to API-based HTTP calls.


5) Converters to convert the body from one format to another

You should learn the usage of the standard converters (JSON to XML, XML to JSON, CSV to XML, XML to CSV), because these are common payload formats and hence a common requirement.

2) Content Modifier to add/change/delete some data

This is a simple but important step in CPI that is used frequently. The original Camel exchange and Camel concepts can be found in this SAP Press blog. The key takeaways for practical CPI usage are:

a) Pipeline concept: CPI passes a message containing headers, properties and body from one CPI step to the next step, connected by an arrow. Multiple CPI steps can be chained together sequentially/in parallel.

b) The Content Modifier is able to add/change/delete headers, properties and the body (also number ranges and variables).

c) Generally, headers store smaller-sized data that might need to be sent to the receiver(s), e.g. as HTTP headers.

d) Generally, properties store larger-sized data, and data that needs to be stored in an earlier step of the iflow and retrieved back in a later step.
e) The body is the main payload that is sent to the receiver(s).

3) Request-Reply to make calls, and adapters for connectivity

Request-Reply simply means making a request call to a receiver and getting a reply/response back. I suggest first learning these 3 adapter types: HTTP, OData and SOAP, because the majority of use cases, especially cloud-based APIs, are believed to fall under these 3 adapter types.

6) Looping concept in CPI

If you come from an imperative/procedural programming background (e.g. Java, ABAP, C#), you might wonder how to loop in SAP CPI. What are the equivalent CPI ways to do a for/while loop, since all these CPI steps are drag-and-drop only, without coding? The answer is the CPI 'Looping Process Call' with a condition to exit the loop. You should also be aware of the OData V2 vs. V4 looping concept and leverage the built-in looping feature of the OData adapter.

7) Filter/sorting concept in CPI

The filter concept means taking only the necessary data: either the source system sends only the required data, or a CPI Filter step is used to retain only the required data. For the Filter step you will need fairly good knowledge of XPath to filter effectively. Alternatively, you can use message mapping for filtering, or groovy mapping for advanced, complex filtering.

For sorting, there is no ready-built step in CPI. You will have to resort to a custom solution: either XSLT, message mapping (not recommended, since sorting would need to be applied to all fields) or groovy mapping; see the sketch below.
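A minimal Groovy sketch of such a custom sort, assuming a flat XML body with repeating <record> nodes that hold a numeric <id> child (structure invented):

    import com.sap.gateway.ip.core.customdev.util.Message
    import groovy.xml.XmlUtil

    def Message processData(Message message) {
        def root = new XmlParser().parseText(message.getBody(String))
        // sort the repeating <record> nodes by their <id> value, numerically
        def sorted = root.record.sort { it.id.text() as int }
        root.children().clear()
        sorted.each { root.append(it) }
        message.setBody(XmlUtil.serialize(root))
        return message
    }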

8) Enrich/Lookup concept in CPI

There will be cases you might need get different payloads from multiple sources or multiple calls, then
only enrich the main payload with data from lookup payload. You should understand default capability
and limitation of Content-Enricher step, and good to know ways to do advanced data enrichment using
groovy mapping, since message mapping(multi-mapping) that involved multiple messages normally will
be relatively much complex to develop and maintain.

10) Groovy scripting, capable of tackling complex mapping (and much else)

Since Groovy is a general-purpose programming language, it is very flexible and powerful; basically, you can use a Groovy script for a lot of tasks (not only mapping), as long as you are able to code the solution in Groovy and it is supported in the CPI environment. For example, use groovy mapping to do source-to-target field mapping between data formats such as XML, JSON, CSV, IDoc and OData batch. Groovy scripting is also extensible, because you are able to import Java JAR libraries that contain the features you need, for example a CSV-processing JAR library.
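A tiny sketch of such a groovy mapping, here JSON to XML (the field names are invented):

    import com.sap.gateway.ip.core.customdev.util.Message
    import groovy.json.JsonSlurper
    import groovy.xml.MarkupBuilder

    def Message processData(Message message) {
        // parse the incoming JSON body
        def json = new JsonSlurper().parseText(message.getBody(String))
        def writer = new StringWriter()
        // build the target XML structure field by field
        new MarkupBuilder(writer).Order {
            OrderId(json.orderId)
            CustomerName(json.customer?.name)
        }
        message.setBody(writer.toString())
        return message
    }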
13) Persistence/Variable/Data Store mean CPI keep data for later usage

For normal integration flow processing, all header, properties and body temporarily hold in CPI during
runtime will not able to retrieve them back again after iFlow processing end. This
persistence/variable/Data store concept is asking CPI to ‘remember’ by storing in SAP CPI. Generally,
global/local variable is for storing single value, while Data Store is to store list of value/payloads under
same data store name.

14) Exception handling concept in CPI

When an error happens during CPI message processing, we can either do nothing and let the message fail in CPI with a red error, or add an exception sub-process. Ideally, you should at least be able to get back the error message and the error stack trace, and then decide how to handle errors, e.g. send an alert email, store the error message on an SFTP server, or design an advanced re-processing mechanism using a data store.
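A minimal Groovy sketch for the first part of that, usable inside an exception sub-process; CamelExceptionCaught is the standard Camel property under which the caught exception is exposed:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // inside an Exception Subprocess, the caught exception is available
        // as the Camel exchange property CamelExceptionCaught
        def ex = message.getProperties()["CamelExceptionCaught"]
        if (ex != null) {
            message.setProperty("errorMessage", ex.getMessage())
        }
        return message
    }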

We have three environments: dev, QA and prod. First we develop in dev and do our unit testing; then we download the iflow and import it into the QA environment. Most of the end-to-end testing is done here, involving the third party and the functional team. Once your end-to-end testing is successful, you need business approval to move it to production: you raise a transport request, and once it is approved, in the same way you download the iflow from QA, import it into prod, and make the necessary changes to the channel configuration. Then just monitor the iflow in CPI to see whether messages are processing successfully or not.

For every change you make after moving to prod, save it as a version.

We have 300+ interfaces in production

Gather the requirements, build the interface, perform the hypercare support.

Phase-wise: wave 1, wave 2, wave 3.

Say the client provided some IDocs and said they failed to reach the receiver system. How do you reprocess them from CPI when we don't have the IDoc numbers to see why they failed? Any help greatly appreciated.

Answer:

 We need to check in SM58 whether any IDoc is stuck

 We need to develop our iflow in such a way that we can see the IDoc number on the monitoring page
 If they provided the IDoc number and timestamp, then you can check the failure in CPI and share your analysis with the client
 You can't resend the IDoc from CPI; they need to retrigger it
 For resending messages you don't have any direct option in CPI, but you can provide a workaround
 The same question applies when the client provides order numbers instead of IDoc numbers

Use Poll Enrich with SFTP Adapter

In general, there are three main use cases for the Poll Enrich pattern:

 You want to enrich the message payload with additional data retrieved from a file on the sftp
server.
 You want to poll from an sftp server triggered by an external trigger, for example triggered via
HTTP call.

 You need to poll from an sftp server but want to define the configuration in the sftp adapter
dynamically, for example from a partner directory.
