HTTP Adaptor Sender Side
We will generate the address (endpoint) and provide it to the sender system; once the sender has our address, it can deliver messages to it, much like a courier delivering to an address.
This endpoint /address will be appended to the CPI tenant URL after deployment, and that URL will be shared with the sender system.
Message Mapping –
Source – XSD
Target – WSDL (web service)
A WSDL will have a number of operations; you choose which operation you want to call.
HTTP will accept any data, such as XML, JSON, plain text, etc.
Only for SOAP web services do we use the SOAP adapter; in all other cases we use HTTP.
The receiver/target system's server address needs to be given in the adapter configuration on the receiver side.
Headers and Properties can be used in any local integration process in an iflow.
Let me give you an example:
Suppose I have two integration flows (IF1, IF2), where IF2 is connected to IF1 via Process Direct.
Now properties set in IF1 cannot be accessed in IF2, because the scope of a property is the iflow (IF1),
whereas headers created in IF1 can be accessed in IF2, because headers have global scope.
----------------------------------------------------------------------------------------
Scope
The scope of a header is beyond the integration flow, while the scope of a property is only within the
integration flow
Header
Used to transfer information to the receiver system. Headers are part of a message and are propagated
to a receiver.
Property
Used for internal information within the integration flow. Properties are not transferred to a receiver,
but they last for the entire duration of an exchange
Header vs Property in Content Modifier: if the information needs to be transferred to the receiver system, then 'Header' should be used in the Content Modifier. If the information is internal to the iflow, then 'Property' should be used.
-------------------------------------------------------------------------------------------
Header and Property are both named key-value pairs. However, based on the purpose, the decision
needs to be taken whether to use Header/Property. If the information needs to be transferred to the
receiver system, then ‘Header’ should be used in Content Modifier. If the information is internal to
Iflow, then ‘Property’ should be used. The property lasts for the entire duration of an exchange but it is
not transferred to a receiver.
For example, Property is best suited if we are trying to fetch Product Id from incoming payload and
store it temporarily to use it in a later step within the integration flow.
A header is like an HTTP header where we can store small attributes and pass them along with the message. It is part of the message, and hence the scope is global in the sense that any step, as well as the receiver, can see the header. Q: If we have multiple sub-processes in an iFlow, such as local integration processes, can the header variables be seen by all of them?
2. A property is like a container that is internal to a process and cannot be sent to the receiver. But what is the scope of a property in an iflow?
A: A property created in a local integration process can be used in the main integration process.
---------------------------------------------------------------------------------------------------------------------------------
How is a Content Enricher different from Request-Reply? Both are synchronous and wait for a response. Request-Reply doesn't have any aggregation algorithm, while the enricher has the Enrich and Combine algorithms.
Answer:
In the case of Request-Reply, the response from the external system replaces the current payload. This
means that if you need both the old payload and the response from the external system, you need to
store the old payload in e.g. an exchange property before making the external call.
In the case of Content Enricher, the response from the external system is merged into the current
payload, as per the step configuration. An example of this could be a lookup of e.g. customer details.
The returned information is added to a customer element in the current payload, thereby enriching it.
Content Enricher (synchronous):
There are two modes: Combine mode and Enrich mode. When you use Combine mode, the original message and the lookup message obtained from the external call are combined into a single enhanced payload.
Another difference: for the Content Enricher, the connector is drawn from the receiver towards the Content Enricher step; for Request-Reply, the connector is drawn towards the receiver.
If you store values in headers or properties in your iflow, those values will be gone when the iflow
finishes. If you need to store values longer than that, you can store them in a variable using the Write
Variables step. In other words, they're a way to persist data.
A global variable can be accessed by any iflow, but a local variable can only be accessed by the iflow that
wrote it.
You can see all the variables you've stored in the Operations view => Manage Stores => Variables.
Variables are similar to Data Stores, really. But variables store scalar values (i.e. one number, one
timestamp etc.) whereas Data Stores contain complete payloads (e.g. an XML or JSON document).
Please note that while there's a Write Variables step, there's no Read Variables step. To fetch the value of a variable into a property or header, you use a Content Modifier with the type set to either Local Variable or Global Variable.
Variables can be local or global. Local variables are only visible and accessible within one IFlow, while
global variables can be used by multiple IFlows.
There are scenarios where you might want to use a variable stored in one iFlow and call it from another
process in CPI or to query OData based on the last successful run from another IFlow. Write Variables is
one such functionality provided by SAP in CPI/HCI.
Scenario: Save the last Successful run date of my IFLOW1 and check the last successful run in my next
IFLOW2
Creation of a variable: in our IFlow, we have to include a 'Write Variables' shape to do this. Using a simple Camel expression, we can store values in variables. Please note that we did not need to create the variable in the Operations view first.
For example, we may wish to store Last Successful run Date of a particular IFlow (integration flow). This
could be required because we use this date in another IFlow, or another execution of the same IFlow,
like fetching data based on this date.
Some may argue that we can just look into the tenant's Message Processing logs to find the last execution date, right? And if that's the case, why do we even need to store this data? But not every execution is a successful one, and thus the Last Successful Run Date cannot be fetched directly from the Message Processing logs. As another example, let's say that there are multiple paths of execution within an IFlow, and the path taken is decided on the basis of the input data.
Here are some things to keep in mind about variables in SAP CI:
A variable expires after 400 days, starting when the integration flow that contains the variable is first
successfully processed. The retention period continues to extend after each successful processing.
Using an expired or nonexistent variable in an integration flow will result in a runtime message
processing error.
To share data across different steps of an integration flow, you can define a variable as local. However, it's recommended to use exchange properties instead of variables in that case, as tenant space is limited and shared with other tenant data.
-------------------------------------------------------------------------------------------------------------------------------------------
What is the difference between Fixed Value mapping and Value Mapping?
Fixed Value Mapping: Fixed Value is very useful when you have lots of conditions on one source field during mapping; to avoid IfThenElse logic or a UDF, we go for Fixed Value mapping.
Example: if your source field contains 01, 02, 03 ... 12 and you want the result as JAN, FEB, MAR ... DEC.
Advantage: using a fixed value table you can see the result immediately in the mapping.
Disadvantage: if you have used the same fixed value mapping in several places in your mapping, then in case of any change to the fixed values you have to make the change in several places. So, in short, fixed value mapping is not good from a maintenance point of view.
Value Mapping: Value Mapping works in the same way as fixed value mapping, except that you only get the result at runtime, and you have to define the value mapping in the Integration Directory (in CPI, a separate Value Mapping artifact).
Advantage: if you have used the same value mapping in several places in your mapping, then in case of any changes to the values you don't have to change your mapping; just make the changes in the value mapping, that's it.
Disadvantage: you can't see the result immediately in the mapping. Results can be seen only at runtime.
--------------------------------------------------------
Fixed Value mapping:
When you have a finite set of values that are static and not going to change over time, you use Fixed Value mapping.
You hard-code these values inside the message mapping, so in case of any change you need to modify the iflow and a redeployment is required. This usually triggers regression testing from a change management perspective.
Value Mapping:
When you have a large or small set of values that will change over time (addition or modification of existing values), you use Value Mapping.
There is no need to change the iflow; at runtime these values are read from the Value Mapping artifact.
Any change request for adding or modifying values can also be handled by a non-technical person, since it has a separate UI which is very simple to use.
FixValues: the corresponding source and target values are specified directly in the mapping (right-click the field assignment and choose FixValues).
Value Mapping: the value pairs are maintained separately, outside the message mapping.
Splitter tips :
When a message is split (as configured in a Splitter step of an integration flow), the Camel headers listed below are generated every time the runtime finishes splitting an Exchange. You have several options for accessing these Camel headers at runtime. For example, suppose that you are configuring an integration flow with a Splitter step before an SFTP receiver adapter. If you enter the string split_${exchangeId}_Index${property.CamelSplitIndex} for File Name, the file name of the generated file on the SFTP server contains the property CamelSplitIndex. This property contains the counter of the split Exchanges induced by the Splitter step.
CamelSplitIndex: provides a counter for split items that increases for each Exchange that is split (starts from 0).
CamelSplitSize: provides the total number of split items (if you are using stream-based splitting, this header is only provided for the last item, in other words, for the completed Exchange).
CamelSplitComplete: indicates whether an Exchange is the last split.
Grouping: the size of the groups into which the composite message is to be split. For example, if a message has 10 nodes and grouping is defined as 2, the message is split into 5 messages with 2 nodes each.
You need to take the following important constraints into account when configuring a splitter in a local integration process:
Combining a splitter and a gather step works in the same way as in the main process, but you must close each splitter step with a gather step.
There is always one message going into the local process and one message returning from the local process to the main process.
If a splitter is used in a local process in combination with Gather, the message returned to the main process is the message at the end of the local process.
If a splitter is used in a local process in combination with any other steps except Gather (for example Send, Request-Reply), the message returned to the main process is the message before the splitter.
You cannot use a splitter without a child element in a local process; this will raise an error during deployment.
Splitters in local processes need a child element. You can use any flow step for this, but most do not make sense in the context of the scenario. Some steps that would be useful are Send, Request-Reply and Gather.
Externalized Parameters
Generally, some parameters will change in your iflow based on the environment. For example,
the host name of the target system will be different for dev and production. For such cases, we
must externalize the parameters.
9. Router
A Router is similar to an if/then/else condition in programming. Only the first matching route will be executed, so the order of the routes is important here.
Similarly, if we are using JMS as a sender adapter and have put up a condition to end retries after a certain retry count, an Escalation End is the best choice in that case.
Difference between Poll Enrich and Content Enricher.
How will the Gather step know when the last message has been reached, and how will it know how many split messages there are?
CamelSplitComplete: it is always true or false; if it is false, there are still messages to come.
CamelSplitIndex: the counter of the current split message.
If CamelSplitComplete is true, the Gather step knows that it has to combine the messages.
Join is used in combination with Gather when you have multiple branches — that is the thumb rule.
If you are using a multicast (sequential or parallel), we can use a Join step and then a Gather step to combine all the messages.
Class 8
Router – there will be multiple branches, but based on the conditions, the message will go to whichever branch is satisfied (with one default route if none of the conditions is satisfied).
Parallel Multicast – messages will go to all the branches/multiple branches independently.
Sequential Multicast – messages will go to the multiple branches in sequential order; if branch 1 fails, the message will not go to branch 2, so one branch is dependent on the other.
SFTP: will pick only one file from the directory.
In the SFTP configuration, File Name: * (if you give a star, it will pick all the files in the folder).
Cloud Connector: used to send any data to an on-premise application.
JDBC – we pull the data from a database server (sender and receiver side).
SFTP/FTP – we pull files from an SFTP server (sender and receiver side).
HTTPS – action from outside: we push the data to CPI, for example through Postman.
IDoc
SOAP
Proxy
If CPI is initiating a pull request, we use SFTP, JDBC, OData, etc. And if the source system pushes data to CPI, we use HTTP, SOAP, IDoc, etc.: we generate a URL and share it with the source server team, and they consume our CPI URL. Is my understanding right? – Yes.
JMS is only for internal communication; we cannot use JMS with an external server.
Ways to connect from S4 Hana to SAP CPI :
Idoc
RFC
Proxy
SFTP File
Steps for Outbound Configurations of IDoc in S4HANA:
Looping Process Call: in a single call we cannot pull the entire data set from SuccessFactors, so we go for a Looping Process Call. It will iterate, i.e. re-call the same local integration process multiple times, based on the maximum iteration count and the condition you provide in the Looping Process Call. You can select the local integration process you want to call.
If you are querying SAP SuccessFactors EC using the CompoundEmployee API, how would you query all the records if the page size is set to 200 and there are a thousand records in EC?
Looping Process Call: Call the Local Integration Process till Condition specified is true.
More about Looping Process Call:
General Splitter vs Iterating Splitter
Splitter: to split a composite message into individual messages, we go for a splitter.
Iterating Splitter: breaks the message down into individual messages without the encapsulating elements (without the header element).
General Splitter: breaks the message down into individual messages while keeping the encapsulating elements (including the header element).
Request-Reply (HTTP/SOAP/OData) – it sends a request and waits for the response, so the connection is used twice: once for the request and once for the response.
Variables:
Q: What is the difference between a local variable and a global variable?
A: A local variable can be accessed only by the same iFlow. A global variable can be accessed by different iFlows.
Q: After a Get or Select from a Data Store, what are the ways to delete the entry?
A: Use 'Delete on Completion' or a Data Store Delete by entry ID.
Q: When writing a list of records to a Data Store, if some record processing fails, will the Data Store operation be partially successful and partially failed?
A: Yes. Note that for new iflows, 'Transaction Handling' is set to None by default.
Q: How do you select multiple entries from a Data Store, process each entry independently one by one, process the successful ones, skip the failed ones, and further write to different Data Stores depending on success or failure?
A: Use a combination of Data Store Select, Splitter, Router, an exception sub-process, the 'Transaction Handling' setting, and 1 source Data Store + 2 target Data Stores to achieve this. This will be shown in a course lesson.
If the authentication type is Public Key, go to Monitor -> Manage Keystore and click on the dropdown next to Create to generate an SSH key.
Once you create the SSH key, you need to download the public key and share it with the SFTP server team.
Once you download this public key and send it to the SFMC team, they will assign a user to this public key.
The SFTP team will add the user to this public key; this is not our responsibility, it is just shown here for your understanding.
Once you have done the SFTP connectivity test, you can use it in the iflow's SFTP adapter configuration.
Authentication:
User Credentials
Public Key
It is not a question of sender or receiver: if CPI is initiating the pull/push request, then we need to use the SAP Cloud Connector (for example with SFTP, JDBC, OData, etc.). In case the source server pushes the data to CPI (for example via HTTP, SOAP, IDoc, etc.), no Cloud Connector is needed; we just generate a URL and share it with the source server team, and they consume our CPI URL, so in that case the Cloud Connector is not required.
Poll Enrich is used to pull a file from the SFTP server using the SFTP adapter; the Content Enricher is used to enrich the XML data from an OData service using the OData/SF adapter.
In real time as well, we can't poll more than one file using Poll Enrich in a single run; Poll Enrich will pull only one file in real time too.
Purchase order data /invoice /shipment notification/product/vendor
IDoc number: using the IDoc number we can check the status.
The ABAP team will sit with the functional team and prepare the IDocs; we will use these IDoc structures.
How to create an exception sub-process for an iflow in SAP CPI.
SplitByValue: simply inserts context changes, by default after each value change.
RemoveContexts: removes all the context changes and puts the values into the same context so that we can sort them, i.e. a single record. From every record it takes all the values and puts them into a single record.
CreateIf: creates a target node based on a certain condition. It takes true or false as input; if true it creates the target node, otherwise it suppresses it.
CollapseContexts: from every record it takes only the first value (1356 in the class example).
Example 1: (mapping example shown in class)
Second example: 1356 comes out in the same record; the output is the same for removeContexts and collapseContexts.
Split by value: (input/output example shown in class)
Split by value – each record should contain one value.
Generally
1.createIf,
2.removeContexts,
3.replaceValue,
4.Exists,
5.SplitByValue,
6.collapseContexts,
7.useOneAsMany,
8.sort,
9.sortByKey,
10.mapwithDefault,
11.formatByExample
removeContexts
Removes all higher-level contexts of a source field. In this way, you can delete all hierarchy levels and
generate a list.
replaceValue
Replaces the value I with a value that you can define in the dialog for the function properties.
exists
O = true, if the source field assigned to inbound channel I exists in the XML instance. Otherwise, false.
SplitByValue
collapseContexts
Deletes all values from all contexts from the inbound queue except for the first value. Empty contexts (=
ResultList.SUPPRESS) are replaced by empty strings. Only one queue remains, which consists of contexts
that contain just one value each. Finally, all internal context changes are deleted, so that all values
belong to one and the same context.
useOneAsMany
Replicates a value of a field occurring once to pair it as a record with the values of a field occurring more
than once.
sort
Sorts all values of the multiply-occurring inbound field I within the existing or set context. The sorting
process is stable (the order of elements that are the same is not switched) and it sorts the values in
O(n*log(n)) steps. Using the function properties, you can specify whether values are to be sorted
numerically or lexicographically (case-sensitive or non case-sensitive) and in ascending or descending
order.
sortByKey
Like sort, but with two inbound parameters to sort (key/value) pairs. The sort process can be compared
to that of a table with two columns.
● Using the first parameter, you pass key values from the first column, which are used to sort the
table. If you have classified the key values as numeric in the function properties, they must not be equal
to the constant ResultList.SUPPRESS. See also: The ResultList Object
● Using the second parameter, you pass the values from the second column of the table.
If there is a discrepancy between the number of keys and values, the mapping runtime triggers an
exception. The function returns a queue with the values sorted according to the keys.
mapWithDefault
Replaces empty contexts in the inbound queue with a default value, which you specify in the function
properties.
formatByExample
This function has two inbound queues, which must both have the same number of values. To generate
the result queue, the function takes the values from the first queue and combines them with the context
changes from the second queue.
1. remove context:
You use removeContexts () to delete all the top contexts for an element. This removes all top hierarchy
levels, so that all elements of the target queue are assigned to a root element of the source queue.
Advanced user-defined functions can import either just one context into the input arrays, or complete
queues. Make your selection by selecting or deselecting the Save Entire Queue in Cache checkbox in the
function editor.
2. split by value:
The SplitByValue() function is the counterpart to removeContexts(): Instead of deleting a context, you
can insert a context change in the source value queue. You then receive this element for each inserted
context change instead of a top node element. However, for this to be possible, the top node source
field must be assigned a top node target field and minOccurs must be >0. You can insert a context
change in the queue after each value, after each change to the value, or after each tag without a value.
Variables are persistent.
To read a variable's value, use a Content Modifier and select the Source Type as Local Variable (or Global Variable).
SAP :
IDOC
RFC
Proxy – we use either the SOAP adapter or the XI adapter.
In CPI:
Consumer Proxy: also called a receiver proxy (ABAP proxy on the receiver side).
Provider Proxy: also called a sender proxy (ABAP proxy on the sender side).
If the on-premise SAP application (IDoc/RFC/Proxy) is on the sender side, then we don't need to use the Cloud Connector.
SOAP adapter:
The endpoint will be given to the Basis team so that they can create a logical port in SOAMANAGER.
XI adapter:
RFC destination of type G
PI – ALE support
SM59 -> RFC destination
Port
Logical system
Partner profile
CPI:
We need to import certificates at both ends, i.e. on the ECC side and the CPI side.
CPI:
Common iflow – for every IDoc we use the same RFC port and the same RFC destination on the SAP side. As soon as the message reaches CPI, we decide the receiver via Value Mapping: if the IDoc is MATMAS it should go to this receiver, if it is ORDERS it should go to that receiver — we put these conditions in the Value Mapping. We identify the receiver based on source agency, target agency, IDoc type and message type, and route the messages accordingly.
IDoc Sender -> CPI -> Value Mapping -> Process Direct
If the IDoc is on the sender side, the control record will contain the document number field (the IDoc number); it is created in the SAP application and we get it from the SAP application.
If the IDoc is on the receiver side, the IDoc number is generated in the receiving SAP application and comes back in the IDoc response header property (SAP generates it).
Process Direct – to call one iflow from another iflow; it is an internal adapter.
1) You can use the relevant adapter based on what the target system exposes (whether they have IDocs, a proxy, or a web service exposed from SOAMANAGER).
2) If standard IDocs are available in SAP S/4 or ECC, either inbound or outbound, for the particular interface, you will design your IFlow with the IDoc adapter.
3) If there are no standard IDocs available, they might take a custom proxy approach, and then you should go with the XI adapter.
4) If there are services exposed as web services from SOAMANAGER (for example SuccessFactors Employee and Org Data replication), then you will go with the SOAP adapter.
Yes, it's possible. If you are sending an IDoc from CPI to ECC, ask your ECC team to enable ALEAUD, which returns the response (acknowledgement).
RFC adapter – if we are not able to connect, we will contact the Basis team.
XI adapter
Outbound Configuration
Go to T-Code : BD54
Go to T-Code : SALE => Logical Systems => Assign Logical System to Client
Go to T-Code : WE21
Click F7 / Create
Step 5 : Test connection from SAP ERP to SAP CPI ( Check connection HTTP to External Server)
Go to T-Code : SM59
Choose connection type G : HTTP connection to external server
Inbound Configuration:
Go to T-Code : SICF
Go to T-Code : SRTIDOC
Go to T-Code : SICF
Note the URL that comes in your browser. This is the URL where your IDoc adapter in HCI needs to point.
The URL will be of format :
http://host:port/sap/bc/srt/idoc?sap-client=<clientnumber>
Go to T-Code : SALE
Go to T-Code : WE20
Choose Create and enter the information from step 4 – the logical system as the Partner No.
Go to T-Code : SOAMANAGER
Cloud connector
Add the resources of this host. Use the URL obtained when running Test Service in the step above; in this case it is /sap/bc/srt/idoc.
Finally, check on SAP BTP
CPI Configuration:
Step 1: Create a credential name with the user/password used to create IDocs on SAP ERP. This user has to be created on SAP with a role that can create IDocs and post documents.
Go to SAP BTP -> Integration Suite application -> Monitor -> Security Material (under the Manage Security group).
Fill in the user/password. This credential name will be used later when configuring the integration flow.
Name: SapMessageId
Process Direct :
Use Case:
Scenario: let's say you have multiple iFlows that need to validate incoming data using the same validation logic.
Implementation: you can create a dedicated "Data Validation" iFlow with the validation logic and expose it via Process Direct. Other iFlows that need to perform validation can call this iFlow using the Process Direct adapter, rather than replicating the same logic in each iFlow.
Process Call: it's mainly used for breaking a larger integration flow down into smaller, reusable sub-processes. A Process Call is used to call a sub-process within the same integration flow.
Use Case:
Scenario: you have a complex iFlow that handles multiple tasks, such as data transformation, validation, routing, and logging.
Implementation: use Process Call to divide the iFlow into sub-processes, each handling a specific task. For example, one sub-process could handle transformation, another validation, and another logging.
4) Routing concept: where and how should my messages flow?
Routing is common in integration; I suggest first learning these three routing patterns:
Router step: sends the message to different routes based on conditions, processes it using different logic, and optionally merges back into a single main route.
Multicast step: sends the same message to multiple routes to be processed differently, and optionally gathers back the results from all the different routes.
Splitter step: tackles the challenge of splitting a large payload into smaller ones before processing. You should understand the difference between the General and Iterating Splitter, and learn how to split by XPath, Line Break and Token.
There will be cases where you need to get different payloads from multiple sources or multiple calls, and only then enrich the main payload with data from the lookup payload. You should understand the default capability and limitations of the Content Enricher step, and it is good to know ways to do advanced data enrichment using Groovy mapping, since message mapping (multi-mapping) that involves multiple messages is normally much more complex to develop and maintain.
11) Message Mapping (Graphical Mapping) and Queue and Context concept
If you need to support existing message mappings from standard pre-packaged content iflows, or need to migrate SAP PI/PO message mappings to SAP CPI, then there is no choice but to understand the tricky mapping queue and context handling using node functions. It is difficult to work on complex message mappings without a good understanding of the message mapping queue and context concept.
The first security concept is authentication. You should be aware of the different authentication mechanisms supported by SAP CPI, for example username/password, OAuth2 token, or client-certificate based authentication. The second concept is authorization, meaning what access is granted; for example, a developer role doesn't have admin access to manage CPI tenant settings or security artifacts. Other security concepts are message-level encryption (encrypt and decrypt) and signatures (sign and verify), which are commonly used in file-based transfers such as SFTP, but some also apply encryption to API-based HTTP calls.
13) Persistence/Variable/Data Store means CPI keeps data for later usage.
For normal integration flow processing, all headers, properties and the body are held temporarily in CPI during runtime and cannot be retrieved again after iFlow processing ends. The persistence/variable/data store concept asks CPI to 'remember' data by storing it in SAP CPI. Generally, a global/local variable stores a single value, while a Data Store stores a list of values/payloads under the same data store name.
You should learn the usage of the standard converters JSON to XML, XML to JSON, CSV to XML and XML to CSV, because these are common payload formats and hence a common requirement.
This is a simple but important step in CPI that is used frequently. The original Camel exchange and Camel concepts can be read about in the SAP Press blog. The key takeaways for practical CPI usage are:
a) The pipeline concept: CPI passes a message containing headers, properties and a body from one CPI step to the next step connected by an arrow. Multiple CPI steps can be chained together sequentially or in parallel.
b) The Content Modifier is able to add/change/delete headers, properties and the body (and also number ranges and variables).
c) Generally, headers store smaller data that might need to be sent to the receiver(s), e.g. as HTTP headers.
d) Generally, properties store larger data, or data that needs to be stored in an earlier step of the iflow and retrieved at a later step.
e) The body is the main payload that can be sent to the receiver(s).
Request-Reply simply means making a request call to a receiver and getting a reply/response back. I suggest first learning these three adapter types: HTTP, OData and SOAP, because the majority of use cases, especially cloud-based APIs, are believed to fall under these three adapter types.
If you come from an imperative/procedural programming background (e.g. Java, ABAP, C#), you might be wondering how to loop in SAP CPI. What are the equivalent CPI ways to do a for/while loop, since all these CPI steps are drag-and-drop only, without coding? The answer is to use the CPI "Looping Process Call" with a condition to exit the loop. You should also be aware of the OData V2 vs V4 looping concept and leverage the built-in looping feature of the OData adapter.
The filter concept means taking only the necessary data: either the source system sends only the required data, or you use the CPI Filter step to retain only the required data. For the Filter step you will need fairly good knowledge of XPath to filter effectively. Alternatively, you can use message mapping for filtering, or Groovy mapping for advanced, complex filtering.
For sorting, there is no built-in CPI step. You will have to resort to a custom solution: either XSLT, message mapping (not recommended, because sorting would need to be applied to all fields) or Groovy mapping.
10) Groovy scripting is capable of tackling complex mapping (and many other things).
Since Groovy is a general-purpose programming language, it is very flexible and powerful; basically you can use a Groovy script for a lot of tasks (not only mapping), as long as you are able to code the solution in Groovy and it is supported in the CPI environment. For example, use Groovy mapping to do source-target field mappings between data formats such as XML, JSON, CSV, IDoc and OData batch. Groovy scripting is also extensible because you can import Java JAR libraries that contain the feature you need, for example a CSV-processing JAR library.
When an error happens during CPI message processing, we can either do nothing and let it fail in CPI with a red error, or add an exception sub-process. Ideally we should at least be able to get back the error message and the error stack trace, and then decide how to handle errors, e.g. send an alert email, store the error message on an SFTP server, or design an advanced re-processing mechanism using a data store.
We have three environments: dev, QA and prod. First we develop in dev and do our unit testing; then we download the iflow and import it into the QA environment. End-to-end testing is mostly done here, involving the third party and the functional team. Once your end-to-end testing is successful, you need business approval to move it to production, so you raise a transport request. Once it is approved, in the same way you download the iflow from QA and import it into prod, make the necessary changes to the channel configuration, and then just monitor in CPI whether the iflow is processing messages successfully or not.
For every change you make after moving to prod, save it as a new version.
Gather the requirements, build the interface, and perform the hypercare support.
Say the client provided some IDocs and said they failed to reach the receiver system. How do you reprocess them from CPI, given that we don't have the IDoc numbers to check why they failed? Any help greatly appreciated.
Answer:
In general, there are three main use cases for the Poll Enrich pattern:
You want to enrich the message payload with additional data retrieved from a file on the SFTP server.
You want to poll from an SFTP server triggered by an external trigger, for example via an HTTP call.
You need to poll from an SFTP server but want to define the configuration in the SFTP adapter dynamically, for example from a Partner Directory.