Informatica Interview Questions


Question
What is the filename which you need to configure in UNIX
while installing Informatica?
Answer
pmserver.cfg
Answer
In Informatica 7, under $PMRootDir there is a utility
(script) called pmconfig; through it we can configure the
Informatica server.
Question
What happens if you increase the commit interval, and what
happens if you decrease it?
Answer
If you increase your commit interval to ~25,000 rows, the
session will run faster, but if your session fails at the
24,000th record you will not have any data in your target.

When you decrease your commit interval to ~10,000 rows, the
session will be slower compared to the previous case, but if
the session fails at the 24,000th record you will lose only
4,000 records.
Answer
If the commit interval is set to a high value, performance
will be high. If a commit is issued for every 1,000 rows,
say, it will affect performance badly.
Question
What is hash partition?
Answer
Hash partitioning:
Use hash partitioning when we want the PowerCenter server
to distribute rows to the partitions by groups.

Eg: we need to sort items by item ID, but we don't know
how many items have a particular ID.
Question
What is the approximate size of data warehouse?
Answer
It could range somewhere between 25 and 40 terabytes.
Question
What is data quality? How can a data quality solution be
implemented into my informatica transformations, even
internationally?
Answer
Data will have special characters, symbols, nulls, zeros and
so on. You need to cleanse the data using default values,
null updates and similar techniques. In Informatica
transformations you can avoid nulls, enforce the correct
data type and length, and ensure unique data; there are many
ways to cleanse the data.
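For illustration, a minimal cleansing rule in an Expression
transformation might look like this (the port name CUST_NAME
is hypothetical):

    IIF(ISNULL(CUST_NAME), 'UNKNOWN', LTRIM(RTRIM(CUST_NAME)))

This replaces nulls with a default value and strips stray
whitespace before the row moves on to the target.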
Question
What is the difference between a view and a materialised view?
Answer
View - stores the SQL statement in the database and lets you
use it as a table. Every time you access the view, the SQL
statement executes.
Materialized view - stores the results of the SQL in table
form in the database. The SQL statement executes only once,
and after that, every time you run the query, the stored
result set is used. Pros include quick query results.
Answer
A materialized view can be used to precalculate expensive
joins and aggregates prior to execution; the result is
stored in a table in the database and can be referred to in
future. The advantage of this is an increase in performance.
A view is nothing but a stored SQL query; it does not store
data in tables.
Answer
With a view, any operation we do on the view changes the
base table and vice versa. A view has no storage allocation
of its own: if we create a view 'v1' on the emp table and
then drop the emp table, the name 'v1' will still exist in
the view list but the data will not be there; if we create
the emp table once again, it will automatically link to the
'v1' view.

But in the case of materialized views, changes made to the
table cannot be seen in the materialized view. Materialized
views have separate storage allocation; if we drop the
table, the materialized view will still be there. A
materialized view is a read-only view; DDL operations are
not possible on it. Materialized views are used in data
warehousing.
Answer
Views contain a query; whenever you execute a view, it reads
from the base table.
Whereas with materialized views, loading or replication
takes place only once, which gives you better query
performance.

A materialized view can be refreshed:
1. on commit
2. on demand
(complete, never, fast, force)
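As a minimal Oracle sketch of the difference (table and
column names are hypothetical):

    -- Ordinary view: the query re-executes on every access.
    CREATE VIEW v_sales_summary AS
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region;

    -- Materialized view: the result set is stored and
    -- refreshed explicitly (here, in full, on demand).
    CREATE MATERIALIZED VIEW mv_sales_summary
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region;

Queries against mv_sales_summary read the stored rows
instead of re-aggregating the sales table each time.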
Question
How are data models used in practice?
Answer
What exactly do you want to know? Do you mean you want to
learn data modeling from scratch?
Question
What is an MDDB? What is the difference between MDDBs and
RDBMSs?
Answer
In an MDDB we can analyze the data in multiple ways; here we
can create power cubes.
But in an RDBMS we can create cross-tab reports and simple
list reports.
Answer
MDDB stands for Multi-Dimensional Database.
MDDB: an MDDB views data multidimensionally, i.e., through
various dimensions at a time, with the help of cubes built
from dimensions, and stores data multidimensionally, i.e.,
in power cubes. In a power cube, each axis is a dimension
and each member of a dimension is a column.
In an MDDB, at a glance we can see the dimensions and the
data present in the dimensions.
RDBMS: an RDBMS views data two-dimensionally and stores data
two-dimensionally, i.e., in rows and columns in a table.
In an RDBMS we can just see the rows and columns, and only
after issuing a SELECT over them can you see the data.

Storage-wise:
Suppose I have 3 dimensions and 9 rows of data. Then
in an MDDB: it takes 9 cells to store the data;
in an RDBMS: it takes 27 cells to store the data.
Question
What are active and passive transformations?
Answer
In an active transformation the number of output rows can
differ from (often be lower than) the number of input rows.
Answer
A transformation is said to be active when the number of
input rows to the transformation is not equal to the number
of output rows from the transformation:
input rows != output rows; it can be either fewer or more.

When the number of input rows equals the number of output
rows, it is called a passive transformation.
Answer
When a row enters a transformation, Informatica assigns it a
row number. If this number can change for a row, it is an
active transformation. In other words, if the nth row coming
in goes out as the nth row, the transformation is passive.

Sorter is an active transformation even though it never
changes the total row count; it changes only the order (and
therefore the row numbers)!
Answer
In the Sorter there is also a "Distinct" option, which can
change the number of output records. So it is classified as
active.
Answer
An active transformation is a transformation which may or
may not output the same number of rows as the number of rows
it received as input.

Ex: Aggregator - it will filter duplicates based on the
group-by ports. But that doesn't mean it has to filter the
input rows every time: if it gets a distinct set of group-by
values in all the input rows, it will output all the input
rows. In this case it cannot filter any row, and the number
of input rows and output rows will be exactly the same.

Sorter - a Sorter is basically used to arrange the incoming
records for easier processing (for example, an aggregator
need not read all the rows to find a maximum sales value if
you sort by sales ascending and group by sales in the
aggregator connected to the sorter). In this case it will
not filter any rows; it will just rearrange the order of
rows on the specific port (sales) you specify.
On the other hand, the Sorter transformation can also be set
to output only the distinct rows, where it filters the
duplicate rows and sends the unique set. Here it has to
filter the duplicates, which in turn changes the row count,
i.e., input vs. output number of rows.

If you take a Filter, it doesn't have to filter rows all the
time just because it is named 'FILTER'; it will just apply
the filter condition on all the input rows and pass those
records that qualify the condition. If you set a TRUE
condition in the filter, it will pass all the rows it gets
from the input.

So, based on the scenarios above, I would like to reiterate
that an active transformation is one provided with the
ability to filter rows from the input based on particular
criteria. It is never compulsory that it actually changes
the number of input and output rows.

Answer
An active transformation can change the number of rows that
pass through it, e.g., a Filter transformation passes only
those rows from the source to the target that meet the
filter condition.

A passive transformation cannot change the number of rows
that pass through it, e.g., an Expression transformation
performs calculations on the data and passes all the rows
from source to target.
Answer
I totally agree with Sai Krishna Karri that whether a
transformation actually changes the row count depends on the
conditions we provide.
Question
Why do we use DSS database for OLAP tools?
Answer
DSS stands for Decision Support System.
It is used to support effective decision making by
middle-level management through OLAP systems; that is,
reports are generated effectively in both two-dimensional
and multidimensional form.
Question
What is update strategy and what are the options for update
strategy?
Answer
We can use update strategy at two different levels:
1) Within a session: when you configure a session, you can
give instructions to treat a) all rows as insert, b) all
rows as update, c) data driven (use instructions coded into
the session mapping to flag rows for different database
operations).

2) Within a mapping: you can flag rows for insert, update,
delete or reject.
Don't forget to set "Treat source rows as" to Data Driven in
the session properties if you are flagging rows within the
mapping.
Answer
Update strategy is used for flagging records using the
options insert/update/delete/reject.
Update strategy is used at two levels.
Session level: at session level we use the options insert,
update as update, update as insert, update else insert.
Mapping level: at mapping level we use the options
DD_INSERT, DD_UPDATE, DD_REJECT, DD_DELETE.
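For illustration, a minimal data-driven flagging expression
inside an Update Strategy transformation might look like
this (the ports CUST_ID and NEW_FLAG are hypothetical):

    IIF(ISNULL(CUST_ID), DD_REJECT,
        IIF(NEW_FLAG = 'Y', DD_INSERT, DD_UPDATE))

Rows with a null key are rejected, new rows are flagged for
insert, and everything else is flagged for update; the
session must have "Treat source rows as" set to Data Driven
for these flags to take effect.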
Question
What is data merging, data cleansing and sampling?
Answer
Data Cleansing: A two step process of detection and
correction of errors in a data set.
Answer
Data merging: multiple detailed values are summarised into a
single summarised value.
Data cleansing: eliminating the inconsistent data.
Sampling: the process of arbitrarily reading the data from a
group of records.
Answer
Data merging: the process of integrating data with similar
source, structure and type.
Data cleansing: the process of identifying and changing
inconsistencies and inaccuracies.
Data sampling: the process of arbitrarily reading the data
from a group of records.
Answer
Data cleansing: the process of identifying and changing
inconsistencies and inaccuracies.
Data merging: the process of integrating multiple input
sources into a single output with similar structure and
data type.
Question
What is staging area?
Answer
This is a temporary area where the data is kept for
transformation and cleansing.
Answer
The staging area is used to integrate data from various
heterogeneous sources. The advantage is recoverability: if a
load session fails, we can get the data from the staging
area.
Answer
It is the temporary storage area where reconciliation of
data is possible.
You can extract the data from different source systems and
transform it; you can aggregate and cleanse the data.
The staging area reduces the burden on the source system.
Answer
The staging area is where data transformations take place.
It is a temporary storage area. Transformations like data
scrubbing, data cleansing, data aggregation and data merging
take place in the staging area; there, data is transformed
from its source format into the required business format and
then loaded into the target.
Answer
A place where data is processed before entering the
warehouse
Question
What is the difference between a connected lookup and an
unconnected lookup?
Answer
An unconnected lookup has only one output (return) port at a
time, and it can be faster than a connected lookup.
Answer
A connected lookup returns multiple values to other
transformations, whereas an unconnected lookup returns a
single value. If the condition does not match, a connected
lookup returns user-defined default values, whereas an
unconnected lookup returns null values. A connected lookup
supports dynamic as well as static caches, whereas an
unconnected lookup supports a static cache only.
Answer
1. A connected lookup takes multiple inputs and gives
multiple outputs,
whereas an unconnected lookup takes multiple inputs but
gives a single output.

2. In a connected lookup, input, output and lookup ports are
available,
whereas an unconnected lookup has input, output and lookup
ports as well as a return port.

3. A connected lookup uses a dynamic or static cache,
whereas an unconnected lookup uses only a static cache.
Answer
1) A connected Lookup transformation receives input values
directly from another transformation in the pipeline.
An unconnected Lookup transformation receives input values
from the result of a :LKP expression in another
transformation. You can call the Lookup transformation more
than once in a mapping.

2) In a connected lookup, the Integration Service passes
return values from the query to the next transformation.
In an unconnected lookup, the Integration Service returns
one value through the return port of the Lookup
transformation.

3) A connected lookup uses a dynamic or static cache,
whereas an unconnected lookup uses a static cache.
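For illustration, an unconnected lookup is invoked from an
Expression or Update Strategy transformation with a :LKP
expression along these lines (lookup and port names are
hypothetical):

    :LKP.LKP_GET_CUST_KEY(IN_CUST_ID)

The call passes IN_CUST_ID into the lookup condition and
receives the single value designated as the return port.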
Question
What is a lookup function? What is the default transformation
for the lookup function?
Answer
A lookup compares the source to the target; it is used to
decide whether to update or insert new rows in the target.

The default transformation for the lookup function is the
Source Qualifier.
Answer
Lookup is a transformation in Informatica which is mainly
used for obtaining the "key" values from the dimensions.
A lookup can be connected or unconnected. If it is
unconnected, then it can be used as a function, but it can
return only one value,
whereas a connected lookup can return more than one value.
A lookup involves:
1. <Input column/value>
2. <Output column(s)>
3. <Condition>

Question
What is a query panel?
Answer
An editor used to define database queries, where users
manipulate BusinessObjects rather than the tables and
columns of the relational database.
Question
How can you define a transformation? What are the different
types of transformations in Informatica?
Answer
A transformation defines how the source table columns you
want are changed on the way to the target. Transformations
are of various types:
1. Source Qualifier transformation
2. Lookup transformation
3. Expression transformation
4. Rank transformation
5. Sequence Generator transformation ... etc.
Answer
A transformation is defined as transforming the source data
according to the requirements of the target system.

There are two types: 1) active transformations, 2) passive
transformations.

Answer
A transformation is a repository object which transforms the
data as per the requirements for loading into the target.
Transformations are of two types:
1) Active
2) Passive
Answer
A transformation is a repository object that generates,
modifies, or passes data.
Transformations in a mapping represent the operations the
Informatica Server performs on the data.
A transformation can be
connected, unconnected, active or passive.
Answer
A transformation is a repository object which loads the data
into the target as per the business rule.

Transformations are of two types:
1) Active
2) Passive
Question
What is hash partition?
Answer
Hash partitioning maps data to partitions based on a hash
function; the physical location of the data depends on the
outcome of the hash function. Data is distributed between
the partitions. This is typically used where ranges are not
appropriate.
Answer
The value of a hash function determines membership in a
partition. Assuming there are four partitions, the hash
function could return a value from 0 to 3.
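As a minimal Oracle sketch of hash partitioning (table and
column names are hypothetical):

    CREATE TABLE items_h (
        item_id   NUMBER,
        item_name VARCHAR2(100)
    )
    PARTITION BY HASH (item_id)
    PARTITIONS 4;

Each row's partition is chosen by hashing item_id, spreading
the rows roughly evenly across the four partitions.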
Question
Which kind of index is preferred in DWH?
Answer
We have bitmap indexes, B-tree indexes, function-based
indexes, reverse-key indexes and composite indexes. We
generally use bitmap indexes in a DWH.
Answer
A B-tree index is created to load the data into the fact
table, and a bitmap index is used where the cardinality of
the column is very low, i.e., in the denormalised (OLAP)
tables of a DWH.
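For illustration, a hedged Oracle sketch of a bitmap index
on a low-cardinality dimension column (names hypothetical):

    CREATE BITMAP INDEX idx_cust_gender
        ON customer_dim (gender);

Bitmap indexes suit DWH columns with few distinct values;
B-tree indexes remain the usual choice for high-cardinality
OLTP columns.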
Question
What is the PowerPlay plug-in?
Answer
The PowerPlay plug-in is used to connect to external
databases like IBM MQ, Sybase, Informix, etc.
Question
What is the difference between macros and prompts?
Answer
A macro is a condition or total defined at the project
level, but a prompt is a dynamic condition (at the
repository level) implemented at the report level as a
particular filter condition.
Question
What is IQD file?
Answer
IQD: Impromptu Query Definition file (report).
Using a catalog (in Impromptu Administrator), create a
report and save that report in IQD format; that file can
then be used as a (metadata) source in Cognos PowerPlay
Transformer.
Question
What is the difference between PowerPlay Transformer and
PowerPlay Reports?
Answer
PowerPlay Transformer generates cubes and reports, whereas
PowerPlay Reports is only for viewing existing reports
(.ppi files).
Question
What is the capacity of power cube?
Answer
I am not sure; to my knowledge it is 4 or 5 GB.
Question
What is a conformed dimension?
Answer
A dimension which can be shared with multiple fact tables.

OR

A dimension which can be used by one or more fact tables is
called a conformed dimension.
Answer
In the schema, if any dimension table is shared by more than
one fact table, it is called a "conformed dimension".
Answer
In addition to that, it is consistent across the data marts.
Question
What is factless fact schema?
Answer
The fact which does not contain any facts or mesaurables.
Ex:the fact table which is used to store the students
information is he came to school or not cannot have weather
they are attending all classes.
which is a measure
Question
What is meta data and system catalog?
Answer
To my knowledge, metadata is data about data, i.e., the
structure of the database objects like tables, indexes, etc.

No idea about the system catalog.


Answer
Metadata is data about data. PowerCenter 6.x / 7.x creates
155 / 183 tables when Informatica is installed on an
environment. These tables contain all information about
transactions / changes done in Informatica.

Eg: we can find all the mappings / transformations, last
modified time, last successful run time, etc. These tables
usually have the naming convention OPB_...

These objects / tables are created under the schema pointing
to the database connection which is provided when
PowerCenter is installed.
Question
What is operational data source (ODS)?
Answer
This is the database used to capture daily business
activities; it is a normalized database.
Answer
The ODS captures day-to-day transactions, and you can
generate reports on the ODS.
Answer
The ODS is nothing but a staging area in which you can keep
OLTP-type data, i.e., your day-to-day transactional data.
It is fully normalized.
Answer
An ODS can be described as a snapshot of the OLTP system. It
acts as a source for the EDW (Enterprise Data Warehouse).
The ODS is more normalised than the EDW. Also, the ODS
doesn't store any history. Normally the dimension tables
remain at the ODS (SCD types can be applied in the ODS),
whereas the facts flow through to the EDW.
More importantly, client report requirements determine what
to have, or not have, in the ODS or EDW.
Question
What are the advantages of denormalized data?
Answer
Denormalized data takes more space but gives better
performance when you run SQL against it.
Answer
Denormalised data:

A table storing denormalised data occupies more space,
because a lot of duplicate information creeps in when we
denormalise a table.

As fewer join conditions are required to retrieve data from
one or more denormalised tables, performance will be fast.

A DWH environment prefers denormalised data structures.

Question
After dragging the ports of three sources (SQL Server,
Oracle, Informix) to a single source qualifier, can you map
these three ports directly to the target?
Answer
No. Unless and until you join those three ports in the
source qualifier, you cannot map them directly.
Answer
To Swetha:
I am afraid these 3 ports would have had to be joined
already in the source qualifier, and therefore could map
directly to a target - why not?
Answer
Here they are heterogeneous sources; how can those be joined
in a source qualifier? I think the question is not clear:
you cannot drag three heterogeneous sources into a single
source qualifier.
Answer
We cannot join heterogeneous sources in the SQ, so we cannot
map those ports to the target directly. If we want to, we
must use the Joiner transformation.
Answer
You cannot join heterogeneous sources in a SQ. For that you
need to use a Joiner. For example, if you have n
heterogeneous sources then you must use n-1 Joiners.
Question
If I make modifications to my table in the back end, do they
reflect in the Informatica warehouse, Mapping Designer or
Source Analyzer?
Answer
No. Informatica is not at all concerned with the back-end
database. It displays all the information that is stored in
the repository. If you want back-end changes reflected in
the Informatica screens, you have to import again from the
back end into Informatica over a valid connection, and you
have to replace the existing definitions with the imported
ones.
Answer
The Informatica repository will already have the imported
tables; if changes are made at the back end to the concerned
tables, the changes will not be reflected until you
re-import the tables from the database.
Question
Under what circumstances does the Informatica server result
in an unrecoverable session?
Answer
The source qualifier transformation does not use sorted
ports.

You change the partition information after the initial
session fails.

Perform Recovery is disabled in the Informatica server
configuration.

The sources or targets change after the initial session
fails.

The mapping contains a Sequence Generator or Normalizer
transformation.

A concurrent batch contains multiple failed sessions.

Question
How can you complete unrecoverable sessions?
Answer
Under certain circumstances, when a session does not
complete, you need to truncate the target tables and run the
session from the beginning. Run the session from the
beginning when the Informatica Server cannot run recovery or
when running recovery might result in inconsistent data.
Question
How can you recover the session in sequential batches?
Answer
If you configure a session in a sequential batch to stop on
failure, you can run recovery starting with the failed
session. The Informatica Server completes the session and
then runs the rest of the batch. Use the Perform Recovery
session property.

To recover sessions in sequential batches configured to stop
on failure:

1. In the Server Manager, open the session property sheet.
2. On the Log Files tab, select Perform Recovery, and click OK.
3. Run the session.
4. After the batch completes, open the session property sheet.
5. Clear Perform Recovery, and click OK.

If you do not clear Perform Recovery, the next time you run
the session, the Informatica Server attempts to recover the
previous session.

If you do not configure a session in a sequential batch to
stop on failure, and the remaining sessions in the batch
complete, recover the failed session as a standalone session.
Question
How to recover the standalone session?
Answer
A standalone session is a session that is not nested in a
batch. If a standalone session fails, you can run recovery
using a menu command or pmcmd. These options are not
available for batched sessions.

To recover sessions using the menu:

1. In the Server Manager, highlight the session you want to
recover.
2. Select Server Requests-Stop from the menu.
3. With the failed session highlighted, select Server
Requests-Start Session in Recovery Mode from the menu.

To recover sessions using pmcmd:

1. From the command line, stop the session.
2. From the command line, start recovery.
Answer
Simply by using a menu command or pmcmd.
Question
If a session fails after loading 10,000 records into the
target, how can you load the records from the 10,001st
record when you run the session next time?
Answer
As explained above, the Informatica server has three methods
for recovering sessions. Use Perform Recovery to load the
records from where the session failed.
Answer
You can do it with Perform Recovery.
When the server runs the recovery session, the server reads
the data from the OPB_SRVR_RECOVERY table and notes the row
ID of the last row committed to the target table; the
Informatica server then reads the entire source again and
processes the data from the next row.
By default Perform Recovery is disabled, hence it won't make
any entries into the OPB_SRVR_RECOVERY table.
Answer
In the Workflow Manager you have the option: go to Workflow
> Edit and check the box "Suspend on error".
If the workflow fails after loading 10,000 records with an
error, you can correct the error and restart the workflow,
which will load starting from 10,001.
Answer
By setting the session property "Resume from last
checkpoint" you can achieve the same. Make sure that the
recovery tables PM_RECOVERY and PM_TGT_RUN_ID are created in
the target database, or that the relational connection user
has create, grant, insert and update privileges on the
target database.
Question
Explain recovering sessions.
Answer
If you stop a session or if an error causes a session to
stop, refer to the session and error logs to determine the
cause of failure. Correct the errors, and then complete the
session. The method you use to complete the session depends
on the properties of the mapping, session, and Informatica
Server configuration.

Use one of the following methods to complete the session:

Run the session again if the Informatica Server has not
issued a commit.

Truncate the target tables and run the session again if the
session is not recoverable.

Consider performing recovery if the Informatica Server has
issued at least one commit.
Question
What is the difference between the Stored Procedure
transformation and the External Procedure transformation?
Answer
In the case of a Stored Procedure transformation, the
procedure is compiled and executed in a relational data
source; you need a database connection to import the stored
procedure into your mapping. In an External Procedure
transformation, the procedure or function is executed
outside of the data source, i.e., you need to build it as a
DLL to access it in your mapping. No database connection is
needed in the case of an External Procedure transformation.
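For illustration, an unconnected Stored Procedure
transformation can be called from an expression roughly like
this (procedure and port names are hypothetical):

    :SP.GET_EXCHANGE_RATE(IN_CURRENCY, PROC_RESULT)

PROC_RESULT captures the value the database procedure
returns, which the expression can then pass downstream.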
Question
What are the scheduling options to run a session?
Answer
You can schedule a session to run at a given time or
interval, or you can run the session manually.
The different scheduling options are:

Run only on demand: the server runs the session only when
the user starts the session explicitly.

Run once: the Informatica server runs the session only once
at a specified date and time.

Run every: the Informatica server runs the session at
regular intervals, as you configure.

Customized repeat: the Informatica server runs the session
at the dates and times specified in the Repeat dialog box.
Answer
In recent versions we can't schedule a session directly; we
schedule the workflow that contains it.
Question
What is incremental aggregation?
Answer
When using incremental aggregation, you apply captured
changes in the source to aggregate calculations in a
session. If the source changes only incrementally and you
can capture changes, you can configure the session to
process only those changes. This allows the Informatica
Server to update your target incrementally, rather than
forcing it to process the entire source and recalculate the
same calculations each time you run the session.
Question
What are the new features in Informatica 5.0?
Answer
You can debug your mapping in the Mapping Designer.

You can view the workspace over the entire screen.

The Designer displays a new icon for invalid mappings in the
Navigator window.

You can use a dynamic lookup cache in a Lookup
transformation.

You can create mapping parameters or mapping variables in a
mapping or mapplet to make mappings more flexible.

You can export objects to and import objects from the
repository. When you export a repository object, the
Designer or Server Manager creates an XML file to describe
the repository metadata.

The Designer allows you to use the Router transformation to
test data for multiple conditions. The Router transformation
allows you to route groups of data to transformations or
targets.

You can use XML data as a source or target.

Server enhancements:
You can use the command-line program pmcmd to specify a
parameter file to run sessions or batches. This allows you
to change the values of session parameters, and mapping
parameters and variables, at runtime.

If you run the Informatica Server on a symmetric
multiprocessing system, you can use multiple CPUs to process
a session concurrently. You configure partitions in the
session properties based on source qualifiers. The
Informatica Server reads, transforms, and writes partitions
of data in parallel for a single session. This is available
for PowerCenter only.

The Informatica server creates two processes, the Load
Manager process and the DTM process, to run sessions.

Metadata Reporter: a web-based application which is used to
run reports against repository metadata.

You can copy sessions across folders and repositories using
the Copy Session Wizard in the Informatica Server Manager.

With new email variables, you can configure post-session
email to include information such as the mapping used during
the session.
Question
How can you work with a remote database in Informatica? Did
you work directly by using remote connections?
Answer
To work with remote datasource u need to connect it with
remote connections. But it is not preferable to work with
that remote source directly by using remote connections
.Instead u bring that source into U r local machine where
informatica server resides. If u work directly with remote
source the session performance will decreases by passing
less amount of data across the network in a particular time.
Question
What is power center repository?
Answer
The PowerCenter repository allows you to share metadata
across repositories to create a data mart domain. In a data
mart domain, you can create a single global repository to
store metadata used across an enterprise, and a number of
local repositories to share the global metadata as needed.
Question
What is Performance tuning in Informatica?
Answer
The goal of performance tuning is to optimize session
performance so sessions run during the available load window
for the Informatica Server.

Increase session performance by the following:

The performance of the Informatica Server is related to
network connections. Data generally moves across a network
at less than 1 MB per second, whereas a local disk moves
data five to twenty times faster. Thus network connections
often affect session performance, so avoid network
connections where possible.

Flat files: if your flat files are stored on a machine other
than the Informatica server, move those files to the machine
the Informatica server runs on.

Relational data sources: minimize the connections to
sources, targets and the Informatica server to improve
session performance. Moving the target database onto the
server system may improve session performance.

Staging areas: if you use staging areas, you force the
Informatica server to perform multiple data passes. Removing
staging areas may improve session performance.

You can run multiple Informatica servers against the same
repository. Distributing the session load across multiple
Informatica servers may improve session performance.

Running the Informatica server in ASCII data movement mode
improves session performance, because ASCII data movement
mode stores a character value in one byte, whereas Unicode
mode takes 2 bytes to store a character.

If a session joins multiple source tables in one Source
Qualifier, optimizing the query may improve performance.
Also, single-table SELECT statements with an ORDER BY or
GROUP BY clause may benefit from optimization such as adding
indexes.

We can improve session performance by configuring the
network packet size, which controls how much data crosses
the network at one time. To do this, go to the Server
Manager and choose Server Configure Database Connections.

If your target has key constraints and indexes, they slow
the loading of data. To improve session performance in this
case, drop the constraints and indexes before you run the
session and rebuild them after the session completes.

Running parallel sessions using concurrent batches will also
reduce the time needed to load the data, so concurrent
batches may also increase session performance.

Partitioning the session improves session performance by
creating multiple connections to sources and targets and
loading data in parallel pipelines.

In some cases, if a session contains an Aggregator
transformation, you can use incremental aggregation to
improve session performance.

Avoid transformation errors to improve session performance.

If the session contains a Lookup transformation, you can
improve session performance by enabling the lookup cache.

If your session contains a Filter transformation, create
that Filter transformation as near to the sources as
possible, or use a filter condition in the Source Qualifier.

Aggregator, Rank and Joiner transformations often decrease
session performance, because they must group data before
processing it. To improve session performance in this case,
use the sorted ports option.

Question
What are the transformations that restrict the partitioning
of sessions?
Answer
Advanced External Procedure transformation and External
Procedure transformation: these transformations contain a
check box on the Properties tab to allow partitioning.

Aggregator transformation: if you use sorted ports, you
cannot partition the associated source.

Joiner transformation: you cannot partition the master
source for a Joiner transformation.

Normalizer transformation.

XML targets.
Question
What is the difference between partitioning of relational
targets and partitioning of file targets?
Answer
If you partition a session with a relational target, the
Informatica server creates multiple connections to the
target database to write target data concurrently. If you
partition a session with a file target, the Informatica
server creates one target file for each partition. You can
configure session properties to merge these target files.
Question
How can you access the remote source into your session?
Answer
Relational source: to access a relational source situated in
a remote place, you need to configure a database connection
to the data source.

File source: to access a remote source file, you must
configure an FTP connection to the host machine before you
create the session.

Heterogeneous: when your mapping contains more than one
source type, the Server Manager creates a heterogeneous
session that displays source options for all types.
Question
What is a parameter file?
Answer
A parameter file defines the values for the parameters and
variables used in a session. A parameter file is a text file
created with a text editor such as WordPad or Notepad.

You can define the following values in a parameter file:

Mapping variables

Session parameters.
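As a minimal sketch of a parameter file (the folder, session
and parameter names here are hypothetical, and the exact
section-header format varies by version):

    [MyFolder.s_load_customers]
    $DBConnection_Source=ORA_SRC
    $InputFile_Cust=/data/in/customers.dat
    $$LastRunDate=01/01/2004

The $-prefixed entries are session parameters (a database
connection and a source file), while the $$-prefixed entry
sets a mapping parameter or variable.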
Question
What are the session parameters?
Answer
Session parameters are like mapping parameters; they
represent values you might want to change between sessions,
such as database connections or source files.

The Server Manager also allows you to create user-defined
session parameters. The following are user-defined session
parameters:

Database connections.

Source file name: use this parameter when you want to change
the name or location of a session source file between
session runs.

Target file name: use this parameter when you want to change
the name or location of a session target file between
session runs.

Reject file name: use this parameter when you want to change
the name or location of session reject files between session
runs.
Answer
Your answer is precisely correct, but I want to point out
here: if we could change a session parameter value during a
session run, then what would session variables be for? I
guess session parameter values cannot be changed during
session runs.
Question
How can you stop a batch?
Answer
By using server manager or pmcmd.
Question
Can you start a session inside a batch individually?
Answer
We can start a particular session on its own only in a
sequential batch; in a concurrent batch we can't do that.
Question
Can you start a batch within a batch?
Answer
You cannot. If you want to start a batch that resides in a
batch, create a new independent batch and copy the necessary
sessions into the new batch.
Question
In a sequential batch, can you run a session if the previous
session fails?
Answer
Yes, by setting the option "Always runs the session".
Question
What are the different options used to configure the
sequential batches?
Answer
Two options:

Run the session only if the previous session completes
successfully.

Always runs the session.
Question
What is the command used to run a batch?
Answer
pmcmd is used to start a batch.
Answer
pmcmd is a command-line program; it can be used to stop, run
and abort sessions.
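For illustration, in recent PowerCenter versions (where
batches became workflows) a run can be started from the
command line roughly like this (the service, domain, folder
and workflow names are hypothetical):

    pmcmd startworkflow -sv IntSvc -d Domain_Dev -u admin -p admin_pwd -f MyFolder wf_load_sales

Older servers used a different pmcmd argument style, so
check the syntax for the installed version.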
Question
When does the Informatica server mark a batch as failed?
Answer
If one of the sessions is configured to "run if previous
completes" and that previous session fails.
Answer
If any one of the sessions in the batch fails, then the
batch also fails.
Question
How many sessions can you create in a batch?
Answer
Any number of sessions.
Answer
I don't see a limit set for the number of tasks that you can
have in a workflow/batch, but best practice is to have fewer
tasks; this helps especially during migration.
Answer
At a time, the Informatica server can execute 30 sessions in
parallel.
Question
Can you copy the batches?
Answer
NO
Question
What is a batch? Describe the types of batches.
Answer
A grouping of sessions is known as a batch. Batches are of
two types:

Sequential: runs sessions one after the other.

Concurrent: runs sessions at the same time.

If you have sessions with source-target dependencies, you
have to go for a sequential batch to start the sessions one
after another. If you have several independent sessions, you
can use concurrent batches, which run all the sessions at
the same time.

Question
Can you copy the session to a different folder or repository?
Answer
Yes. By using the Copy Session Wizard you can copy a session
to a different folder or repository. But that target folder
or repository should contain the mapping of that session.
If the target folder or repository does not have the mapping
of the session being copied, you have to copy that mapping
first before you copy the session.
Question
What is polling?
Answer
It displays updated information about the session in the
monitor window. The monitor window displays the status of
each session when you poll the Informatica server.
Question
In which circumstances does the Informatica server create
reject files?
Answer
When it encounters DD_REJECT in an Update Strategy
transformation.

When a row violates a database constraint.

When a field in a row is truncated or overflows.


Answer
When the data in the file, say tab-separated, is incorrect.
I.e., if there is an extra tab for a set of records and this
makes text data fall under a numeric column, this violates
the datatype and the rows from the source file are rejected.
Question
What are the output files that the Informatica server
creates during a session run?
Answer
Informatica server log: the Informatica server (on UNIX)
creates a log for all status and error messages (default
name: pm.server.log). It also creates an error log for error
messages. These files are created in the Informatica home
directory.

Session log file: the Informatica server creates a session
log file for each session. It writes information about the
session into the log file, such as the initialization
process, creation of SQL commands for reader and writer
threads, errors encountered and the load summary. The amount
of detail in the session log file depends on the tracing
level that you set.

Session detail file: this file contains load statistics for
each target in the mapping. Session details include
information such as the table name and the number of rows
written or rejected. You can view this file by
double-clicking on the session in the monitor window.

Performance detail file: this file contains information
known as session performance details, which helps you see
where performance can be improved. To generate this file,
select the performance detail option in the session property
sheet.

Reject file: this file contains the rows of data that the
writer does not write to targets.

Control file: the Informatica server creates a control file
and a target file when you run a session that uses the
external loader. The control file contains information about
the target flat file, such as the data format and loading
instructions for the external loader.

Post-session email: post-session email allows you to
automatically communicate information about a session run to
designated recipients. You can create two different
messages: one if the session completes successfully, the
other if the session fails.

Indicator file: if you use a flat file as a target, you can
configure the Informatica server to create an indicator
file. For each target row, the indicator file contains a
number to indicate whether the row was marked for insert,
update, delete or reject.

Output file: if a session writes to a target file, the
Informatica server creates the target file based on the file
properties entered in the session property sheet.

Cache files: when the Informatica server creates a memory
cache, it also creates cache files. The Informatica server
creates index and data cache files for the following
transformations:

Aggregator transformation

Joiner transformation

Rank transformation

Lookup transformation
Answer
I am getting these session log files in .bin format, but I
want them in text format. How can I get that? I am using
Informatica 8.5.1 with Windows on both client and server.
Answer
Check the backward compatibility option to view the .bin log
files in readable format.
Question
What are the data movement modes in Informatica?
Answer
Datamovement modes determines how informatcia server handles
the charector data.yoU choose the datamovement in the
informatica server configuration settings.Two types of
datamovement modes avialable in informatica.

ASCII mode

Uni code mode.


Question
What are the different threads in DTM process?
Answer
Master thread: creates and manages all other threads.

Mapping thread: one mapping thread is created for each
session; it fetches session and mapping information.

Pre- and post-session threads: these are created to perform
pre- and post-session operations.

Reader thread: one thread is created for each partition of a
source; it reads data from the source.

Writer thread: it is created to load data to the target.

Transformation thread: it is created to transform data.


Question
What is DTM process?
Answer
After the Load Manager performs validations for the session,
it creates the DTM process. The DTM's job is to create and
manage the threads that carry out the session tasks. It
creates the master thread; the master thread creates and
manages all the other threads.
Answer
DTM process: the Load Manager creates one DTM process for
each session in the workflow. It performs the following
tasks:

• Reads session information from the repository.
• Expands the server, session, and mapping variables and
parameters.
• Creates the session log file.
• Validates source and target code pages.
• Verifies connection object permissions.
• Runs pre-session shell commands, stored procedures and SQL.
• Creates and runs mapping, reader, writer, and
transformation threads to extract, transform, and load data.
• Runs post-session stored procedures, SQL, and shell
commands.
• Sends post-session email.
Question
What are the tasks that the Load Manager process will do?
Answer
Manages session and batch scheduling: when you start the
Informatica server, the Load Manager launches and queries
the repository for a list of sessions configured to run on
the Informatica server. When you configure a session, the
Load Manager maintains a list of sessions and session start
times. When you start a session, the Load Manager fetches
the session information from the repository to perform
validations and verifications prior to starting the DTM
process.

Locking and reading the session: when the Informatica server
starts a session, the Load Manager locks the session in the
repository. Locking prevents you from starting the session
again and again.

Reading the parameter file: if the session uses a parameter
file, the Load Manager reads the parameter file and verifies
that the session-level parameters are declared in the file.

Verifies permissions and privileges: when the session
starts, the Load Manager checks whether or not the user has
the privileges to run the session.

Creating log files: the Load Manager creates a log file
containing the status of the session.
Question
Why do you use repository connectivity?
Answer
When you edit or schedule the session, each time the
Informatica server communicates directly with the repository
to check whether or not the session and users are valid. All
the metadata of sessions and mappings is stored in the
repository.
Answer
The repository is a relational database that stores
information, or metadata, used by the Informatica server and
client tools. The metadata includes information regarding
mappings, mapplets, sessions, batches, shortcuts, source
definitions, target definitions and much more.
Question
How does the Informatica server increase session performance
through partitioning the source?
Answer
For relational sources, the Informatica server creates
multiple connections, one for each partition of a single
source, and extracts a separate range of data over each
connection. The Informatica server reads multiple partitions
of a single source concurrently. Similarly, for loading, the
Informatica server creates multiple connections to the
target and loads partitions of data concurrently.

For XML and file sources, the Informatica server reads
multiple files concurrently. For loading the data, the
Informatica server creates a separate file for each
partition of a source file. You can choose to merge the
targets.
Question
To achieve session partitioning, what are the necessary
tasks you have to do?
Answer
Configure the session to partition source data.

Install the Informatica server on a machine with multiple
CPUs.
Question
Why do we use session partitioning in Informatica?
Answer
Partitioning improves session performance by reducing the
time needed to read the source and load the data into the
target.
Question
Which tool do you use to create and manage sessions and
batches, and to monitor and stop the Informatica server?
Answer
Informatica server manager.
Answer
Workflow Manager and Workflow Monitor. The Workflow Manager
is used to create and manage sessions and batches
(workflows). The Workflow Monitor is used to monitor
sessions and batches, and to stop them if there is an error.
Question
Define mapping and session.
Answer
Mapping: a set of source and target definitions linked by
transformation objects that define the rules for data
transformation.

Session: a set of instructions that describes how and when
to move data from sources to targets.
Answer
Mapping: when a source definition and a target definition
are connected in a sequence through an ETL flow of data,
that sequence is called a mapping.

Session: a task used to migrate the data from source to
target according to instructions given to the Informatica
server.
Answer
Mapping: a graphical representation of the data flow from
source to target, including some business transformation
logic.

Session: a set of instructions that describes how and when
to move data from sources to targets.
Question
What is metadata reporter?
Answer
It is a web-based application that enables you to run
reports against repository metadata. With the Metadata
Reporter, you can access information about your repository
without knowledge of SQL, the transformation language, or
the underlying tables in the repository.
Question
What are the new features of the Server Manager in
Informatica 5.0?
Answer
You can use command-line arguments for a session or batch.
This allows you to change the values of session parameters,
mapping parameters and mapping variables.

Parallel data processing: this feature is available for
PowerCenter only. If you use the Informatica server on an
SMP system, you can use multiple CPUs to process a session
concurrently.

Process session data using threads: the Informatica server
runs the session in two processes.
Question
What are the two types of processes that Informatica uses to
run a session?
Answer
Load Manager process: starts the session, creates the DTM
process, and sends post-session email when the session
completes.

The DTM process: creates threads to initialize the session,
read, write, and transform data, and handle pre- and
post-session operations.
Answer
Load Manager: the Load Manager is the first component to be
initialized when a session is submitted to the server. The
Load Manager manages the load on the Informatica server by
maintaining a queue of sessions, and releases the sessions
on a first-come, first-served basis. Once the Load Manager
releases a session, it initializes the DTM process. The DTM
follows the instructions coded in the session mapping; for
each session the DTM creates the reader, the writer and
shared memory.
Question
How can you recognise whether or not newly added rows in the
source get inserted in the target?
Answer
In a Type 2 mapping we have three options to recognise newly
added rows:

Version number

Flag value

Effective date range


Answer
By the flag value we can identify the newly inserted rows.
Question
What are the different types of Type 2 dimension mapping?
Answer
Type 2 Dimension/Version Data mapping: both new and changed
dimensions from the source get inserted into the target
along with a new version number, and a newly added dimension
in the source is inserted into the target with a primary
key.

Type 2 Dimension/Flag Current mapping: this mapping is also
used for slowly changing dimensions; in addition, it creates
a flag value for changed or new dimensions. The flag
indicates whether the dimension is new or newly updated.
Recent dimensions are saved with the current flag value 1,
and updated dimensions are saved with the value 0.

Type 2 Dimension/Effective Date Range mapping: this is also
one flavour of Type 2 mapping used for slowly changing
dimensions. This mapping also inserts both new and changed
dimensions into the target, and changes are tracked by the
effective date range for each version of each dimension.
Question
What are the mappings that we use for slowly changing
dimension tables?
Answer
Type1: Rows containing changes to existing dimensions are
updated in the target by overwriting the existing dimension.
In the Type 1 Dimension mapping, all rows contain current
dimension data.

Use the Type 1 Dimension mapping to update a slowly changing


dimension table when you do not need to keep any previous
versions of dimensions in the table.

Type 2: The Type 2 Dimension Data mapping inserts both new


and changed dimensions into the target. Changes are tracked
in the target table by versioning the primary key and
creating a version number for each dimension in the table.

Use the Type 2 Dimension/Version Data mapping to update a


slowly changing dimension table when you want to keep a full
history of dimension data in the table. Version numbers and
versioned primary keys track the order of changes to each
dimension.

Type 3: The Type 3 Dimension mapping filters source rows


based on user-defined comparisons and inserts only those
found to be new dimensions to the target. Rows containing
changes to existing dimensions are updated in the target.
When updating an existing dimension, the Informatica Server
saves existing data in different columns of the same row and
replaces the existing data with the updates.
Answer
Slowly changing dimensions can be classified into three
types:
1) Type 1: in this dimension we store only current data;
inserts are treated as inserts and updates are treated as
updates.
2) Type 2: in this dimension we store complete historic
data, and it can be divided into three variants:
a) Flag
b) Version
c) Date range
But in Type 2, inserts are treated as inserts and updates
are also treated as inserts.
3) Type 3: in this dimension we store one level of historic
data along with current data. Here also, inserts are treated
as inserts and updates are treated as updates.

Which to use depends upon the granularity requirements of
the organization.
Question
What are the types of mapping in the Getting Started Wizard?
Answer
Simple Pass Through mapping:

Loads a static fact or dimension table by inserting all
rows. Use this mapping when you want to drop all existing
data from your table before loading new data.

Slowly Growing Target:

Loads a slowly growing fact or dimension table by inserting
new rows. Use this mapping to load new data when existing
data does not require updates.
Question
What are the types of mapping wizards provided in
Informatica?
Answer
The Designer provides two mapping wizards to help you create
mappings quickly and easily. Both wizards are designed to
create mappings for loading and maintaining star schemas, a
series of dimensions related to a central fact table.
Getting Started Wizard. Creates mappings to load static fact
and dimension tables, as well as slowly growing dimension
tables. Slowly Changing Dimensions Wizard.
Answer
There are two mapping wizards:

1. Getting Started Wizard (simple pass-through and slowly
growing target mappings)

2. Slowly Changing Dimensions Wizard
Question
What are the options in the target session for the Update
Strategy transformation?
Answer
INSERT
UPDATE AS UPDATE
UPDATE AS INSERT
UPDATE ELSE INSERT
DELETE
TRUNCATE TARGET TABLE
Question
What is Data Driven?
Answer
The Informatica server follows instructions coded into
Update Strategy transformations within the session mapping
to determine how to flag records for insert, update, delete
or reject.

If you do not choose the Data Driven option setting, the
Informatica server ignores all Update Strategy
transformations in the mapping.
Answer
Select Data Driven when using Update Strategy
transformations; if not, the Informatica server will not
consider them and will ignore them.
Data Driven is not used when using bulk loading.
(This may be correct; please verify.)
Question
What is the default source option for the Update Strategy
transformation?
Answer
Data driven.
Question
Describe the two levels at which the update strategy can be
set.
Answer
Within a session: when you configure a session, you can
instruct the Informatica Server to either treat all records
in the same way (for example, treat all records as inserts),
or use instructions coded into the session mapping to flag
records for different database operations.

Within a mapping: within a mapping, you use the Update
Strategy transformation to flag records.
Question
What is the Update Strategy transformation?
Answer
This transformation is used to maintain the history data, or
just the most recent changes, in the target table.
Answer
Update Strategy is used to define row-level control over
DML. Informatica's default is to treat rows as inserts.
Using Update Strategy we can define:
(1) DD_INSERT
(2) DD_UPDATE
(3) DD_DELETE
(4) DD_REJECT
    Answer
update strategy transformation is used to perform DML
operations for the already data populated targets.

by default informatica server will treat source row as


INSERT.if you use update strategy transformation in your
mapping the informatica server will treat source row as
DATA DRIVEN.so the datas transferd to a target that even
populated with some data can be inserted,updated,rejected
and deleted.based upon the functions like
dd_insert,dd_update,dd_reject,dd_delete you define in the
update strategy transformation.
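As a minimal sketch (the port name LKP_CUST_KEY is
hypothetical), an Update Strategy expression for a typical
insert-else-update load might look like:

-- LKP_CUST_KEY is a hypothetical lookup return port
IIF(ISNULL(LKP_CUST_KEY), DD_INSERT, DD_UPDATE)

Rows whose key is not found by the lookup are flagged for
insert; all others are flagged for update.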
Question
What are the basic needs to join two sources in a source
qualifier?
Answer
The two sources should have a primary and foreign key
relationship.

The two sources should have matching data types.
Answer
Both sources should come from a homogeneous / same
database.
Answer
Sources should be homogeneous, though they need not have a
joining condition.
For example, if you have an Employee table and a Dept table
and the joining condition is on deptid, it is not mandatory
that you specify the joining condition. If you don't
specify the joining condition then you will get a cartesian
join.

Also, the port order should be the same as in the SQ
override select.
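As an illustration (the table and column names are
assumed), a user-defined join or SQL override in the source
qualifier might look like:

-- emp and dept are assumed sample tables
SELECT e.empno, e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno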
Question
What is the default join that source qualifier provides?
Answer
Inner equi join.
Answer
Normal join.
Question
What is the target load order?
Answer
You specify the target load order based on the source
qualifiers in a mapping. If you have multiple source
qualifiers connected to multiple targets, you can designate
the order in which the Informatica server loads data into
the targets.
Answer
When we set the target load order, we create groups, and
each group may have multiple targets.
Data is loaded concurrently into all the targets of one
group, and sequentially across the different groups, in
order.
Answer
By using the target load order, we can order the targets
(i.e. choose which target you would like to load the data
into first).

Question
What are the tasks that the source qualifier performs?
Answer
Join data originating from the same source database.

Filter records when the Informatica server reads source
data.

Specify an outer join rather than the default inner join.

Specify sorted ports.

Select only distinct values from the source.

Create a custom query to issue a special SELECT statement
for the Informatica server to read source data.
Question
What is source qualifier transformation?
Answer
When you add a relational or a flat file source definition
to a mapping, you need to connect it to a source qualifier
transformation. The source qualifier transformation
represents the records that the Informatica server reads
when it runs a session.
Question
What is the status code?
Answer
The status code provides error handling for the Informatica
server during the session. The stored procedure issues a
status code that notifies whether or not the stored
procedure completed successfully. This value cannot be seen
by the user; it is only used by the Informatica server to
determine whether to continue running the session or stop.
Question
What are the types of data that pass between the
Informatica server and a stored procedure?
Answer
3 types of data:

Input/Output parameters

Return values

Status code.
Question
Why do we use the stored procedure transformation?
Answer
For populating and maintaining databases.
Answer
For best performance.
Answer
Instead of writing complex logic in an Expression
transformation, we use a Stored Procedure transformation.

The main advantage of the Stored Procedure transformation
is code reusability.
Question
What are the types of groups in Router transformation?
Answer
Input group and output group:

The Designer copies property information from the input
ports of the input group to create a set of output ports
for each output group.

User-defined groups

Default group

You cannot modify or delete default groups.
Question
What is the Router transformation?
Answer
A Router transformation is similar to a Filter
transformation because both transformations allow you to
use a condition to test data. However, a Filter
transformation tests data for one condition and drops the
rows of data that do not meet the condition. A Router
transformation tests data for one or more conditions and
gives you the option to route rows of data that do not meet
any of the conditions to a default output group. If you need
to test the same input data based on multiple conditions,
use a Router Transformation in a mapping instead of creating
multiple Filter transformations to perform the same task.
 
Question
What is the Rank Index in the Rank transformation?
Answer
The Designer automatically creates a RANKINDEX port for each
Rank transformation. The Informatica Server uses the Rank
Index port to store the ranking position for each record in
a group. For example, if you create a Rank transformation
that ranks the top 5 salespersons for each quarter, the rank
index numbers the salespeople from 1 to 5.
 
Answer
The Designer creates a RANKINDEX port for each Rank
transformation. The Integration Service uses the rank index
port to store the ranking position for each row in a group.
For example, a Rank transformation may be created on the
top five salespersons for each quarter (the matrix being
salesperson, the measure being sales, the criterion top or
bottom, and quarter being a time-based dimension).
Answer
A follow-up question: if we have two records with the same
data, i.e. duplicate records, how is the rank calculated in
that case? For example:

name, age

suresh, 23
ramesh, 25
suresh, 23
murali, 20
Answer
For the above question, you are going to get the suresh
record twice if you select the rank as three (3). To get
distinct records, put a DISTINCT clause in the source
qualifier query; then you will get the correct result.
Answer
To add a little bit: run DENSE_RANK() in any query editor
and this will clarify it much more.
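As a small illustration (the emp_ages table is assumed),
DENSE_RANK shows how tied rows share a rank position:

-- emp_ages is an assumed sample table
SELECT name, age,
       DENSE_RANK() OVER (ORDER BY age DESC) AS rnk
FROM   emp_ages;

Here the two suresh rows (age 23) receive the same rank
value.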
Question
What are the rank caches?
Answer
During the session, the Informatica server compares an
input row with the rows in the data cache. If the input row
out-ranks a stored row, the Informatica server replaces the
stored row with the input row. The Informatica server stores
group information in an index cache and row data in a data
cache.
 
Answer
The above answer is correct; expanding a little with an
example: if there are 3 equal salaries, Rank skips the
following sequence numbers, so the ranks come out as
1, 2, 3, 3, 3, 6.
To confirm this, go to a SQL editor and check it with the
RANK() function (DENSE_RANK(), by contrast, would produce
1, 2, 3, 3, 3, 4).
Question
How does the Informatica server sort string values in the
Rank transformation?
Answer
When the Informatica server runs in the ASCII data movement
mode, it sorts session data using a binary sort order. If
you configure the session to use a binary sort order, the
Informatica server calculates the binary value of each
string and returns the specified number of rows with the
highest binary values for the string.
Question
Which transformation should we use to normalize the COBOL
and relational sources?
Answer
Normalizer transformation. When you drag a COBOL source
into the Mapping Designer workspace, the Normalizer
transformation automatically appears, creating input and
output ports for every column in the source.
Answer
The Normalizer transformation is specially used for COBOL
sources because they contain the "OCCURS" keyword, which
packs multiple records into one; the Normalizer creates a
generated key (GK) and a generated column ID (GCID) to
normalize the data.
Question
What are the differences between static cache and dynamic
cache?
Answer
Static cache:
- You cannot insert into or update the cache.
- The Informatica server returns a value from the lookup
table or cache when the condition is true. When the
condition is not true, it returns the default value for
connected transformations and NULL for unconnected
transformations.

Dynamic cache:
- You can insert rows into the cache as you pass them to
the target.
- The Informatica server inserts rows into the cache when
the condition is false. This indicates that the row is not
in the cache or target table, and you can pass these rows
to the target table.
Question
What are the types of lookup caches?
Answer
Persistent cache: you can save the lookup cache files and
reuse them the next time the Informatica server processes a
lookup transformation configured to use the cache.

Recache from database: if the persistent cache is not
synchronized with the lookup table, you can configure the
lookup transformation to rebuild the lookup cache.

Static cache: you can configure a static, or read-only,
cache for any lookup table. By default the Informatica
server creates a static cache. It caches the lookup table
and lookup values in the cache for each row that comes into
the transformation. When the lookup condition is true, the
Informatica server does not update the cache while it
processes the lookup transformation.

Dynamic cache: if you want to cache the target table and
insert new rows into the cache and the target, you can
configure the lookup transformation to use a dynamic cache.
The Informatica server dynamically inserts data into the
target table.

Shared cache: you can share the lookup cache between
multiple transformations. You can share an unnamed cache
between transformations in the same mapping.
Answer
The types are:
Static cache: a read-only cache, used for a single lookup.
Dynamic cache: use this cache to reflect changed data
directly when the target table is the lookup table.
Shared cache: a cache shared among multiple transformations.
Persistent cache: a cache reused across multiple sessions.
Question
What is meant by lookup caches?
Answer
The Informatica server builds a cache in memory when it
processes the first row of data in a cached lookup
transformation. It allocates memory for the cache based on
the amount you configure in the transformation or session
properties. The Informatica server stores condition values
in the index cache and output values in the data cache.
Question
What are the differences between connected and unconnected
lookup?
Answer
Connected lookup:
- Receives input values directly from the pipeline.
- You can use a dynamic or static cache.
- The cache includes all lookup columns used in the mapping.
- Supports user-defined default values.

Unconnected lookup:
- Receives input values from the result of a :LKP
expression in another transformation.
- You can use a static cache only.
- The cache includes all lookup output ports in the lookup
condition and the lookup/return port.
- Does not support user-defined default values.
Answer
Connected lookup participates in the mapping ( dataflow )
just like any other transformation. Unconnected lookup is
used when a lookup function is used instead in an expression
transformation in the mapping in which case the lookup does
not appear in the main flow ( dataflow ) of the mapping.
Connected lookup can return more than one value ( output
port ) whereas Unconnected gives only one output port.
Unconnected lookups are reusable.
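As a hedged illustration (the lookup name and input port
are hypothetical), an unconnected lookup is called from an
expression port like this:

-- lkp_get_dept_name and deptno are hypothetical names
:LKP.lkp_get_dept_name(deptno)

The lookup's return port supplies the single value that the
calling expression receives.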
 
Answer
Connected and Unconnected Lookups
You can configure a connected Lookup transformation to
receive input directly from the mapping pipeline, or you
can configure an unconnected Lookup transformation to
receive input from the result of an expression in another
transformation.
The following lists the differences between connected and
unconnected lookups.

Connected Lookup:
- Receives input values directly from the pipeline.
- Use a dynamic or static cache.
- Cache includes all lookup columns used in the mapping
(that is, lookup source columns included in the lookup
condition and lookup source columns linked as output ports
to other transformations).
- Can return multiple columns from the same row or insert
into the dynamic lookup cache.
- If there is no match for the lookup condition, the
Integration Service returns the default value for all
output ports. If you configure dynamic caching, the
Integration Service inserts rows into the cache or leaves
it unchanged.
- If there is a match for the lookup condition, the
Integration Service returns the result of the lookup
condition for all lookup/output ports. If you configure
dynamic caching, the Integration Service either updates the
row in the cache or leaves the row unchanged.
- Passes multiple output values to another transformation:
link lookup/output ports to another transformation.
- Supports user-defined default values.

Unconnected Lookup:
- Receives input values from the result of a :LKP
expression in another transformation.
- Use a static cache.
- Cache includes all lookup/output ports in the lookup
condition and the lookup/return port.
- Designate one return port (R); returns one column from
each row.
- If there is no match for the lookup condition, the
Integration Service returns NULL.
- If there is a match for the lookup condition, the
Integration Service returns the result of the lookup
condition into the return port.
- Passes one output value to another transformation: the
lookup/output/return port passes the value to the
transformation calling the :LKP expression.
- Does not support user-defined default values.
Connected Lookup Transformation
The following steps describe how the Integration Service
processes a connected Lookup transformation:
1. A connected Lookup transformation receives input
values directly from another transformation in the
pipeline.
2. For each input row, the Integration Service
queries the lookup source or cache based on the lookup
ports and the condition in the transformation.
3. If the transformation is uncached or uses a static
cache, the Integration Service returns values from the
lookup query.
If the transformation uses a dynamic cache, the Integration
Service inserts the row into the cache when it does not
find the row in the cache. When the Integration Service
finds the row in the cache, it updates the row in the cache
or leaves it unchanged. It flags the row as insert, update,
or no change.
4. The Integration Service passes return values from
the query to the next transformation.
If the transformation uses a dynamic cache, you can pass
rows to a Filter or Router transformation to filter new
rows to the target.
Unconnected Lookup Transformation
An unconnected Lookup transformation receives input values
from the result of a :LKP expression in another
transformation. You can call the Lookup transformation more
than once in a mapping.
A common use for unconnected Lookup transformations is to
update slowly changing dimension tables. For more
information about slowly changing dimension tables, visit
the Informatica Knowledge Base at
http://my.informatica.com.
The following steps describe the way the Integration
Service processes an unconnected Lookup transformation:
1. An unconnected Lookup transformation receives
input values from the result of a :LKP expression in
another transformation, such as an Update Strategy
transformation.
2. The Integration Service queries the lookup source
or cache based on the lookup ports and condition in the
transformation.
3. The Integration Service returns one value into the
return port of the Lookup transformation.
4. The Lookup transformation passes the return value
into the :LKP expression.
 
Answer
Connected Lookup             Unconnected Lookup
----------------             ------------------
1. With pipeline             Without pipeline
2. Static and dynamic cache  Static cache only
3. Returns multiple ports    Returns a single port

Question
What are the types of lookup?
Answer
Connected and unconnected
 
Answer
Connected / unconnected
Cached / uncached
Answer
Differences Between Connected and Unconnected Lookup:
Connected:

o Receives input values directly from the pipeline.
o Uses a dynamic or static cache.
o Returns multiple values.
o Supports user-defined default values.

Unconnected:

o Receives input values from the result of a :LKP
expression in another transformation.
o Uses a static cache only.
o Returns only one value.
o Doesn't support user-defined default values.
NOTES
o A common use of an unconnected LKP is to update slowly
changing dimension tables.
o Lookup components are:
(a) Lookup table b) Ports c) Properties d) Condition.

Lookup tables: this can be a single table, or you can
join multiple tables in the same database using a lookup
query override. You can improve lookup initialization time
by adding an index to the lookup table.
Lookup ports: there are 3 port types in a connected LKP
transformation (I/P, O/P, LKP) and 4 in an unconnected LKP
(I/P, O/P, LKP and return ports).
o If you are certain that a mapping doesn't use a lookup
port, you can delete it from the transformation; this
reduces the amount of memory used.
Lookup properties: you can configure properties such as the
SQL override for the lookup, the lookup table name, and the
tracing level for the transformation.
Lookup condition: you can enter the conditions you want
the server to use to determine whether input data qualifies
against values in the lookup table or cache.
When you configure a LKP condition for the transformation,
you compare transformation input values with values in the
lookup table or cache, which are represented by LKP ports.
When you run the session, the server queries the LKP table
or cache for all incoming values based on the condition.
NOTE
- If you configure a LKP to use a static cache, you can
use the following operators: =, >, <, >=, <=, !=.
If you use a dynamic cache, only = can be used.
- When you don't configure the LKP for caching, the
server queries the LKP table for each input row; the result
will be the same regardless of whether a cache is used.
However, using a lookup cache can increase session
performance when the lookup table is large.
Performance tips:
- Add an index to the columns used in a lookup condition.
- Place conditions with an equality operator (=) first.
- Cache small lookup tables.
- Don't use an ORDER BY clause in the SQL override.
- Call unconnected lookups with the :LKP reference
qualifier.
 
Question
Why use the lookup transformation?
Answer
To perform the following tasks:

Get a related value. For example, your source table
includes an employee ID, but you want to include the
employee name in your target table to make your summary
data easier to read.

Perform a calculation. Many normalized tables include
values used in a calculation, such as gross sales per
invoice or sales tax, but not the calculated value (such as
net sales).

Update slowly changing dimension tables. You can use a
Lookup transformation to determine whether records already
exist in the target.
Answer
1. Get a related value.
2. Update slowly changing dimensions.
3. Perform calculations.
 
Question
What is the lookup transformation?
Answer
Use a Lookup transformation in your mapping to look up data
in a relational table, view, or synonym.

The Informatica server queries the lookup table based on
the lookup ports in the transformation. It compares the
lookup transformation port values to the lookup table
column values based on the lookup condition.
Answer
To add to the above answer:

We use a Lookup transformation to look up data from a
relational table, view, synonym or flat file against
another (or the same) table, view, synonym or flat file.

This is mainly (though not only) used for SCDs, so that the
value in the source can be compared with the existing
target.

Lookup transformations come in different flavours, named
and unnamed, and the mostly used types are connected lookup
(persistent lookup, dynamic lookup, cached lookup, uncached
lookup).
Question
What are the joiner caches?
Answer
When a Joiner transformation occurs in a session, the
Informatica server reads all the records from the master
source and builds index and data caches based on the master
rows. After building the caches, the Joiner transformation
reads records from the detail source and performs joins.
Answer
The joiner caches are the data cache and the index cache:
the data cache holds the master table rows, and the index
cache holds the join columns from the master table.
Question
What are the join types in joiner transformation?
Answer
Normal (Default)

Master outer

Detail outer

Full outer
 
Question
What are the settings that you use to configure the joiner
transformation?
Answer
Master and detail source

Type of join
Condition of the join
 
Answer
Join condition

Type of join

Master and detail source

Increase or decrease the cache sizes
Question
In which conditions can we not use a joiner transformation
(limitations of the joiner transformation)?
Answer
You cannot use a joiner transformation when:

- Both pipelines begin with the same original data source.

- Both input pipelines originate from the same Source
Qualifier transformation.

- Both input pipelines originate from the same Normalizer
transformation.

- Both input pipelines originate from the same Joiner
transformation.

- Either input pipeline contains an Update Strategy
transformation.

- Either input pipeline contains a connected or unconnected
Sequence Generator transformation.
Answer
You cannot use a Joiner transformation in the following
situations:
- Either input pipeline contains an Update Strategy
transformation.
- You connect a Sequence Generator transformation directly
before the Joiner transformation.

Note that, contrary to the first answer, we can use a
joiner transformation to join sources from the same data
source.
Question
    Question
What are the differences between the joiner transformation
and the source qualifier transformation?
Answer
You can join heterogeneous data sources with a joiner
transformation, which you cannot achieve with a source
qualifier transformation. You need matching keys to join
two relational sources in a source qualifier
transformation, whereas you don't need matching keys to
join two sources in a joiner. For a source qualifier, the
two relational sources should come from the same data
source; with a joiner you can join relational sources
coming from different data sources as well.
Answer
One important thing:

In a source qualifier, 1) the columns used in the condition
must have a primary-key and foreign-key relationship
(compulsory). For example, if the condition is
emp.deptno = dept.deptno1, then deptno and deptno1 should
have a primary and foreign key relation.

2) The datatypes of the columns must be the same.
Question
How can you improve session performance in the aggregator
transformation?
Answer
Use sorted input.

The aggregator stores data in the aggregate cache until it
completes the aggregate calculations. When you run a
session that uses an aggregator transformation, the
Informatica server creates index and data caches in memory
to process the transformation. If the Informatica server
requires more space, it stores overflow values in cache
files.
Answer
Use the sorted input option to decrease the use of the
aggregator cache.

Use a filter transformation before the aggregator
transformation to reduce unnecessary aggregation.

Limit the number of connected input/output or output ports
to reduce the amount of data the aggregator transformation
stores in the data cache.
Answer
What you said so far is OK, but what happens if you haven't
checked the sorted input option in the aggregator
transformation but you actually sorted the data beforehand
with a sorter transformation; what will happen in the
aggregator, performance-wise?
Answer
If you sort the input yet do not select the sorted input
option, the aggregator treats the data as unsorted data and
performs the task anyway, so its cache size increases; the
performance is degraded despite using a sorter.
Answer
A different view: if you haven't checked the sorted input
option in the aggregator transformation but you actually
sorted the data beforehand with a sorter transformation,
your session will fail. You must check the sorted input
option in the aggregator transformation if you are feeding
sorted input to the aggregator.
Answer
Agreed with the above: the session will fail; you have to
use the check option for sorted input.
Question
Can you use the mapping parameters or variables created in
one mapping in any other reusable transformation?
Answer
Yes, because a reusable transformation is not contained
within any mapplet or mapping.
Question
Can you use the mapping parameters or variables created in
one mapping in another mapping?
Answer
No. We can use mapping parameters or variables only in the
transformations of the same mapping or mapplet in which we
have created the mapping parameters or variables.
Question
What are mapping parameters and mapping variables?
Answer
A mapping parameter represents a constant value that you
can define before running a session. A mapping parameter
retains the same value throughout the entire session. When
you use a mapping parameter, you declare and use the
parameter in a mapping or mapplet, then define the value of
the parameter in a parameter file for the session. Unlike a
mapping parameter, a mapping variable represents a value
that can change throughout the session. The Informatica
server saves the value of a mapping variable to the
repository at the end of the session run and uses that
value the next time you run the session.
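For instance (the folder, session and parameter names here
are hypothetical), a parameter file entry for a session
might look like:

[MyFolder.s_m_load_sales]
$$LoadDate=2010-01-01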
 
Question
What are the unsupported repository objects for a mapplet?
Answer
COBOL source definitions

Joiner transformations

Normalizer transformations

Non-reusable Sequence Generator transformations

Pre- or post-session stored procedures

Target definitions

PowerMart 3.5-style lookup functions

XML source definitions

IBM MQ source definitions
Question
What are the methods for creating reusable transformations?
Answer
Two methods:

1. Design it in the Transformation Developer.

2. Promote a standard transformation from the Mapping
Designer. After you add a transformation to a mapping, you
can promote it to the status of a reusable transformation.
Once you promote a standard transformation to reusable
status, you can demote it to a standard transformation at
any time.

If you change the properties of a reusable transformation
in a mapping, you can revert to the original reusable
transformation properties by clicking the Revert button.
Answer
Two ways:
using the Transformation Developer, or
create a normal one and promote it to reusable.
Question
What are reusable transformations?
Answer
Reusable transformations can be used in multiple mappings.
When you need to incorporate such a transformation into a
mapping, you add an instance of it to the mapping. Later,
if you change the definition of the transformation, all
instances of it inherit the changes. Since an instance of a
reusable transformation is a pointer to that
transformation, you can change the transformation in the
Transformation Developer and its instances automatically
reflect these changes. This feature can save you a great
deal of work.
Answer
If you want to perform a similar task in different
mappings, and the logic is also similar across the
different requirements, then instead of building it in
every mapping we create the transformation once and select
the reusable option, so that we can reuse it in other
mappings as well. If you want to modify a reusable
transformation, you do it in the original transformation,
and the change will reflect in all the other instances of
that transformation.

Question
How many ways can you create ports?
Answer
Two ways:

1. Drag the port from another transformation.

2. Click the Add button on the Ports tab.
Question
What are connected and unconnected transformations?
Answer
An unconnected transformation is not connected to other
transformations in the mapping.

A connected transformation is connected to other
transformations in the mapping.
Answer
Connected: it is involved in the mapping data flow; it
receives multiple inputs and can provide multiple outputs.
Unconnected: it does not take part in the mapping data
flow; it receives multiple inputs and provides a single
output.
Question
What are active and passive transformations?
Answer
An active transforamtion can change the number of rows that
pass through it.
A passive transformation does not change the number of rows
that pass through it.
 
Answer
An active transformation can change the number of rows that
pass through it; a passive transformation cannot change the
number of rows that pass through it.
The active transformations are: aggregator, application
source qualifier, custom, filter, joiner, normalizer, rank,
router, sorter, source qualifier, transaction control,
union, update strategy, XML parser and XML source qualifier
transformations.
The passive transformations are: custom, expression,
external procedure, input, lookup, output, sequence
generator and stored procedure transformations.
Question
What are the Designer tools for creating transformations?
Answer
Mapping Designer

Transformation Developer

Mapplet Designer
Question
What is a transformation?
Answer
It is a repository object that generates, modifies or
passes data.
Answer
When transferring the data from source to target, we use
multiple steps; those steps in Informatica are called
transformations.
Answer
A transformation is a type of metadata object which is
responsible for transforming or processing the data.
Answer
A transformation modifies data as it is extracted from one
database and loaded into another, based on conditions.
Question
How can you create or import a flat file definition into
the Warehouse Designer?
Answer
You cannot create or import a flat file definition into the
Warehouse Designer directly. Instead you must analyze the
file in the Source Analyzer, then drag it into the
Warehouse Designer. When you drag the flat file source
definition into the Warehouse Designer workspace, the
Warehouse Designer creates a relational target definition,
not a file definition. If you want to load to a file,
configure the session to write to a flat file. When the
Informatica server runs the session, it creates and loads
the flat file.
Question
Which transformation do you need when using COBOL sources
as source definitions?
Answer
The Normalizer transformation, which is used to normalize
the data, since COBOL sources often consist of denormalized
data.
Answer
We can use the Normalizer transformation.
Question
To provide support for mainframe source data, which file
types are used as COBOL files?
Answer
VSAM files and flat files:
ESDS, KSDS, RRDS.
Question
Where should you place the flat file to import the flat
file definition into the Designer?
Answer
Place it in a local folder.
Answer
If the Informatica server and client are on one machine,
then we can place it in a local folder; otherwise not.

I strongly recommend placing the files to be imported into
the Designer in
<directory>:/informatica/...../Src/<.csv file>
Question
In how many ways can you update a relational source
definition, and what are they?
Answer
Two ways:

1. Edit the definition.

2. Reimport the definition.
Question
While importing a relational source definition from a
database, what metadata of the source do you import?
Answer
Source name

Database location

Column names

Datatypes

Key constraints
Question
What are parallel queries and query hints?
Answer
Parallel queries optimize query execution and index
operations for computers that have more than one
microprocessor (CPU). Because SQL Server can perform a
query or index operation in parallel by using several
operating system threads, the operation can be completed
quickly and efficiently. Query hints let you override the
optimizer's default choices for an individual query.
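As a small illustration (the sales table is assumed), a SQL
Server degree-of-parallelism hint can be attached to a
single query:

SELECT customer_id, SUM(amount) AS total
FROM sales                      -- sales is an assumed table
GROUP BY customer_id
OPTION (MAXDOP 4);              -- allow up to 4 CPUs for this query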
 
Question
Explain reference cursor?
Answer
Basically, a reference cursor is a datatype. A reference
cursor works as a cursor variable. The advantage of using a
reference cursor is that it can pass result sets to
subprograms (i.e. procedures, packages, functions, etc.).
Example of reference cursor usage:

declare
  type r_cursor is ref cursor;
  c_emp r_cursor;
  ename1 emp.ename%type;
begin
  open c_emp for select ename from emp;
  loop
    fetch c_emp into ename1;
    exit when c_emp%notfound;
    dbms_output.put_line(ename1);
  end loop;
  close c_emp;
end;
--------------

A REF CURSOR can have a return type, and it has 2 types:
strongly typed cursors and weakly typed cursors. A plain
cursor doesn't have a return type.

Ex:
TYPE ref_type_name IS REF CURSOR RETURN return_type;

return_type represents a record in the database:

DECLARE TYPE EmpCurType IS REF CURSOR RETURN emp%ROWTYPE;

Another difference is that a REF CURSOR can be assigned
dynamically, while a normal cursor, once defined, cannot be
changed.

A ref cursor can be associated with any number of SQL
statements, whereas a cursor can be associated with only
one SQL statement.

A ref cursor is dynamic; a cursor is static.

A ref cursor points to a location.

In a strongly typed ref cursor we give a return type; in a
weakly typed one there is no return type.
CURSOR

For cursors there are 2 types: explicit and implicit
cursors.

Explicit cursor:

Explicit cursors are SELECT statements that are DECLAREd
explicitly in the declaration section of the current block
or in a package specification. Use OPEN, FETCH, and CLOSE
in the execution or exception sections of your programs.

Implicit cursor:

Whenever a SQL statement appears directly in the execution
or exception section of a PL/SQL block, you are working
with implicit cursors. These statements include INSERT,
UPDATE, DELETE, and SELECT INTO statements. Unlike explicit
cursors, implicit cursors do not need to be declared,
OPENed, FETCHed, or CLOSEd.

REFERENCE CURSOR

A cursor variable is a data structure that points to a
cursor object, which in turn points to the cursor's result
set. You can use cursor variables to more easily retrieve
rows in a result set from client and server programs. You
can also use cursor variables to hide minor variations in
queries.

The syntax for a REF CURSOR type is:

TYPE ref_cursor_name IS REF CURSOR [RETURN record_type];

If you do not include a RETURN clause, then you are
declaring a weak REF CURSOR. Cursor variables declared from
weak REF CURSORs can be associated with any query at
runtime. A REF CURSOR declaration with a RETURN clause
defines a "strong" REF CURSOR. A cursor variable based on a
strong REF CURSOR can be associated with queries whose
result sets match the number and datatype of the record
structure after the RETURN at runtime.

To use cursor variables, you must first create a REF CURSOR
type, then declare a cursor variable based on that type.

The following example shows the use of both weak and strong
REF CURSORs:

DECLARE
  -- Create a cursor type based on the companies table.
  TYPE company_curtype IS REF CURSOR RETURN companies%ROWTYPE;
  -- Create the variable based on the REF CURSOR.
  company_cur company_curtype;
  -- And now the weak, general approach.
  TYPE any_curtype IS REF CURSOR;
  generic_curvar any_curtype;

The syntax to OPEN a cursor variable is:

OPEN cursor_name FOR select_statement;

Cursors (explicit cursors) are static cursors which can be
associated with only one SQL statement at a time, and this
statement is known when the block is compiled. A cursor
variable, on the other hand, can be associated with
different queries at runtime.

Static cursors are analogous to PL/SQL constants, because
they can only be associated with one runtime query, whereas
reference cursors are analogous to PL/SQL variables, which
can hold different values at runtime.

A reference cursor can have a return type.

Because it is a reference type, no storage is allocated for
it when it is declared. Before it can be used, it needs to
point to a valid area of memory, which can be created
either by allocating it in the client-side program or on
the server by the PL/SQL engine.

A ref cursor can hold multiple queries in a single
variable, whereas a cursor can be associated with only one
SQL query.

In short: a cursor is static, but a ref cursor is dynamic.
 
Question
What is aggregate awareness?
Answer
Aggregate awareness can be used in the following
situations:
1) You have summarized tables and detail tables, and you
want BO to pick the right table based on what kind of
object you have pulled into your query. It helps to
associate the right objects automatically and hence
enhances your BO report performance.
2) It can also be used if someone wants to set up complex
logic to pick up certain tables only when picking certain
columns from one table. This can be done by setting up
incompatible objects in the aggregate awareness panel.

In BusinessObjects it is the ability to, using the measure
objects, dynamically interpret the context of the dimension
objects and to "add together" the values of the measures
involved. They call it "semantic dynamism". For instance,
if I have a report listing 30 people, with their name,
gender and their age (expressed as a measure object), each
row of data will show each person's age. If I remove the
name object from the table, leaving only the gender and age
objects, BusinessObjects is "aware" that the context of the
calculation has changed and that the only defining object
is gender. With only two genders, the age measure
AGGREGATES to fit the new context, so you see the average
age for each gender instead of the individual age of each
person (because the name object was removed, the rows are
no longer defined by that object).

My definition of "aggregate awareness" for a BI/query tool
would be: the tool automatically picks the right (i.e.
highest-level / more aggregated) table based on the level
of the query requested.
Question
What is the difference between a mapplet and a reusable
transformation?
Answer
A mapplet is a reusable set of transformations; we can use
a mapplet any number of times, just as a reusable
transformation can be reused across mappings.
Answer
A reusable transformation is a single transformation which
we can use multiple times; a mapplet is a set of reusable
transformations which we can also use multiple times.
The only difference is that one is a single transformation
and a mapplet is a set of reusable transformations.
Answer
1. A mapplet is a set of reusable transformations that we
can use multiple times; a reusable transformation is a
single transformation that we can use multiple times.
2. In a mapplet the transformation logic is hidden.
3. If you create mapping variables or parameters in a
mapplet, they can't be used in another mapping or mapplet,
unlike those created in a reusable transformation, which
can be used in another mapplet or mapping.

4. We can't include a source definition in a reusable
transformation, but we can include a source in a mapplet.

5. We can't use COBOL source qualifier, joiner or
normalizer transformations in a mapplet.
Answer
A correction to the above: we can use a Joiner
transformation within a mapplet.
Answer
Mapplet: a reusable collection of transformations created
to solve a piece of logic.

Reusable trans: a single reusable transformation.
Answer
Both are similar, but a mapplet consists of a set of
reusable transformations.

Question
What is surrogate key?
Answer
A surrogate key is a primary key which is typically
generated from a sequence.
Answer
It actually comes into the picture when a critical column
(for example, location) occurs in a table. In order to
avoid relying on the critical column we use a surrogate
key; the surrogate key is also a primary key.
Answer
A surrogate key is a system-generated sequence number, an
artificial key, which can be used in maintaining the
history of the data.
Answer
A surrogate key is an artificial key created in data
warehousing for generating a unique number for each record.
A surrogate key works on behalf of the primary key, and its
main purpose is to maintain the historical record.
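As a small sketch (the table and sequence names are
assumed), a surrogate key is typically populated from a
sequence while the natural key is kept alongside it:

CREATE SEQUENCE customer_dim_seq START WITH 1 INCREMENT BY 1;

INSERT INTO customer_dim (cust_key, cust_id, cust_name)
VALUES (customer_dim_seq.NEXTVAL, 101, 'Smith');
-- cust_key is the surrogate key; cust_id is the natural key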
 
Question
How many repositories can we create in Informatica?
Answer
We can create 200 repositories in Informatica.
Answer
Unlimited repositories.
Answer
A single global repository and any number of local
repositories.
Question
What is the Hierarchy of DWH?
Answer
A hierarchy defines the levels at which data is stored and
summarized within a particular dimension.
Question
Explain grouped cross tab?
Answer
A grouped cross tab is the same as a cross tab report, only
grouped. For example, with the EMP and DEPT tables: select
EMPNO for the rows, ENAME for the columns, DEPTNO as the
group item, and SAL for the cells. The output then comes
out like:

10
-------------------
     raju | ramu | krishna | ...
7098 | 500
7034 |
7023 | 600
--------------
20
...
Question
What is the Difference between DSS & OLTP?
Answer
A DWH is a DSS (Decision Support System), whereas OLTP is
not a DSS.
Answer
A DSS (Decision Support System) consists of past data and
provides analysis of it that is useful in decision making,
whereas OLTP (Online Transaction Processing) consists of
current data.
Question
What is source qualifier?
Answer
Using the SQ transformation we can drive data out of a
source object and drive it into the next transformation; it
is the mediator between the source object and the next
transformation, used to get the data from the source
object.
Answer
The source qualifier is also a table; it acts as an
intermediary between the source and target metadata, and it
also generates the SQL which creates the mapping between
the source and target metadata.
Answer
The transformation which converts the source (relational or
flat file) datatypes to Informatica datatypes; it works as
an intermediary between the source and the Informatica
server.

Tasks performed by the source qualifier transformation:

1. Join data originating from the same source database.
2. Filter records when the Informatica server reads source
data.
3. Specify an outer join rather than the default inner
join.
4. Specify sorted ports.
5. Select only distinct values from the source.
6. Create a custom query to issue a special SELECT
statement for the Informatica server to read source data.
Question
What are mapping parameters and variables in Informatica?
Answer
A mapping parameter is a constant value; it is defined
separately using the mapping-parameter creation function
provided by Informatica, where you give its name, type,
datatype, precision and scale.

A mapping variable can vary each time the session runs; it
is also created with a name, type, precision, scale and
aggregation. It is mainly useful for incremental load
processing.
Answer
Mapping parameters: a mapping parameter is a constant value
of some datatype used across the entire mapping; its value
will not change, as said earlier.

Mapping variable: a mapping variable is the same kind of
thing as a mapping parameter, except that its value can be
changed during the mapping run.

To define the value you can give an initial value, or you
can create a mapping parameter file. In the file you should
specify the folder name and session name, e.g.:

[foldername.sessionname]
$$parameter1=100

Otherwise the Informatica server cannot recognize the
session.
Answer
Mapping parameters and variables make mappings reusable and
save time. Example: you have created a mapping to load the
data of only deptno 10 into the target. Next time, if you
want to load the data of deptno 20, you would have to
recreate the mapping from scratch. If instead you create a
parameter and define its value in the parameter file, you
can simply change the value of the parameter and use the
mapping again.

A mapping parameter represents a constant value that
doesn't change during the session run, whereas a mapping
variable represents a value which changes during the
session run; it saves the final value to the repository,
which is used the next time you run the session.
Answer
A mapping parameter is a constant value that cannot be
changed during the mapping run; it is saved in a file with
the extension .prm. A mapping variable is a value which can
be changed during the mapping run.
Answer
You can change the value of a variable in the mapping; the
new value will be stored in the repository. In case you
fail, for some reason, to define the parameter in the
parameter file for the next session run, Informatica will
pick up the last value from the repository.
Question
What are pre-session, post-session success and post-session
failure commands?
Answer
These commands are used to act on the status of the session
run. For example, you can use these commands to update the
audit entries in your audit check tables.
Question
How to identify bottlenecks in sources, targets, mappings,
workflows and the system, and how to increase the
performance?
Answer
Run your session in verbose mode and check the busy
percentages in the log. If it is higher for the reader
thread, then your source query is the bottleneck: tune your
SQ.

If it is the writer thread, then check your target; maybe
you need to drop and recreate the indexes on the target
table.
If it is the transformation thread, then check your mapping
logic, concentrating on the aggregator part.

Fine-tune your logic: don't drag fields which are not used
through all the transformations, and try to use as few
transformations as possible.
Cache your lookups, and wherever possible use the
persistent lookup cache concept.
Answer
Identification of bottlenecks:
Target: configure the session to write to a flat file
target; if the session time improves significantly, the
target is the bottleneck.
Source: add a filter transformation after the SQ
transformation with the condition set to false, so that no
data is processed past the filter; if the time taken to run
the new session remains the same as the original session,
there is a source bottleneck.
Mapping: add a filter transformation set to false before
each target, similar to the source test.
Session: use the Collect Performance Data option to
identify session bottlenecks; read-from-disk and
write-to-disk counters other than zero indicate a
bottleneck.
Answer
Source:
Create a filter transformation after all the source
qualifiers and make the filter condition FALSE so that the
data will not go beyond this transformation. Then run the
session and find out the time taken on the source side. If
you find a lack of performance there, consider the
necessary index creation in a pre-session task.
Note: if the source is a file, then there is little
possibility of a performance problem on the source side.

Target:
Delete the target table from the mapping and create the
same structure as a flat file. Run the session and find out
the time taken to write the file. If you see a performance
problem, then drop the indexes on the table before loading
the data and recreate the same indexes in a post-session
task.
Note: if the target is a file, then there is little
possibility of a performance problem on the target side.

Mapping:
The steps below need to be considered:
#1. Delete all the transformations and make it a single
pass-through to measure the baseline.
#2. Avoid using more transformations than needed.
#3. If you would need several filter transformations, use a
router transformation instead.
#4. Calculate the index and data caches properly for
aggregator, joiner, rank and sorter transformations if the
PowerCenter version is older; in newer versions, PowerCenter
itself will take care of this.
#5. Always pass sorted input to the aggregator.
#6. Use incremental aggregation.
#7. Don't do complex calculations in the aggregator
transformation.

Session:
Increase the DTM buffer size.

System:
#1. Increase the RAM capacity.
#2. Avoid paging.
Question
When do we use dynamic cache and static cache in connected
and unconnected lookup transformations?
Answer
We can use a dynamic or a static cache in a connected
lookup, and we can use only a static cache in an
unconnected lookup.
Answer
A dynamic cache is used for updating a master table and for
SCD Type 1 loads; a static cache is commonly used for flat
file lookups.
Question
What are the different threads in DTM process?
Answer
The DTM uses reader, transformation, and writer threads to
extract, transform, and load data.
Answer
The DTM creates a master thread, which spawns the
following:
reader thread
writer thread
mapping thread
transformation thread
Question
How to enter the same record twice into the target table;
explain?
Answer
I guess this can be done only if the target table has an id column populated from a serial number, or if the table does not have a primary key.
 
0
Guest
    Answer
I guess you can connect the output to two instances of the same target.
 
0
Guest
    Answer
with the help of warehouse keys
 
0
Peer
    Answer
Use a Normalizer transformation.
 
0
Informatica_learner
    Answer
I think this works only if the target table does not have a primary key.
 
0
Saradhi
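For a relational source, a hedged SQL-override sketch of one way to send every row to the target twice (the EMP table name is hypothetical):

-- each source row is emitted twice by the source qualifier
SELECT emp.* FROM emp
UNION ALL
SELECT emp.* FROM emp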
    Question
What are the different types of schemas?
(Asked @ ITC-Infotech)
Answer
There are two types of schemas: star schema and snowflake schema. In a snowflake schema normalization is promoted, whereas in a star schema denormalization is promoted. With a snowflake schema database size is saved, but the number of joins increases and performance is poor compared with a star schema.
regards,
ande
 
0
Ande
    Answer
Three types of schemas are available: star schema, starflake schema and snowflake schema.

Star schema: it is highly denormalized, so we can retrieve the data very fast.

Starflake schema: only one dimension contains one level of hierarchy key.

Snowflake schema: it is highly normalized, so retrieval of the data is slow.
 
4
Vignesh Muralidharan
    Answer
There are two types of schemas:
1. star schema, 2. snowflake schema.
Star schema: a fact table is related to the dimension tables; the fact table contains the reference keys of the dimension tables, and the dimension tables are not related to any other parent tables. It looks like a star, so it is called a star schema. It is used to avoid extra joins.
Snowflake schema: a dimension table is related to its hierarchy tables, like a time dimension split into year, month and week dimension tables. It looks like a snowflake.
 
0
Chowdary
    Answer
Star schema and snowflake schema.
 
0
Priya
    Answer
A schema is a structure of information.

1. STAR SCHEMA
-------------
A centralized fact table connects to one or more denormalized dimension tables.

2. SNOWFLAKE SCHEMA
-------------------
A centralized fact table connects to one or more normalized dimension tables.

3. STARFLAKE SCHEMA
-------------------
One or more centralized fact tables connect to a single denormalized dimension table.
 
0
Giri
    Question
If a session fails after loading 10,000 records into the target, how can we load from the 10,001st record when we run the session the next time?
Answer
Use session recovery to load the target. There are three recovery options:

1) enable the recovery option in the session

2) set a commit interval in the session

3) truncate the target and load once again
 
0
Guest
    Answer
Hey man, first of all you need to mention whether you applied any commit or not. Without applying any commit, how can you recover the session? If you want to improve performance, you must use commit points and commit intervals. If you really did apply a commit and the session fails at 10,000 records, then you can recover the remaining records by using the OPB_SRVR_RECOVERY table.

Friends, am I correct? Think it over.


 
0
Rama Krishna
    Answer
In session properties there is a "suspend on error" option. The run then restarts from the failed task, not from the beginning.
 
0
Gayathri
    Question
What is fact table granularity?
Answer
After you gather all the relevant information about the
subject area, the next step in the design process is to
determine the granularity of the fact table. To do this you
must decide what an individual low-level record in the fact
table should contain. The components that make up the
granularity of the fact table correspond directly with the
dimensions of the data model. Thus, when you define the
granularity of the fact table, you identify the dimensions
of the data model.
 
0
Guest
    Answer
The level of details to be stored in fact table is termed
as granularity.
Eg: for a Retail store, the granularity for sales fact is
as that of Point Of Sales i.e., each transaction occurs,
the data is stored in the fact table.

    Question
What are reusable transformations, and in how many ways can we create them?
Answer
A reusable transformation is an object containing reusable business logic built from a single transformation.

A reusable transformation can be developed in two ways:

1. By using the Transformation Developer

2. By creating a normal transformation and making it reusable by selecting the check box in its properties
 
4
Seshagiri K
    Question
What is a conformed dimension and fact?
Answer
A dimension which links with more than one fact table is called a conformed dimension.
 
2
Prasad.nallapati
    Answer
If a dimension exists in more than one fact table, it is known as a conformed dimension. For every fact table a conformed dimension should exist, e.g. the Time dimension.
 
3
Praveen Kumar Pendekanti
    Answer
A dimension which is fixed and reusable is called a conformed dimension.
Ex: Time
 
0
Rekha
    Question
What are the two modes of data movement in the Informatica server?
Answer
The two modes of data movement are ASCII mode and Unicode mode.
 
0
Guest
    Question
What is status code?
Answer
Informatica issues a status code when it runs a Stored Procedure transformation; it is used for error handling of the stored procedure.
 
0
Sanjay
    Question
What is the difference between OLTP and ODS?
Answer
OLTP is an online transaction processing system and ODS is an operational data store.
In OLTP we keep the current data; it depends on the day-to-day transactions and stores the day-to-day data.
In an ODS we can also store data for a month; it is not restricted to a specific day or transaction.
 
0
Mohit Sondhi
    Answer
OLTP refers to a transactional database.

An ODS refers to a database which contains the basic data, like the details of available products, branches of the company etc. It is small in size and can be modified daily. Most of the time the ODS is used for loading the dimensions.
 
5
Sri47
    Answer
OLTP: Online Transaction Processing.
The granularity of the data is the transaction, i.e. it stores each transaction.

ODS: Operational Data Store.
Whereas the data warehouse contains summarized data (weekly/monthly/yearly), an ODS may be used to maintain data with lower granularity (hourly/daily).

e.g. data for previous years may be summarized, but should not be for the current year.
 
0
Himanshu
    Answer
I've got a web application used for creating product orders, backed by an RDBMS.

The RDBMS contains the necessary tables to store the facts (orders and order lines) and reference data (product, vendor, price, customer, ...). The RDBMS is ever growing; that is, there's no cleanup of "old" records (e.g. orders from x years back).

What kind of database is this and why? An OLTP, an OLAP, an ODS, ..., something else?

Please advise.

EDH
 
0
Edh
    Question
What are slowly changing dimensions?
(Asked @ Verinon-Technology-Solutions)
Answer
There are three types of SCD:
Type 1: we overwrite the original record with the new record.
Type 2: we create a new record.
Type 3: we create a new attribute.
regards,
ande
 
0
Ande
    Answer
Slowly Changing Dimensions (SCD):
Over a period of time, the values/data associated with dimensions may change. To track the changes we record them as per the requirement.
There are three types of SCD:
SCD 1: no history is maintained; as and when data comes, it is entered.
SCD 2: history is maintained.
SCD 3: partial history is maintained; we keep history for some columns but not for all.
For example, say I have 3 records in a dimension and I have made 1 insert and 1 update. Then, if the dimension is to be maintained:
in SCD 1 the total number of records is 4 (1 insert and 1 overwrite);
in SCD 2 the total number of records is 5 (1 insert and 1 new version);
in SCD 3 the total number of records is 4 (1 insert and 1 update into a history column).
NOTE:
History here means the slight changes between the stored data and the incoming data; it doesn't mean years of data.
 
0
Sri Ram
    Answer
SCDs are classified into three types:
SCD Type 1 (maintain current data)
SCD Type 2 (maintain current data + full history of changes)
SCD Type 3 (maintain current data + one-time history)
 
0
Sandeep.t Nizamabad
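As a hedged illustration, the SCD Type 1 overwrite described above can be sketched in SQL like this (the CUSTOMER_DIM and CUSTOMER_STG tables are hypothetical):

-- SCD Type 1: overwrite the changed attribute; no history is kept
MERGE INTO customer_dim d
USING customer_stg s
ON (d.customer_id = s.customer_id)
WHEN MATCHED THEN UPDATE SET d.city = s.city
WHEN NOT MATCHED THEN INSERT (customer_id, city) VALUES (s.customer_id, s.city);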
    Question
At the max, how many transformations and mapplets can we use in a mapping?
Answer
There is no restriction on the numbers; we can use as many as we need.
 
0
J.david Sukeerthi Kumar
    Question
After dragging the ports of 3 sources (SQL Server, Oracle, Informix) to a single Source Qualifier, can we map these ports directly to the target, and how?
Answer
No. Only by joining the three sources first can we load into the target.
 
0
Veeru
    Answer
You cannot map them directly. To join 3 heterogeneous sources you need to use 2 Joiner transformations, and the resulting columns can then be connected to the target.
 
0
Rao
    Question
How can we eliminate duplicate rows from a flat file? Explain.
Answer
By using an Aggregator transformation with the group-by option selected, or a Sorter transformation with the distinct option, or a dynamic lookup.
 
2
Veera Reddy
    Answer
Use a Sorter transformation with the distinct option for flat files.
 
0
Nagaraj.g
    Answer
Veera Reddy,

a dynamic lookup will not work for a flat file; only a static lookup works for a flat file.
 
0
Seshagiri
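For relational sources (not flat files), the same de-duplication can also be sketched with a DISTINCT in the Source Qualifier SQL override; the EMP table and its columns are hypothetical:

SELECT DISTINCT emp.empno, emp.ename, emp.deptno FROM emp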
    Question
If we have a lookup table in a workflow, how do you troubleshoot to increase performance?
Answer
You can calculate the size of the lookup cache file needed from the number of rows and the column widths, and then increase the cache file size for good performance.
 
0
Deepak Aggarwal
    Answer
There are 2 ways:
1. Create a persistent cache
2. Add the lookup table in the SQ and make it an outer join
 
0
Priya
    Question
can we generate reports in informatica ? How?
Answer
Yes we can, by using Informatica's metadata-driven reporting tool.
 
0
J.david Sukeerthi Kumar
    Answer
It depends on what sort of reporting you want. As someone already suggested you can get it using Metadata Manager, but the better way is to use Data Analyzer. If you want auto-generated documentation, use the INFA stencil.

Hope that answers your question.

Cheers!
 
0
Sub
    Question
What are limitations of joiner transformation?
Answer
You cannot use a Joiner transformation when:
1. both pipelines begin with the same original data source;
2. both input pipelines originate from the same Source Qualifier transformation;
3. both input pipelines originate from the same Normalizer transformation;
4. either input pipeline contains an Update Strategy transformation;
5. either input pipeline contains a connected or unconnected Sequence Generator transformation.
 
0
Seshagiri K
    Answer
The Joiner joins data from heterogeneous sources.
1) The structure of both tables should be the same (very important); say this first anywhere.
 
0
Maruthi
    Question
How can we join the tables if they don't have primary and
foreign key relationship and no matching port?
Answer
Using joiner transformation.
 
0
Veera Reddy
    Answer
Option 1:

Drag both sources to an Expression transformation and hard-code a new field, say NEW_VALUE = 1, in both expressions. Now join the two pipelines using a Joiner with the join condition on the new field you have created (NEW_VALUE).

Option 2:

How about using a full outer join as the join type in the Joiner transformation?
 
0
Kalyan
    Answer
Use a Union transformation if there are no matching ports.
 
0
Developer
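A hedged SQL sketch of the constant-key trick from option 1 above (TABLE_A and TABLE_B are hypothetical); joining on a constant that matches on both sides is equivalent to a cross join:

SELECT a.*, b.*
FROM table_a a
JOIN table_b b ON 1 = 1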
    Question
Name 4 output files that the Informatica server creates while running a session?
Answer
Session log, workflow log, errors log and the bad (reject) file.
 
0
Ram Kumar
    Question
What is the functionality of update strategy?
Answer
The update strategy defines how source rows are flagged for insert, update, delete, or reject at the targets.
    Question
What are the update strategy constants?
Answer
DD_INSERT = 0, DD_UPDATE = 1, DD_DELETE = 2, DD_REJECT = 3
    Question
If DD_UPDATE is defined in the update strategy and "Treat source rows as" is set to INSERT in the session, what happens?
Answer
Hint: if anything other than Data Driven is set in the session, the update strategy in the mapping is ignored.
    Question
What are the different tasks that can be created in Workflow Manager?
Answer
There are 8 types of tasks in total:
Assignment,
Control,
Command,
Decision,
Email,
Event-Wait,
Event-Raise,
Timer.
Generally in Workflow Manager we create Command, Session and Email tasks.
Regards,
ande
 
0
Ande
    Question
What are the new features of informatica 7.1?
Answer
Lookup on flat files is possible.
The Union transformation is present.
Version control.
LDAP authentication.
Support for 64-bit architecture.
 
0
Cuckoo Sreedhar
    Answer
1) We can use a flat file as a target
2) We can use a flat file as a lookup
3) Mainly, the Union transformation was introduced in 7.1
4) Data profiling and versioning
5) Propagate attributes
6) We can create up to 64 partitions
 
0
Sarath
    Question
Explain the flow of data in Informatica?
Answer
I) Create a repository.
II) Configure the Informatica server in Workflow Manager.
III)
1) Create a folder in Repository Manager, then exit.
2) Go to the Designer, connect to the repository and open the folder. On the upper toolbar select Tools ---> Source Analyzer.
3) From the upper toolbar select Sources ---> Import, give an ODBC connection, import the source definitions from whichever database you want, select the tables, then click OK.
4) The same goes for the target tables: on the upper toolbar select Tools ---> Warehouse Designer, then Targets ---> Import, and import the table metadata through an ODBC connection, then click OK.
5) Then, from the upper toolbar, select Tools ---> Mapping Designer.
6) Select Mappings ---> Create, give the mapping a name, then drag and drop the source table from the left-side navigator; the source and its Source Qualifier appear automatically. Drag and drop the target table the same way, link the SQ ports to the target (TGT) table, then save the repository. It tells you whether the mapping is valid or not; if it is valid go to the next step, otherwise check again.
IV) Go to Workflow Manager, connect to the repository and select the folder. Create a session, give the session a name and link it to the mapping (it automatically asks which mapping you want). Then, in the Workflow Designer, create a workflow, give it a name and click OK. Drag and drop the session, then link it using Tasks ---> Link Task on the upper toolbar. Double-click the session, select the Mappings tab, give the source path and target path, then save and click OK. Save the repository and start the workflow from the Workflows toolbar; the Workflow Monitor comes up automatically.
0
Nmlabsonline.net
    Question
Explain one complicated mapping?
(Asked @ Fidelity, Wipro)
Answer
SCD Type 2 is one of the more complicated mappings in Informatica.
 
0
Ram Kumar
    Answer
The Normalizer transformation, which normalizes the data.
Ex:
year q1 q2 q3
2006 10 20 30
2007 20 30 40
After the Normalizer transformation the data will be like this:
year quarter sales
2006 1       10
2006 2       20
2006 3       30
and likewise for 2007.
 
0
Rao
    Answer
As Rao says above, this needs only one Normalizer transformation: configure one field with occurs = 3 and it works. It is easy.
 
0
Informatica_learner
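For comparison, a hedged SQL sketch of the same normalization (the SALES_WIDE table name is hypothetical):

-- turns one row per year into one row per (year, quarter)
SELECT year, 1 AS quarter, q1 AS sales FROM sales_wide
UNION ALL
SELECT year, 2, q2 FROM sales_wide
UNION ALL
SELECT year, 3, q3 FROM sales_wide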
    Question
What real-time problems generally come up while building or running a mapping or any transformation?
Answer
An invalid mapping; database driver errors.
 
0
Rao
    Answer
Populating null values into NOT NULL columns,
expression errors in the expression editor,
pre- and post-SQL errors,
overflow errors,
unique key constraint violations,
and many such.

reach me on [email protected]
 
0
Bsgsr
    Question
What is the exact use of the 'Online' and 'Offline' server connect options in the Workflow Monitor?
Answer
When the repository is up and PMSERVER is also up, the Workflow Monitor always connects online.
When PMSERVER is down and the repository is still up, we are prompted for an offline connection, with which we can just monitor the workflow.
 
0
Dheebalakshmi
    Question
What logic will you implement to load the data into one fact table from 'n' number of dimensions?
Answer
By using the target load order plan, and loading based on the primary/foreign keys linking the n dimensions.
If this is wrong, please send the correct answer to my mail.
 
0
Seshagiri K
    Question
what is the difference between Informatica 7.1 and
Abinitio?
(Asked @ TCS)
Answer
There is a lot of difference between Informatica and Ab Initio.

Ab Initio uses 3 kinds of parallelism, but Informatica uses 1.

Ab Initio has no scheduling option; we schedule manually or with a PL/SQL script, but Informatica contains 4 scheduling options.

Ab Initio comes with the Co>Operating System, but Informatica does not.

Ramp-up time is much quicker in Ab Initio compared with Informatica.

Ab Initio is more user-friendly than Informatica.
 
0
Nagaraju Bhatraju
    Question
What is Micro Strategy? Why is it used for?
(Asked @ Infosys)
Answer
It is BI tool used for reporting purposes.
 
2
Sarun5
    Answer
MicroStrategy is a business intelligence (BI), enterprise
reporting, and OLAP (on-line analytical processing)
software vendor.
 
0
L.b.
    Question
what is the difference between stop and abort?
(Asked @ TCS)
Answer
Stop is a command used to stop a session or a batch; abort is similar to stop, except that it has a 60-second timeout.
 
1
Kalpana
    Answer
The stop command immediately kills the reading process and doesn't have any timeout period.

The abort command gives the Informatica server a timeout period of 60 seconds to finish the DTM process, else it kills the DTM process.

reach me on [email protected]
 
1
Bsgsr
    Answer
Do not use abort unless absolutely necessary. It is messy.
Using stop is a cleaner way, and because it is cleaner, it
often takes more time.
Here's the difference:
ABORT is equivalent to:
1. Kill -9 on Unix (NOT kill -7) but YES, Kill -9
2. SIGTERM ABEND (Force ABEND) on Mainframe
3. Windows FORCE QUIT on application.
What does this do?
Each session uses SHARED/LOCKED (semaphores) memory blocks.
The ABORT function kills JUST THE CODE threads, leaving the
memory LOCKED and SHARED and allocated. The good news: It
appears as if AIX Operating system cleans up these lost
memory blocks. The bad news? Most other operating systems
DO NOT CLEAR THE MEMORY, leaving the memory "taken" from
the system. The only way to clear this memory is to warm-
boot/cold-boot (restart) the informatica SERVER machine,
yes, the entire box must be re-started to get the memory
back.
If you find your box running slower and slower over time,
or not having enough memory to allocate new sessions, then
I suggest that ABORT not be used.
So then the question is: When I ask for a STOP, it takes
forever. How do I get the session to stop fast?
well, first things first. STOP is a REQUEST to stop. It
fires a request (equivalent to a control-c in SQL*PLUS) to
the source database, waits for the source database to clean
up. The bigger the data in the source query, the more time
it takes to "roll-back" the source query, to maintain
transaction consistency in the source database. (ie: join
of huge tables, big group by, big order by).
It then cleans up the buffers in memory by releasing the
data (without writing to the target) but it WILL run the
data all the way through to the target buffers, never
sending it to the target DB. The bigger the session memory
allocations, the longer it takes to clean up
.
Then it fires a request to stop against the target DB, and
waits for the target to roll-back. The higher the commit
point, the more data the target DB has to "roll-back".
FINALLY, it shuts the session down.
WHAT IF I NEED THE SESSION STOPPED NOW?
Pick up the phone and call the source system DBA, have them
KILL the source query IN THE DATABASE. This will send an
EOF (end of file) downstream to Informatica, and Infa will
take less time to stop the session.
If you use abort, be aware, you are choosing to "LOSE"
memory on the server in which Informatica is running
(except AIX).
If you use ABORT and you then re-start the session, chances
are, not only have you lost memory - but now you have TWO
competing queries on the source system after the same data,
and you've locked out any hope of performance in the source
database. You're competing for resources with a defunct
query that's STILL rolling back.
 
0
Vijay
    Question
Two relational tables are connected to a SQ transformation; what possible errors will be thrown?
Answer
1. Two or more relational tables must be linked to one Source Qualifier transformation through primary key-foreign key relationships.
2. The data types of the ports must match.

Violation of these two conditions may lead to errors while joining the two tables.
 
0
Krishna
    Answer
The first thing to note is that both sources must be homogeneous.
Also, the order of the ports in the SQ and the order of the columns in the SELECT query (if you are using a SQL override) must be the same.
 
0
Bidhar
    Question
what are cost based and rule based approaches and what is
the difference?
Answer
Cost-based and rule-based approaches are optimizer techniques used to improve the performance of queries.
 
0
Kalpana
    Question
Explain about the concept of mapping parameters and
variables ?
(Asked @ HCL, TCS)
Answer
A mapping parameter is a constant value, whereas a mapping variable can change during the mapping run. Variables are mainly useful for incremental load processing.
 
0
Manju
    Answer
A mapping parameter represents a constant value that we cannot change during the session run.

A mapping variable represents a value that we can change during the session run.
 
0
Neena
    Answer
Mapping parameters:
a mapping parameter represents a constant value and does not change during the session; it helps achieve mapping reusability.

A mapping variable represents a value that changes during execution from an initial value to a final value; mapping variables are used in incremental load processing.
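A hedged sketch of that incremental-load use, with a mapping variable referenced in a Source Qualifier SQL override (the ORDERS table and $$LAST_RUN_DATE variable are hypothetical names):

-- pick up only the rows added since the last successful run; the variable
-- is advanced during the session (e.g. with SETMAXVARIABLE) and persisted
SELECT * FROM orders
WHERE order_date > TO_DATE('$$LAST_RUN_DATE', 'YYYY-MM-DD HH24:MI:SS')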

    Question
Can you generate reports in Informatica?
Answer
yes,using metadata reporter
 
0
Raghavendra Reddy
    Question
What are the different types of Type 2 dimension mapping?
(Asked @ CTS)
Answer
A Type 2 SCD maintains historical information plus current information, with 3 options:
1. effective date
2. version number
3. flag value
 
3
Sumankumar
    Question
What are the types of mapping in the Getting Started Wizard?
Answer
Slowly growing Target, Simple pass through
 
0
Ram Kumar
    Question
What are the output files that the Informatica server creates while running a session?
Answer
Session log files,
workflow log files,
cache files (created during the run and deleted at the end of the session).
 
0
Guest
    Question
What are the data movement modes in Informatica?
Answer
ASCII and Unicode.
 
0
Sanjay
    Answer
ASCII and Unicode:
ASCII - a single byte holds each character;
Unicode - 2 bytes for each character.
 
0
Maruthi
    Question
What is difference between maplet and reusable
transformation?
Answer
A mapplet consists of a set of transformations and is reusable. A reusable transformation is a single transformation that can be reused.
 
0
Mahesh
    Answer
Mapplet: a set of transformations that can be made reusable. A reusable transformation is a single transformation which can be made reusable.
Parameters and variables defined in a mapplet cannot be used in other mappings or mapplets, but those defined in a reusable transformation can be used in other mappings/mapplets.
COBOL sources, the Normalizer and XML sources cannot be used in a mapplet, but they can be made reusable.
 
0
Ravi
    Question
How to recover the standalone session?
Answer
A standalone session is a session that is not nested in a batch. If a standalone session fails, you can run recovery using a menu command or pmcmd. These options are not available for batched sessions.

To recover sessions using the menu:
1. In the Server Manager, highlight the session you want to recover.
2. Select Server Requests > Stop from the menu.
3. With the failed session highlighted, select Server Requests > Start Session in Recovery Mode from the menu.

To recover sessions using pmcmd:
1. From the command line, stop the session.
2. From the command line, start recovery.
 
0
Nagaraju Bhatraju
    Answer
If you configure a session in a sequential batch to stop on failure, you can run recovery starting with the failed session. The Informatica Server completes the session and then runs the rest of the batch. Use the Perform Recovery session property.

To recover sessions in sequential batches configured to stop on failure:

1. In the Server Manager, open the session property sheet.

2. On the Log Files tab, select Perform Recovery, and click OK.

3. Run the session.

4. After the batch completes, open the session property sheet.

5. Clear Perform Recovery, and click OK.

If you do not clear Perform Recovery, the next time you run the session the Informatica Server attempts to recover the previous session.

If you do not configure a session in a sequential batch to stop on failure, and the remaining sessions in the batch complete, recover the failed session as a standalone session.

 
0
Nagaraju Bhatraju
    Question
If you make any modifications to a table in the back end, does it reflect in the Informatica warehouse or mapping?
Answer
No, there is no option in Informatica for automatic update of source/target/lookup/stored-procedure definitions from the database or a file.
You can, however, compare the definitions against the existing DB objects and work out whether the Informatica definitions need updating.
 
0
Deepak Aggarwal
    Question
How to recover sessions in concurrent batches?
Answer
When we use concurrent batches and a session fails, all the other sessions run normally and load their data; the failed session is treated as a standalone session, and we perform recovery on it.
There are three ways to recover a failed session:

1. Start the session again if the Informatica server did not perform a commit.

2. Otherwise, perform recovery on the session.

3. If you cannot recover the session, truncate the target table and run the session again.
 
0
Nagaraju Bhatraju
    Question
Explain about perform recovery?
Answer

You can do it by enabling Perform Recovery.

When the server runs the recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target table; the Informatica server then reads the entire source again and processes the data from the next row onward.
By default Perform Recovery is disabled, hence the server makes no entries in the OPB_SRVR_RECOVERY table.
 
0
Nagaraju Bhatraju
    Question
If a session fails after loading 10,000 records into the target, how can you load the records from 10,001?
(Asked @ TCS)
Answer
After the session loads 10,000 records and fails, we can capture the minimum and maximum counts of the records already in the target in parameters and reference them in the SQL override of the Source Qualifier. When the session runs again it takes those minimum and maximum values from the parameter file, and the override excludes everything between them; so with min = 1 and max = 10000 the override only takes values after 10,000, i.e. from 10,001:

SELECT * FROM employee WHERE value NOT BETWEEN $$min AND $$max

I think this can be a solution.
 
0
Mohit
    Answer
One option is session recovery,
or:
SELECT * FROM (SELECT t.*, ROWNUM rn FROM tablename t) WHERE rn > 10000
 
1
Rao
    Answer
Simple: in the session properties, configure the session to enable recovery. It then creates a database log (the recovery table) which contains the row IDs of all records committed to the target. When you run the recovery session it checks the log for the last row ID committed to the target and writes to the target from the next row ID onward. The condition is that the session is configured to run in normal mode.
 
4
Bsgsr
    Answer
Mr Rao's answer is correct. Another way: use an Expression transformation and pass on only the rows with a row number > 10000.
 
0
Anil
    Question
Why do we use Lookup transformations?
Answer
We use a Lookup transformation to look up data in a relational table, view, or synonym. Using a Lookup transformation we can perform the following jobs:
- get a related value
- perform calculations
- update slowly changing dimensions
 
0
Guest
    Question
What are Dimensions and various types of Dimensions?
Answer
Dimensions contain textual attributes; a dimension is wide, not deep (whereas a fact is deep, not wide).

About dimensions: the Slowly Changing Dimensions Wizard creates mappings to load slowly changing dimension tables.

Type 1 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and overwriting existing dimensions. Use this mapping when you do not want a history of previous dimension data.
Type 2 Dimension/Version Data mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a version number and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data and to track the progression of changes.
Type 2 Dimension/Flag Current mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a flag to mark current dimension data and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data, tracking the progression of changes while flagging only the current dimension.
Type 2 Dimension/Effective Date Range mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a date range to define current dimension data. Use this mapping when you want to keep a full history of dimension data, tracking changes with an exact effective date range.
Type 3 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and updating values in existing dimensions. Use this mapping when you want to keep the current and previous dimension values in your dimension table.

Apart from these, there are:

conformed dimensions,
junk dimensions,
degenerate dimensions.

Thanks...
 
0
Shashiall
    Answer
Dimensions describe the subject of the measurable data. Dimensions are mainly of 11 types:
1. Changing Dimension
2. Slowly Changing Dimension
3. Rapidly Changing Dimension
4. Conformed Dimension
5. Degenerate Dimension
6. Junk Dimension
7. Inferred Dimension
8. Role-Playing Dimension
9. Shrunken Dimension
10. Outrigger Dimension
11. Static Dimension
 
0
Vvm.sp
    Answer
Types of dimension:

Conformed Dimension
Dimensions are conformed when they are either exactly the same (including keys) or one is a perfect subset of the other. Most important, the row headers produced in the answer sets from two different conformed dimensions must be able to match perfectly.

Conformed dimensions are either identical or strict mathematical subsets of the most granular, detailed dimension. Dimension tables are not conformed if the attributes are labeled differently or contain different values. Conformed dimensions come in several different flavors. At the most basic level, conformed dimensions mean the exact same thing with every possible fact table to which they are joined. The date dimension table connected to the sales facts is identical to the date dimension connected to the inventory facts.[1]

Junk Dimension
A junk dimension is a convenient grouping of typically low-cardinality flags and indicators. By creating an abstract dimension, these flags and indicators are removed from the fact table while placing them into a useful dimensional framework.[2]

Degenerate Dimension
A dimension key, such as a transaction number, invoice number, ticket number, or bill-of-lading number, that has no attributes and hence does not join to an actual dimension table. Degenerate dimensions are very common when the grain of a fact table represents a single transaction item or line item, because the degenerate dimension represents the unique identifier of the parent. Degenerate dimensions often play an integral role in the fact table's primary key.[3]

Role-Playing Dimensions
Dimensions are often recycled for multiple applications within the same database. For instance, a "Date" dimension can be used for "Date of Sale", as well as "Date of Delivery", or "Date of Hire". This is often referred to as a "role-playing dimension".
 
0
Atul
    Question
What is Code Page Compatibility?
Answer
It is the compatibility between code pages, used to maintain the accuracy of the data when the source data might be in a different language.
 
0
Sanjay
    Question
What are Target Options on the Servers?
Answer
Target options for files: FTP, loader, MQ.
For relational targets: Oracle, Teradata, Sybase, Informix etc.
    Question
What is tracing level and what are the types of tracing level?
(Asked @ TCS)
Answer
Tracing level is the amount of detail in the session log.
There are four types:

Normal
Terse
Verbose Initialization
Verbose Data
 
0
Nathan Vaithi
    Answer
If you observe the mapping, every transformation has a Tracing Level property; the amount of detail in the session log depends on the tracing level that you set.
Normal: the PowerCenter Server logs initialization and status information, errors encountered, and rows skipped due to transformation row errors. It summarizes session results, but not at the level of individual rows.

Verbose Initialization: in addition to normal tracing, the session log contains the names of the index and data files used, and detailed transformation statistics.

Verbose Data: in addition to verbose initialization, the PowerCenter Server logs each row that passes through the mapping.

Terse: the PowerCenter Server logs initialization information as well as error messages and notification of rejected data.
 
0
Sekhar
    Question
What are the types of metadata that are stored in the repository?
Answer
The following are the types of metadata stored in the repository:

Database connections
Global objects
Mappings
Mapplets
Multidimensional metadata
Reusable transformations
Sessions and batches
Short cuts
Source definitions
Target definitions
Transformations
 
0
Mahesh
    Question
Define informatica repository?
(Asked @ Wipro)
Answer
Hi,

the Informatica repository is a central metadata storage place which contains all the information necessary to build a data warehouse or a data mart:

metadata like source definitions, target definitions, business rules, sessions, mappings, workflows, mapplets, worklets, database connections, user information, shortcuts etc.
 
0
J.david Sukeerthi Kumar
    Question
In a sequential batch can you run the session if previous
session fails?
Answer
Yes, by setting the option for the session to always run.
 
0
Seshagiri K
    Answer
Yes; this can be obtained by setting the session to always run, even on previous session failure.
 
0
Satya
    Question
What is the command used to run a batch?
Answer
pmcmd
 
0
Guest
    Question
When does the Informatica server mark a batch as failed?
Answer
A batch fails when the sessions in the workflow are checked with the property "fail parent if this task fails" and any session in the sequential batch fails.
 
2
Krishna
    Question
Can you copy the session to a different folder or
repository?
Answer
We can, but we have to copy the mapping first and then copy the session.
 
0
Praveen Kumar
    Answer
Folder: yes.
Repository: no.

If you want to move the session to another repository, you have to export the session and then import it into the target repository, assuming the mapping has already been moved.
 
0
Kalyan
    Question
In which circumstances does the Informatica server create reject files?
Answer
The Informatica server rejects records mainly for 2 reasons:

1) when database constraints are violated

2) when the data overflows
 
0
Dr.jornalist
    Answer
Also when the update strategy flags rows with DD_REJECT.
 
0
Bsgsr
    Question
What are the basic needs to join two sources in a source
qualifier?
Rank Answer Posted By     Question Submitted By :: Guest © ALL Interview
.com Answer
1) Both sources should be from the same database.
2) They should have a common field which can be used for the join.
 
0
Sameer
    Answer
1) Both tables should have a PK-FK relationship.
2) The join columns should have matching datatypes.
 
0
Ravi
    Answer
1. The two sources must be from the same database.
2. There should be a primary-foreign key relationship between them.
 
0
Krishna
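A hedged sketch of the user-defined join a Source Qualifier would issue in that case (the ORDERS and CUSTOMERS tables are hypothetical, both in the same database):

SELECT orders.order_id, orders.amount, customers.cust_name
FROM orders, customers
WHERE orders.cust_id = customers.cust_id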
    Question
What is the aggregate cache in the Aggregator transformation?
Answer
The cache is used to increase the performance of the mapping. There are 2 types of cache: data and index cache.
 
0
Ram Kumar
    Answer
It is the place where all the rows entering the Aggregator transformation are held until the aggregate calculations like SUM and AVG are made. All the group information is stored in the index cache, and the row data in the data cache.
 
3
Ravi Sadanand
    Answer
The aggregate cache is used to aggregate the data; for that we use the data cache.
 
0
Saradhi
    Answer
In the aggregate cache there are 2 types of cache:
1) index
2) data
The index cache is used for the group-by keys; the data cache is used for storing the row data.
 
0
Saradhi
    Question
What are reusable transformations?
Answer
A transformation is a repository object which receives, modifies and passes data. A reusable transformation is a transformation that can be used any number of times in mappings or mapplets.
We can create a reusable transformation in the Transformation Developer, or create a normal transformation in the Mapping Designer and promote it to reusable by changing its properties.
 
0
Krishna
    Question
what is a time dimension? give an example?
Answer
An example of a time dimension is the calendar year.
 
0
Shaheem Qhizer
    Answer
Time dimension: generally, to generate dates as per the requirement we use a date dimension.

If the loading of data into the fact table is based on time/date, then we use the values of the date dimension to populate the fact.

We take the last date on which the fact was populated, then check for the existence of the dates for the data to be populated; if they do not exist, we generate them through a stored procedure or as per the requirement.

E.g. daily, weekly, financial year, calendar year, business year etc.
 
0
Sri Ram
    Question
Discuss the advantages & Disadvantages of star & snowflake
schema?
Answer
Star schema: it is a fully denormalized schema. The diagram of the fact table with its dimension tables resembles a star; that's why it is called a star schema. All the dimensions will be in 2nd normal form.
Snowflake schema: in this, all dimensions are in normalized form; that's why it is also called a normalized star schema. A separate table is created for each attribute hierarchy. As there are more joins, performance is obviously degraded.
 
0
Ravi Sadanand
    Answer
In a star schema the dimension tables are denormalized, and there is a primary-foreign key relationship between the fact and dimension tables. For better performance we use a star schema compared with a snowflake schema, where the dimension tables are normalized; for every dimension table there are further lookup tables, and we have to dig from top to bottom in the snowflake schema.
 
0
Krishna
    Answer
Star Schema: A star schema is a specialized design that
consists of multiple dimension tables, which describe
aspects of a business, and one fact table, which contains
the facts about the business.For example, if you have a
mail-order business selling books, some dimension tables are
customers, books, catalogs, and fiscal years. The fact table
contains information about the books that are ordered from
each catalog by each customer during the fiscal year.
 
0
Susanta Karmakar
    Question
What is the difference between Normal load and Bulk load?
Answer
If you enable bulk loading, the PowerCenter Server bypasses the database log, which improves session performance. The disadvantage is that the target database cannot perform a rollback, as there is no database log.

In normal load the database log is not bypassed, and therefore the target database can recover from an incomplete session. Session performance is not as high as in the case of bulk load.
 
0
Guest
    Answer
With normal load, recovery is possible, whereas in bulk mode it is not, because there is no database log with which to perform a rollback.
 
0
Harithareddy
    Question
what is a junk dimension ?
(Asked @ TCS)
Answer
A junk dimension is a convenient grouping of flags and indicators. It's helpful, but not absolutely required, if there's a positive correlation among the values. The benefits of a junk dimension include:
- providing a recognizable, user-intuitive location for related codes, indicators and their descriptors in a dimensional framework;
- cleaning up a cluttered design that already has too many dimensions (there might be five or more indicators that could be collapsed into a single 4-byte integer surrogate key in the fact table);
- providing a smaller, quicker point of entry for queries compared to constraining directly on these attributes in the fact table. If your database supports bitmapped indices this potential benefit may be irrelevant, although the others are still valid.
 
2
Aparna
    Answer
Junk data is nothing but data containing unwanted characters.

Ex:
praveen kumar (original)

prav$%$^&een ku$#^#mar (everything other than the name is junk data)
 
0
Praveen Kumar
    Answer
Sorry for the last answer; I described junk data there.

Sorry for that.
 
0
Praveen Kumar
    Answer
A junk dimension is a convenient grouping of flags and attributes, used to get them out of the fact table into a useful dimension framework.
 
0
Srinu
    Answer
A "junk" dimension is a collection of random transactional
codes, flags and/or text attributes that are unrelated to
any particular dimension.

The junk dimension is simply a structure that provides a


convenient place to store the junk attributes. A good
example would be a trade fact in a company that brokers
equity trades.

The fact would contain several metrics (principal amount,


net amount, price per share, commission, margin amount,
etc.) and would be related to several dimensions such as
account, date, rep, office, exchange, etc. This fact would
also contain several codes and flags that were related to
the transaction rather than any of the dimensions ... such
as origin code (that indicates whether the trade was
initiated with a phone call or via the Web), a reinvest
flag (that indicates whether or not this trade as was the
result of the reinvestment of a dividend payout) and a
comment field for storing special instructions from the
customer.

These three attributes would normally be


removed from the fact table and stored in a junk
dimension ... perhaps called the trade dimension. In this
way, the number of indexes on the fact table would be
reduced, and performance (not to mention ease of use) would
be enhanced. Hope this helps.

What is the procedure to load the fact table.Give in


detail?
Rank Answer Posted By     Question Submitted By :: Guest © ALL Interview
.com Answer
A fact table basically consists of the foreign keys taken from the primary keys of all the dimensions, plus the facts.
So take all the dimensions as unconnected lookups to fetch the foreign keys, and at the same time use the available sources to build all the required facts with the necessary transformations.
 
0
Dr.journalist
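A hedged SQL sketch of the same idea, resolving dimension surrogate keys while loading a fact (all table and column names are hypothetical):

INSERT INTO sales_fact (cust_sk, prod_sk, sale_amount)
SELECT c.cust_sk, p.prod_sk, s.sale_amount
FROM sales_staging s
JOIN customer_dim c ON c.cust_id = s.cust_id
JOIN product_dim p ON p.prod_id = s.prod_id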
    Question
What is the use of incremental aggregation?
Answer
In incremental aggregation only the captured changes are considered for the aggregate calculation; i.e. when new data is inserted for calculation, the Informatica PowerCenter server does not start the processing from the beginning.
 
1
Ranjan
    Answer
The first time you run an incremental aggregation session, the PowerCenter Server processes the entire source. At the end of the session, the PowerCenter Server stores the aggregate data from that session run in two files, the index file and the data file, created in a local directory.

Each subsequent time you run the session with incremental aggregation, you use only the incremental source changes in the session.
 
0
Lokendra
    Question
Why are dimension tables denormalized in nature?
Answer
For fast retrieval (to perform SELECT operations quickly).
 
0
Siddu
    Question
What is the difference between Power Centre and Power Mart?
(Asked @ TCS)
Answer
Both PowerCenter and PowerMart can perform ETL tasks, but the difference between them is:

in PowerCenter mapping is done;

in PowerMart both mapping and mapplets are done.
 
0
Siva Prasad
    Answer
Informatica PowerCenter has all options, including distributed metadata, the ability to organize repositories into a data mart domain and to share metadata across repositories; partitioning is available.

Informatica PowerMart is a limited license (all features except distributed metadata and multiple registered servers). No partitioning is available.
 
0
Mubbasir
    Answer
All native source connectivity to systems such as mainframe legacy, ERP and EAI is only available with PowerCenter, while PowerMart can source from relational and flat-file sources only.

You will also find design features to handle very large volumes only in PowerCenter.

On the repository side, only PowerCenter can support the global repository and networked repositories. All other ETL capabilities and transformation features are the same.
 
0
Pradeep
    Answer
Power Center                        Power Mart
------------                        ----------
1. Local and global repositories    Local repository only
2. Session partitioning available   Session partitioning not available
3. 24 transformations available     16 transformations available
 
0
Giri
    Question
what are the enhancements made to Informatica 7.1.1 version
when compared to 6.2.2 version?
Answer
Lookup on flat files in Informatica 7.1; no such provision in 6.x.
Union transformation in 7.1, not in 6.x.
Transaction Control transformation not in 6.x.
Flat-file repository in 7.1.

In addition to the previous points:

1. pmcmd: with this option we can specify a parameter file at the command prompt. Why is this needed? Suppose we are using more than one parameter file; in that case we would have to change the parameter file in the session properties. With this option you can run from the command prompt, changing the parameter file without going to the session properties.

2. Test load: we can test the load by taking only a specified number of records.
 
0
Shashiall
    Question
what is the exact meaning of domain?
Answer
Hi, the domain concept arises in Informatica version 8. We have a global integration service in which we can have domains; each domain can be configured for a different environment and to have different nodes, and we can assign a particular workflow to run on different nodes. A particular node can be customized for better performance; moreover, if a particular workflow fails due to connectivity problems, it is automatically reassigned to some other node which is available.
 
0
Deshmukh Sachin
    Answer
The PowerCenter domain is the fundamental administrative
unit in PowerCenter. The domain supports the administration
of the distributed services. A domain is a collection of
nodes and services that you can group in folders based on
administration ownership.
 
0
Sushil Ramteke
    Question
How do you handle decimal places while importing a flatfile
into informatica?
Answer
1) Use TO_INTEGER(column_name) in an Expression transformation (or TO_DECIMAL(column_name, scale) if the decimal places must be kept); it works.
 
0
Maruthi
    Question
What is IQD file?
Answer
It is an Impromptu Query Definition file. It is generated in Impromptu Administrator in the Cognos EP 7 series: you create a report, save it as an IQD, and then use it in PowerPlay Transformer.
 
0
Nmlabsonline.net
    Question
What is data merging,data cleansing,sampling?
Answer
Data merging: this is the process of combining data from multiple sources.
Data cleansing: removing the data inconsistencies and inaccuracies.

Data sampling: arbitrarily taking data from a group of records for sampling purposes.
 
1
Dr.jornalist
    Answer
Data merging: it is the process of integrating sources with similar structure and similar types.
Table A:
ENO ENAME
100 REKHA   VARCHAR2(10)
Table B:
ENO ENAME
101 SAHAN   VARCHAR
So you can merge them to the common datatype, string, in the above case.
Data cleansing:
it is the process of identifying the inconsistencies and inaccuracies.

Data sampling:
arbitrarily choosing records from a group of records for a test.
 
0
Rekha
    Answer
Data merging: it is a process of combining data with non-similar or similar structures into the target warehouse system. To combine non-similar structures we can use the joins concept; for similar structures we can use the union concept.

Data cleansing: it is a process of converting the non-uniform data formats of the source systems into the uniform data format of the target warehouse system.

I don't know the definition of data sampling; can anyone please give the answer?
 
0
Prasuna
    Question
How to import oracle sequence into Informatica?
(Asked @ Satyam)
Answer
With the help of stored procedures, as well as a SQL override.
 
0
Dr.jornalist
    Answer
Yes, the first answer is right, but we can also bring an Oracle sequence into Informatica through an unconnected lookup.
 
0
Abhishek
    Answer
An Oracle function can also serve the purpose.
 
0
Bsgsr
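A hedged sketch of the SQL-override approach (the EMP_SEQ sequence and EMP table are hypothetical Oracle objects):

-- the sequence's next value is fetched for each source row in the Source Qualifier
SELECT emp_seq.NEXTVAL, emp.ename, emp.sal FROM emp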
    Question
What is a worklet, what is the use of a worklet, and in which situations can we use it?
Answer
Worklet: a group of tasks taken together to accomplish a task is known as a worklet.
Use of a worklet: you can bind many tasks in one place so that they can easily be identified, and so that they can serve a specific purpose.
 
0
Dr.jornalist
    Answer
Worklets are objects that represent a set of workflow tasks, allowing you to reuse a set of workflow logic in several workflows.
 
0
Vinutha
    Question
What is the difference between a dimension table and a fact table?
Answer
A dimension table stores the textual descriptions of the data, whereas a fact table stores the numerical measures.
 
0
Guest
    Question
what is polling?
Answer
Polling gives updated information about the session in the Workflow Monitor.
 
    Question
what happens if you try to create a shortcut to a non-
shared folder?
Answer
If you try to create a shortcut to a nonshared
folder, the Designer creates a copy of the object instead.
 
    Question
If you want to create indexes after the load process which
transformation you choose?
Answer
Use the post-session SQL to create indexes on the target table.
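For example (the index and table names are illustrative), the Pre SQL and Post SQL properties of the target could contain:

    -- Pre SQL: drop the index so the load runs unindexed
    DROP INDEX idx_tgt_cust_id
    -- Post SQL: recreate it once the load completes
    CREATE INDEX idx_tgt_cust_id ON tgt_customer (cust_id)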
 
    Answer
A Stored Procedure transformation can be used to create and drop indexes before and after loading into the target.

    Question
Where is the cache stored in Informatica?
Answer
The Informatica server allocates the cache.
 
    Answer
Cache is stored in the cache directory. For the Aggregator, Joiner and Lookup transformations the cache values are stored in the cache directory; for the Sorter transformation the cache values are stored in the temp directory.
 
    Answer
For a lookup, by default the cache is stored in $PMCACHEDIR in the Informatica server directory. You can also give your own settings for where to store the cache files.
 
    Answer
Cache is stored in a cache directory (you can specify wherever you want on your UNIX box).
 
    Question
What is a surrogate key? In your project, in which situation have you used it? Explain with an example.
Answer
Surrogate key: a surrogate key is a system-generated primary key, used in Informatica for maintaining the history of values.
 
    Answer
Sometimes the primary key constraint may be violated when loading changed records; at that time we go for a surrogate key.
    Answer
The surrogate key is a primary key used to avoid the problem of the critical column. Let me clarify with an example.

Table A:
Cust_no(PK)  Cust_name  Loc
100          X          Mumbai

This is what your OLAP DWH holds about customer X. But he has moved from Mumbai to Delhi, and you have to update this in your OLAP DWH. If you try to insert the changed record into Table A you get "Error: Primary key constraint violated" (because the PK is the same for both records); Loc is the critical column. To avoid this, we generate a new primary key, either user-defined or system-generated, for each version of the record.
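A minimal sketch of such a dimension table (the names are illustrative):

    CREATE TABLE dim_customer (
      cust_key  NUMBER PRIMARY KEY, -- surrogate key, e.g. fed by a sequence
      cust_no   NUMBER,             -- natural key from the source
      cust_name VARCHAR2(30),
      loc       VARCHAR2(30)
    );

Rows (1, 100, 'X', 'Mumbai') and (2, 100, 'X', 'Delhi') can now coexist, preserving the history of the critical column.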
 
    Question
What is partitioning? Where can we use partitions?
Answer
The partitioning option increases PowerCenter's performance through parallel data processing. It provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments.
Partitions are used to optimize session performance; we can select the partitions in the session properties.
Types: pass-through (the default), key range, round-robin and hash partitioning.
 
    Question
What are the different types of transformations available in Informatica, and which are the most used?
Answer
Lookup, Expression, Rank, Sorter, Aggregator, Update Strategy, Filter, Router, Source Qualifier, Stored Procedure, Sequence Generator and Joiner.

The most used transformations are: Lookup, Expression, Aggregator, Router, Update Strategy, Joiner and Sequence Generator.
 
    Answer
Lookup, Expression, Rank, Sorter, Aggregator, Update Strategy, Filter, Router, Source Qualifier, Stored Procedure, Sequence Generator, Joiner, Normalizer and XML Source Qualifier.
 
    Answer
Lookup, Expression, Update Strategy, Joiner, Router, Filter, Sequence Generator, Aggregator, Normalizer, Stored Procedure, Web Services transformation, XML Generator, XML Parser and Source Qualifier.

The most used transformations are Lookup and Expression.
 
    Question
How to recover sessions in concurrent batches?
Answer
By removing the loaded data from the target table and running the session again.
 
    Answer
When we use concurrent processing and one session fails, all the other sessions run normally and load their data; the failed session is treated as a standalone session, and we perform recovery on it alone. There are three ways to recover a failed session:

1. Start the session again if the Informatica server has not performed a commit.

2. Otherwise, perform recovery on the session.

3. If the session cannot be recovered, truncate the data from the target table and run the session again.
 
    Question
What is gap analysis?
Answer
It is the difference between what is needed and what is available.
 
    Question
What is the difference between COM and DCOM?
Answer
COM is a technology developed by Microsoft based on object-oriented design. A COM component exposes its interfaces at an interface pointer, where a client accesses the component's interface.

DCOM is the protocol that enables software components on different machines to communicate with each other over the network.
 
    Question
How to view and Generate Metadata Reports?
Answer
By using the Metadata Reporter we can generate reports against the Informatica repository. The Metadata Reporter is a web-based application tool.
 
    Question
How to call stored Procedure from Workflow monitor in
Informatica 7.1 version?
Answer
If the stored procedure is used to do any operations on the database tables (say dropping the indexes on the target table, renaming it, or truncating it), then call it from the Pre SQL and Post SQL options in the session properties of the target.
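As an illustration (the procedure names are hypothetical Oracle procedures), the target's session properties could then contain:

    -- Pre SQL: call a procedure that drops the target indexes
    CALL drop_tgt_indexes()
    -- Post SQL: call a procedure that recreates them
    CALL create_tgt_indexes()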
 
    Question
What is critical mapping?
Answer
A mapping implementing SCD Type 2 is an example of a critical mapping.
 
    Question
How to improve the performance of Aggregate
transformation?
Answer
By giving sorted input, and do not forget to check the Sorted Input option in the transformation properties.
 
    Answer
Place the Aggregator transformation right after the Source Qualifier; in my view, using a Filter or Sorter before the Aggregator will reduce the Aggregator's performance.
 
    Answer
1. Use sorted data and enable the 'Sorted Input' option.
2. Use incremental aggregation.
3. Group by simple columns, such as numeric columns.
4. Use a Filter transformation before the Aggregator to avoid unnecessary aggregation of unwanted rows.
 
    Question
Why use stored procedures in an ETL application?
Answer
I used stored procedures for time conversions, dropping and creating indexes, loading the time dimension, checking the status of the database, and checking the space available.
 
    Answer
In our project we used a stored procedure to implement CDC (change data capture) for the incremental data load.

Generally, a stored procedure is used when you have to implement complex business logic.
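A minimal sketch of such a CDC extraction, assuming the source rows carry a last_updated timestamp and an etl_control table records the previous run time (names are illustrative):

    SELECT o.*
    FROM   src_orders o
    WHERE  o.last_updated > (SELECT last_run_ts
                             FROM   etl_control
                             WHERE  job_name = 'ORDERS_LOAD')

After a successful load, the procedure would update etl_control.last_run_ts to the current load time.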
 
    Question
How do you create a single lookup transformation using multiple tables?
Answer
Lookup transformation: based upon one or more keys, the data is retrieved from one or more tables.

Create a single lookup transformation by joining the multiple tables, having connected the keys defined in the lookup transformation.
 
    Answer
We have the Lookup Override query in the Lookup transformation. Use the SQL query to join the tables you look up on. This is similar to what you do in the Source Qualifier.
 
    Answer
A plain lookup is defined against only one underlying table; see the corrected answer below for joining multiple tables.
 
    Answer
Join the tables in the database itself (for example in a view) and do the lookup on that, or else override the lookup SQL. I believe this would work.
 
    Answer
Correcting the earlier answer above: you can actually join multiple tables in the lookup.

Here are the steps.
1. Create a Lookup transformation.
2. Click the "Skip" button on the right.
3. A green Lookup transformation will appear without any ports.
4. Put your query in the SQL override.
5. Make sure you specify the columns selected in your query as ports IN THE SAME ORDER.
6. Your lookup is ready.

NOTE: when you specify the query, make sure you give a column alias for each column, or else you will get an invalid-lookup error at run time.
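For example (the tables and columns are illustrative), the SQL override for step 4 could be:

    SELECT c.cust_id     AS CUST_ID,
           c.cust_name   AS CUST_NAME,
           r.region_name AS REGION_NAME
    FROM   customers c,
           regions   r
    WHERE  c.region_id = r.region_id

Each selected column carries an alias matching the lookup port, as the note above requires.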
 
    Question
With an update strategy, which gives more performance, a target table or a flat file? Why?
Answer
A flat file gives better performance, because retrieval of data from a flat file is faster than from a relational database.
 
    Question
How to load time dimension?
Answer
Run a stored procedure to load the time dimension. It is not loaded frequently, only once or twice a year.
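A minimal sketch in Oracle SQL (table and column names are illustrative) of what such a procedure could run to generate one year of rows:

    INSERT INTO dim_time (date_key, cal_date, cal_year, cal_month, cal_day)
    SELECT TO_NUMBER(TO_CHAR(d, 'YYYYMMDD')), d,
           EXTRACT(YEAR FROM d), EXTRACT(MONTH FROM d), EXTRACT(DAY FROM d)
    FROM  (SELECT DATE '2009-01-01' + LEVEL - 1 AS d
           FROM   dual
           CONNECT BY LEVEL <= 365)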
    Question
What is the flow?
Answer
Data moving from source to target, following some business logic implemented as transformations, is called the flow.
 
    Question
what is the architecture of any Data warehousing project?
Answer
Basically there are two types of architectures:
1. Top-down (dependent) and 2. Bottom-up (independent).

In the top-down approach, the data warehouse (DW) is built first and the data marts (DM) are derived from it.

In the bottom-up approach, the data marts are built first and the data warehouse is built from them.
 
    Answer
step-01------>source to staging
step-02------>staging to dimension
step-03------>dimension to fact

This is the general procedure.


 
    Answer
Project planning -> requirements gathering -> product selection and installation -> dimensional modeling -> physical modeling -> deployment -> maintenance.

In easy terms, dimensional modeling is:
1. Select the business process.
2. Identify the grain.
3. Design the dimension tables.
4. Design the fact table.
Once these 4 steps are over, it moves on to physical modeling, where you apply the ETL process and performance techniques.
 
    Answer
I simply put it this way: source to staging, then staging to the DWH.
 
    Question
What is a lookup?
Answer
I think a lookup is a transformation mostly used for slowly changing dimensions (SCD). With the help of a Lookup transformation we can join dimension tables; these may even be heterogeneous tables.
 
    Answer
Look up data in a relational table, view, or synonym. Import a lookup definition from any relational database to which both the Informatica client and server can connect. You can use multiple Lookup transformations in a mapping.
 
    Answer
Use a lookup to look up data in a table, view, or synonym, from a relational source or flat files. You can use multiple Lookup transformations in a mapping. A lookup is used:
- to get a related value
- to perform a calculation
- mainly in slowly changing dimensions
 
    Answer
A Lookup transformation is used to look up data in either the source or the target whenever a required field is not present in the mapping. We can use two or more tables as the lookup by using a join condition.
 
    Answer
Lookup is a passive transformation, used to look up data in relational tables, flat files, views or synonyms. A lookup is used to:
1. get a related value
2. perform calculations
3. update slowly changing dimensions
Types of lookup: connected, unconnected, and cached/uncached.
 
    Answer
Lookup can be connected or unconnected and is a passive transformation. It is used for looking up related values and for deriving flag values, e.g.:

0 - No change
1 - Insert
2 - Update
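For instance, a hedged sketch of deriving such a flag in an Expression transformation with an unconnected lookup (the lookup and port names are assumptions):

    -- variable port: fetch the existing name for this key, NULL if absent
    v_old_name = :LKP.LKP_DIM_CUSTOMER(in_cust_id)
    -- output port: 0 = no change, 1 = insert, 2 = update
    o_flag = IIF(ISNULL(v_old_name), 1,
                 IIF(v_old_name != in_cust_name, 2, 0))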
