Cognos Interview Questions Given by SatyaNarayan
1. Data Module
2. Data Set
3. Difference between dataset and data module.
Data modules
Data modules are source objects that contain data from data servers, uploaded
files, or other data modules, and are saved in My content or Team content.
Data Modules: Data modules are a feature introduced in Cognos 11 that offers a
web-based data modeling and blending experience. Data can be pulled from a
large number of databases as well as Excel files and Cognos packages. Key
features include automatic relative time (YTD, MTD, etc.), table split/merge,
custom grouping, data cleansing, and multi-fact and even multi-database query
aggregation.
Data sets
Data sets contain extracted data from a package or a data module, and are
saved in My content or Team content.
Simply put, a Data Set is a data source type in Cognos Analytics
that contains data extracted from one or more sources and
stored within the Cognos system itself as an Apache Parquet file.
The Parquet file is then loaded into application server memory at
run-time on an as-needed basis. This (usually) greatly enhances
interactive performance for end users while reducing load on
source databases. When combined with Data Modules, Data
Sets offer incredible out-of-the-box capabilities like
automatic relative time, easy data prep and custom table
creation.
4. In a data module I have two CSV files that I want to join with a package. How
can this be achieved?
The biggest difference between a PYI/PYJ file and an MDL file is that the binary
PYI file stores passwords to the data sources (usually database connections),
whereas the MDL file contains only a user ID, so the password needs to be
provided every time the cube is refreshed.
Also, keep in mind that a PYI model may become corrupt at some point. If you
only have the PYI, you will lose the complete model. It is best to save your PYI
regularly as an MDL to keep as a backup; an MDL file will not become corrupt as
quickly as a PYI.
14. What is the maximum size of a cube? If it exceeds the maximum limit, how will you do partitioning?
15. What are the possible sources for a Transformer model?
16. How many levels does Transformer support? What is an alternate hierarchy? How will
you define drill-through in a cube?
17. How do you apply object-level security in a report? For example, a user from a particular
region can see only a list, while another user can see only a crosstab, when they log in.
Ans: Take the user identity function into a CASE expression. When you import the
table, enable the object-level security option and write a CASE expression there:
if the user identity function returns USA then show 1, and if it returns Australia then
show 2, in a new column. Publish the package. Then go to Report Studio and use a
render variable: if the value is 1, render the list; if it is 2, render the crosstab.
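A minimal sketch of such a calculated security column, assuming the session user
name is read with the #sq($account.defaultName)# macro; the exact identity function
and the literal user names below are assumptions and will differ per environment:

    // Calculated column published with the package; its value drives the render variable.
    CASE
        WHEN #sq($account.defaultName)# = 'USA User'       THEN 1
        WHEN #sq($account.defaultName)# = 'Australia User' THEN 2
    END

In the report, a render variable based on this column then shows the list object when
the value is 1 and the crosstab object when the value is 2.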
CQE (sometimes called "CQM" or "Compatible Query Mode") is the older query
engine that has its roots in Cognos Series 7 products and was the only query engine
in Cognos ReportNet and Cognos 8. It is 32-bit and generally relies on having a full
(thick) database client installed to be able to run queries.
DQM (Dynamic Query Mode) was introduced in Cognos 10. It is a 64-bit query engine and is
generally faster and more efficient. DQM also has more accessible methods of
analyzing your queries' structures and performance using a tool called "Dynamic
Query Analyzer". DQM also uses JDBC drivers to connect to databases.
- The 64-bit build of Cognos can run in either 32-bit or 64-bit mode.
- 32-bit builds of Cognos (or a 64-bit server running in 32-bit mode) can process both
CQM and DQM queries, using a 32-bit variant of DQM that may be a bit slower than the
64-bit version of DQM.
- I recommend updating all models that use CQM by changing them to use DQM, as
CQM is likely going to be deprecated by IBM Cognos in the next couple of years. Only
use CQM if you are using a datasource that cannot be queried in DQM at this time.
Junk Dimension
In data warehouse design, frequently we run into a situation where there are
yes/no indicator fields in the source system. Through business analysis, we
know it is necessary to keep such information in the fact table. However, if we
keep all those indicator fields in the fact table, not only do we need to build
many small dimension tables, but the amount of information stored in the fact
table also increases tremendously, leading to possible performance and
management issues.
Let's look at an example. Assume we have a fact table with seven dimensions,
three of which, TXN_CODE, COUPON_IND, and PREPAY_IND, are indicator fields. In
this existing format, each one of them is its own dimension. Using the junk
dimension principle, we can combine them into a single junk dimension referenced
by one key in the fact table.
Note that the number of dimensions in the fact table now goes from 7 to 5.
The junk dimension table itself holds every combination of the indicator values:
In this case, we have 3 possible values for the TXN_CODE field, 2 possible
values for the COUPON_IND field, and 2 possible values for the
PREPAY_IND field. This results in a total of 3 x 2 x 2 = 12 rows for the junk
dimension table.
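As a sketch in SQL, the junk dimension table described above could be built by cross
joining all possible indicator values. The specific TXN_CODE values used here are
placeholders, since the text only states that there are three possible codes:

    -- Junk dimension holding every combination of the three indicator fields.
    CREATE TABLE JUNK_DIM (
        JUNK_KEY    INTEGER PRIMARY KEY,
        TXN_CODE    VARCHAR(10),   -- 3 possible codes (placeholder values below)
        COUPON_IND  CHAR(1),       -- Y/N
        PREPAY_IND  CHAR(1)        -- Y/N
    );

    -- Cross join of all possible values: 3 x 2 x 2 = 12 rows, as noted above.
    INSERT INTO JUNK_DIM (JUNK_KEY, TXN_CODE, COUPON_IND, PREPAY_IND)
    SELECT ROW_NUMBER() OVER (ORDER BY t.code, c.ind, p.ind),
           t.code, c.ind, p.ind
    FROM  (VALUES ('CODE_A'), ('CODE_B'), ('CODE_C')) AS t(code)
    CROSS JOIN (VALUES ('Y'), ('N')) AS c(ind)
    CROSS JOIN (VALUES ('Y'), ('N')) AS p(ind);

The fact table then carries a single JUNK_KEY instead of the three separate
indicator dimensions.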
Slowly Changing Dimension
Christina is a customer of ABC Inc. She first lived in Chicago, Illinois, so the
original entry in the customer lookup table records her state as Illinois. At a
later date, in January 2003, she moved to Los Angeles, California. How should
ABC Inc. now modify its customer table to reflect this change? This is the
"Slowly Changing Dimension" problem.
There are in general three ways to solve this type of problem, and they are
categorized as follows:
Type 1: The new record replaces the original record. No trace of the old
record exists.
Type 2: A new record is added into the customer dimension table. Therefore,
the customer is treated essentially as two people.
Type 3: The original record is modified to hold both the old and the new value
in separate columns, together with an effective date of the change.
In Type 1, after Christina moved from Illinois to California, the new information
simply replaces the original record, and the table now shows only California as
her state.
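As a sketch in SQL, the Type 1 change is a simple in-place update; the table name,
column name, and surrogate key value are assumed for illustration:

    -- Type 1: overwrite the state in place; no trace of Illinois remains.
    UPDATE CUSTOMER_DIM
    SET    STATE = 'California'
    WHERE  CUSTOMER_KEY = 1001;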
Advantages:
- This is the easiest way to handle the Slowly Changing Dimension problem,
since there is no need to keep track of the old information.
Disadvantages:
- All history is lost. Since the old value is overwritten, there is no way to
tell that Christina ever lived in Illinois.
Usage:
In Type 2, after Christina moved from Illinois to California, we add the new
information as a new row in the table, while the old Illinois row is kept as well.
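A sketch of the Type 2 change in SQL; the CURRENT_FLAG column and the new surrogate
key value are assumptions added for illustration:

    -- Type 2: expire the old row and insert a new row with a new surrogate key.
    UPDATE CUSTOMER_DIM
    SET    CURRENT_FLAG = 'N'
    WHERE  CUSTOMER_KEY = 1001;

    INSERT INTO CUSTOMER_DIM (CUSTOMER_KEY, NAME, STATE, CURRENT_FLAG)
    VALUES (1005, 'Christina', 'California', 'Y');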
Advantages:
- This allows us to keep all historical information accurately, since every
change produces its own row.
Disadvantages:
- This will cause the size of the table to grow fast. In cases where the number
of rows for the table is very high to start with, storage and performance can
become a concern.
Usage:
In Type 3, the customer dimension table carries both the old and the new value in
separate columns: Customer Key, Name, Original State, Current State, and Effective Date.
After Christina moved from Illinois to California, the original record gets
updated: the current state becomes California, the original state column keeps
Illinois, and the effective date of the change (assuming it is January 15, 2003)
is recorded.
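A sketch of the Type 3 change in SQL, matching the columns listed above; the table
name and key value are assumed for illustration:

    -- Type 3: keep the prior value in ORIGINAL_STATE and record the change date.
    UPDATE CUSTOMER_DIM
    SET    CURRENT_STATE  = 'California',
           EFFECTIVE_DATE = DATE '2003-01-15'
    WHERE  CUSTOMER_KEY = 1001;   -- ORIGINAL_STATE stays 'Illinois'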
Advantages:
- This does not increase the size of the table, since new information is
updated.
Disadvantages:
- Type 3 will not be able to keep all history where an attribute is changed more
than once. For example, if Christina later moves to Texas on December 15,
2003, the California information will be lost.
Usage:
Type 3 is rarely used in actual practice.
A Type 3 slowly changing dimension should only be used when it is necessary
for the data warehouse to track historical changes, and when such changes
will only occur a finite number of times.
13. We have two options, an FM model and a Transformer model. Which option would you suggest to the
user?
14. What is the purpose of a shortcut and an alias in an FM model?
2. How do you configure the PDF settings if the columns are breaking onto the next page?
3. What are Data Modules and Data Sets?
4. Is there a way to do external file mapping in Cognos 11? If yes, how?
5. Explain the different types of charts you have worked on.
6. Can Cognos Analytics 11 use data from two packages in the same report?
Yes
Ans: There is an option called Update Object; using it, the data (metadata) in FM gets updated.