Hyperion Interview Questions

Hyperion Essbase Questions

How do you optimize outline?


Usually the outline is optimized using the hourglass design for dimension ordering, i.e.:
· Dimension with the Accounts tag
· Dimension with the Time tag
· Largest dense dimension
· Smallest dense dimension
· Smallest sparse dimension
· Largest sparse dimension

What are the ways to improve performance during data loads?

There are several ways to optimize data loads:

1. Grouping of Sparse member combinations

2. Making the data source as small as possible

3. Making source fields as small as possible

4. Positioning the data in the same order as the outline

5. Loading from Essbase Server

6. Managing parallel data load processing

What are the design considerations for calculation optimization?

You can configure a database to optimize calculation performance. The best configuration for the
site depends on the nature and size of the database.

· Block size (8 KB to 100 KB) and block density

· Order of Sparse Dimensions

· Incremental Data Loading

· Database Outlines with Two or More Flat Dimensions

· Formulas and Calculation Scripts

When does Fragmentation occur?

Fragmentation is likely to occur with the following:


· Read/write databases that users are constantly updating with data

· Databases that execute calculations around the clock

· Databases that frequently update and recalculate dense members

· Data loads that are poorly designed

· Databases that contain a significant number of Dynamic Calc and Store members

· Databases that use an isolation level of uncommitted access with commit block set to zero

How can you measure fragmentation?

You can measure fragmentation using the average clustering ratio or the average fragmentation quotient.

Using the average fragmentation quotient:

Any quotient above the high end of the following ranges indicates that reducing fragmentation may help performance:

Small (up to 200 MB): 60% or higher

Medium (up to 2 GB): 40% or higher

Large (greater than 2 GB): 30% or higher

Using the average clustering ratio:

The average clustering ratio database statistic indicates the fragmentation level of the data (.pag)
files. The maximum value, 1, indicates no fragmentation.

How can you prevent and remove fragmentation?

You can prevent and remove fragmentation:

· To prevent fragmentation, optimize data loads by sorting load records based upon sparse dimension members, which groups the records so that each data block is created and written only once.

· To remove fragmentation, perform an export of the database, delete all data in the database with CLEARDATA, and reload the export file.

· To remove fragmentation, force a dense restructure of the database (a MaxL sketch of both removal approaches follows).
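The sketch below is illustrative only, assuming a database named Sample.Basic and an export file name chosen for this example; verify the exact export, import, and reset clauses against the MaxL reference for your release.

/* export all data, clear the database, and reload the export file */
export database 'Sample'.'Basic' all data to data_file 'basic_export.txt';
alter database 'Sample'.'Basic' reset data;
import database 'Sample'.'Basic' data from data_file 'basic_export.txt' on error abort;

/* or force a dense restructure instead */
alter database 'Sample'.'Basic' force restructure;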


Why does database restructuring occur?

As your business changes, you change the Essbase database outline to capture new product lines,
provide information on new scenarios, reflect new time periods, etc. Some changes to a database
outline affect the data storage arrangement, forcing Essbase to restructure the database.

What are the types of database restructuring?

A database restructure is triggered in two ways:

• Implicit restructures
   – Dense restructure
   – Sparse restructure
   – Outline-only restructure
• Explicit restructures

What are the conditions affecting Database restructuring?

Intelligent Calculation, name changes, and formula changes affect database restructuring:

· If you use Intelligent Calculation in the database, all restructured blocks are marked as dirty
whenever data blocks are restructured. Marking the blocks as dirty forces the next default
Intelligent Calculation to be a full calculation.

· If you change a name or a formula, Essbase does not mark the affected blocks as dirty.
Therefore, you must use a method other than full calculation to recalculate the member or the
database.

What are the files used during Restructuring?

When Essbase restructures both the data blocks and the index, it uses the following files:

essxxxxx.pag Essbase data file

essxxxxx.ind Essbase index file

dbname.esm Essbase kernel file that contains control information used for db recovery

dbname.tct Transaction control table

dbname.ind Free fragment file for data and index free fragments

dbname.otl Outline file, which defines all metadata for the database and how the data is stored

What are the actions that improve performance for restructuring?


There are a number of things you can do to improve performance related to database
restructuring:

· If you change a dimension frequently, make it sparse.
· Use incremental restructuring to control when Essbase performs a required database restructuring.
· Select options when you save a modified outline that reduce the amount of restructuring required.

Which restructure operations are faster?

These types of restructure operations are listed from fastest to slowest:

· Outline only (no index or data files)
· Sparse (only index files)
· Dense (index files and data files) as a result of adding, deleting, or moving members and other operations
· Dense (index and data files) as a result of changing a dense dimension to sparse or changing a sparse dimension to dense

What are implicit restructures?

Essbase initiates an implicit restructure of the database files after an outline is changed using Outline Editor or Dimension Build. The type of restructure that is performed depends on the type of changes made to the outline.

What are explicit restructures?

When you manually initiate a database restructure, you perform an explicit restructure. An
explicit restructure forces a full restructure of the database. A full restructure comprises a dense
restructure plus removal of empty blocks.

What is Dense restructure?

If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in
the data files and creates new data files. When Essbase restructures the data blocks, it regenerates
the index automatically so that index entries point to the new data blocks.

Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense
restructure you need to recalculate the database.

What is Sparse restructure?

If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index
and creates new index files.

Restructuring the index is relatively fast; the amount of time required depends on the size of the
index.

What is Outline-only restructure?


If a change affects only the database outline, Essbase does not restructure the index or data files.

Member name changes, creation of aliases, and dynamic calculation formula changes are
examples of changes that affect only the database outline.

Explain the process of dense restructure?

To perform a dense restructure, Essbase does the following:

1. Creates temporary files that are copies of the .ind, .pag, .otl, .esm, and .tct files. Each temporary file substitutes either N or U for the last character of the file extension, so the temporary file names are essxxxxx.inn, essxxxxx.pan, dbname.otn, dbname.esn, and dbname.tcu.
2. Reads the blocks from the database files copied in step 1, restructures the blocks in memory, and then stores them in the new temporary files. This step takes the most time.
3. Removes the database files copied in step 1, including the .ind, .pag, .otl, .esm, and .tct files.
4. Renames the temporary files to the correct file names: .ind, .pag, .otl, .esm, and .tct.

Explain the process of sparse restructure?

When Essbase does a sparse restructure (restructures just the index), it uses the following files:
· essxxxxx.ind
· dbname.otl
· dbname.esm

What is data compression?

Essbase allows you to choose whether data blocks that are stored on disk are compressed, as well
as which compression scheme to use. When data compression is enabled, Essbase compresses
data blocks when it writes them out to disk. Essbase fully expands the compressed data blocks,
including empty cells, when the blocks are swapped into the data cache.

Generally, data compression optimizes storage use. You can check compression efficiency by
checking the compression ratio statistic.

What are types of data compression?

Essbase provides several options for data compression:

1. Bitmap compression, the default. Essbase stores only non-missing values and uses a bitmapping scheme. A bitmap uses one bit for each cell in the data block, whether the cell value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to store every non-missing cell. In most cases, bitmap compression conserves disk space more efficiently. However, much depends on the configuration of the data.

2. Run-length encoding (RLE). Essbase compresses repetitive, consecutive values --any value
that repeats three or more times consecutively, including zeros and #MISSING values. Each data
value that is repeated three or more times uses 8 bytes plus a 16 byte repetition factor.
3. zlib compression. Essbase builds a data dictionary based on the actual data being compressed.
This method is used in packages like PNG, Zip, and gzip. Generally, the more dense or
heterogeneous the data is, the better zlib will compress it in comparison to bitmap or RLE
compression.

4. Index Value Pair compression. Essbase applies this compression if the block density is less than 3%. Index Value Pair addresses compression on databases with larger block sizes, where the blocks are highly sparse. It is not user-selectable; Essbase applies it automatically when appropriate.

5. No compression. Essbase does not compress data blocks when they are written to disk.

When do you use RLE over Bitmap Compression?

Use RLE instead of bitmap compression when:

· The average block density is very low (less than 3%).

· The database has many consecutive, repeating values.

When do you disable compression?

You may want to disable data compression if blocks have very high density (90% or greater) and have few consecutive, repeating data values. Under these conditions, enabling compression consumes resources unnecessarily. If disk space and memory are not an issue for your application, don't use compression; it can become a drain on the processor.
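For reference, the compression scheme is chosen per database. A minimal MaxL sketch, assuming a database named Sample.Basic (pick one statement; Index Value Pair is applied automatically by Essbase and is not set explicitly -- verify the exact keywords against your release's MaxL reference):

alter database 'Sample'.'Basic' set compression bitmap;
alter database 'Sample'.'Basic' set compression rle;
alter database 'Sample'.'Basic' set compression zlib;
alter database 'Sample'.'Basic' set compression no_compression;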

What are data locks?

Essbase issues write (exclusive) locks for blocks that are created, updated, or deleted, and issues
read (shared) locks for blocks that should be accessed but not modified. By issuing the
appropriate locks, Essbase ensures that data changed by one operation cannot be corrupted by a
concurrent update.

What is a transaction?

When a database is in read/write mode, Essbase considers every update request to the server
(such as a data load, a calculation, or a statement in a calculation script) as a transaction.

What is transaction control file?

Essbase tracks information about transactions in a transaction control file (dbname.tct).

The transaction control file contains an entry for each transaction and tracks the current state of
each transaction (Active, Committed, or Aborted).

What is isolation level and what are the types of isolation levels?
Isolation levels determine how Essbase commits data to disk. Essbase offers two isolation levels
for transactions --committed access and uncommitted access (the default).

What is committed access?


When data is committed, it is taken from server memory and written to the database on disk.
Essbase automatically commits data to disk. There are no explicit commands that users perform
to commit data blocks.

Talk about committed and uncommitted access?

Committed:

· Data is committed at the end of the transaction; locks are retained until then.
· All blocks in question are locked.
· Pre-image access: if enabled, read-only access is allowed.
· Wait times: indefinite, immediate access (no wait), or a specified number of seconds.

Uncommitted:

· Data is committed only at synchronization points.
· Locks are acquired block by block.
· Commit row: number of rows of data loaded when a synchronization point occurs.
· Commit block: number of blocks modified when a synchronization point occurs.
· For rollback, set Commit Row = 0 and Commit Block = 0.
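A minimal MaxL sketch for switching a database between the two isolation levels, assuming a database named Sample.Basic (check the exact grammar against the MaxL reference for your release):

/* switch to committed access */
alter database 'Sample'.'Basic' enable committed_mode;

/* switch back to uncommitted access, the default */
alter database 'Sample'.'Basic' disable committed_mode;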

What are the advantages and disadvantages of using committed access?

You can optimize data integrity by using committed access.

Setting the isolation level to committed access may increase memory and time requirements for
database restructure.

Which transaction is always in committed mode?


The Spreadsheet Add-in Lock and Send operation and the Grid API are always in committed access mode.

What are the memory caches used by Essbase to coordinate memory usage?

Essbase uses five memory caches to coordinate memory usage:
1. Index cache
2. Data file cache
3. Data cache
4. Calculator cache
5. Dynamic calculator cache

What is Index cache?

The index cache is a buffer in memory that holds index pages. How many index pages are in
memory at one time depends upon the amount of memory allocated to the cache.

What is Data file cache?

The data file cache is a buffer in memory that holds compressed data files (.pag files). Essbase
allocates memory to the data file cache during data load, calculation, and retrieval operations, as
needed. The data file cache is used only when direct I/O is in effect.

What is Data cache?

The data cache is a buffer in memory that holds uncompressed data blocks. Essbase allocates
memory to the data cache during data load, calculation, and retrieval operations, as needed.

What is Calculator cache?

The calculator cache is a buffer in memory that Essbase uses to create and track data blocks
during calculation operations.

What is Dynamic calculator cache?

The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks
needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a
query).

What are the memory caches used by Essbase to coordinate memory usage?

Essbase uses five memory caches to coordinate memory usage. Recommended sizes:

Index cache: Minimum 1024 KB (1,048,576 bytes). Default: buffered I/O 1024 KB (1,048,576 bytes); direct I/O 10240 KB (10,485,760 bytes). Optimal: the combined size of all essn.ind files if possible; otherwise as large as possible. Do not set this cache size higher than the total index size, as no performance improvement results.

Data file cache: Minimum: direct I/O 10240 KB (10,485,760 bytes). Default: direct I/O 32768 KB (33,554,432 bytes). Optimal: the combined size of all essn.pag files if possible; otherwise as large as possible. This cache setting is not used if Essbase is set to use buffered I/O.

Data cache: Minimum 3072 KB (3,145,728 bytes). Default 3072 KB (3,145,728 bytes). Optimal: 0.125 * the value of the data file cache size.

Calculator cache: Minimum 4 bytes; maximum 200,000,000 bytes. Default 200,000 bytes. Optimal: the best size depends on the number and density of the sparse dimensions in the outline and on the amount of memory available on the system.
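These cache sizes can be set in Administration Services or with MaxL. A hedged sketch with example sizes only, assuming a database named Sample.Basic; the calculator cache is sized in a calculation script (SET CACHE HIGH | DEFAULT | LOW;) together with the CALCCACHE settings in essbase.cfg:

alter database 'Sample'.'Basic' set index_cache_size 10240kb;
alter database 'Sample'.'Basic' set data_file_cache_size 32768kb;
alter database 'Sample'.'Basic' set data_cache_size 3072kb;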

What is the structure of currency applications?

In a business application requiring currency conversion, the main database is divided into at
least two slices. One slice handles input of the local data, and another slice holds a copy of the
input data converted to a common currency.

Essbase holds the exchange rates required for currency conversion in a separate currency
database. The currency database outline, which is automatically generated by Essbase from the
main database after you assign the necessary tags, typically maps a given conversion ratio onto a
section of the main database. After the currency database is generated, it can be edited just like
any other Essbase database.

What are the three dimensions that should be present in the main database of a currency
application?

The main database outline can contain from 3 to n dimensions. At a minimum, the main database
must contain the following dimensions:

· A dimension tagged as time.

· A dimension tagged as accounts.

· A market-related dimension tagged as country.

What are the dimensions that should be present in the currency database of a currency
application?

A currency database always consists of the following three dimensions, with an optional fourth
dimension:

· A dimension tagged as time, which is typically the same as the dimension tagged as time in
the main database.
· A dimension tagged as country, which contains the names of currencies relevant to the
markets (or countries) defined in the main database.

· A dimension tagged as accounts, which enables the application of various rates to members
of the dimension tagged as accounts in the main database.

· An optional currency type dimension, which enables different scenarios for currency conversion.

What are the conversion methods supported by Essbase for currency applications?

Different currency applications have different conversion requirements. Essbase supports two
conversion methods:

· Overwriting local values with converted values.

· Keeping local and converted values.

Either of these two methods may require a currency conversion to be applied at report time.
Report time conversion enables analysis of various exchange rate scenarios without actually
storing data in the database.

What is the process to build a currency conversion application and perform conversions?

To build a currency conversion application and perform conversions, use the following process:

1. Create or open the main database outline.
2. Prepare the main database outline for currency conversion.
3. Generate the currency database outline.
4. Link the main and currency databases.
5. Convert currency values.
6. Track currency conversions.
7. If necessary, troubleshoot currency conversion.

What is CCONV? After you create a currency conversion application, you convert data values
from a local currency to a common, converted currency by using the CCONV command in
calculation scripts Ex: CCONV USD;CALC ALL;

Can we convert the converted currency back into its local currency? You can convert the
data values back to the original, local currencies by using the CCONV TOLOCALRATE
command.
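A hedged calculation script sketch of the round trip, building on the example above; the rate member USD and the currency type member "Act xchg" are placeholder names for illustration only:

/* convert stored local values to US dollars, then recalculate */
CCONV USD;
CALC ALL;

/* later, convert the values back to the original local currencies */
CCONV TOLOCALRATE "Act xchg";
CALC ALL;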

When you convert currencies using the CCONV command, are the resulting data blocks marked as dirty or clean? When you convert currencies using the CCONV command, the resulting data blocks are marked as dirty for the purposes of Intelligent Calculation. Thus, Essbase recalculates all converted blocks when you recalculate the database.
What is CCTRACK? You can use the CCTRACK setting in the essbase.cfg file to control
whether Essbase tracks the currency partitions that have been converted and the exchange rates
that have been used for the conversions. By default CCTRACK is turned on.

What are the reasons to turn off CCTRACK? For increased efficiency when converting
currency data between currency partitions, you may want to turn off CCTRACK. For example,
you load data for the current month into the local partition, use the DATACOPY command to
copy the entire currency partition that contains the updated data, and then run the conversion on
the currency partition.

How can you turn off CCTRACK? You can turn off CCTRACK in three ways:

· Use the SET CCTRACKCALC ON|OFF command in a calculation script to turn off CCTRACK temporarily.

· Use the CLEARCCTRACK calculation command to clear the internal exchange rate tables created by CCTRACK.

· Set CCTRACK to FALSE in the essbase.cfg file.
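For illustration, the three options look roughly like this; the first two lines go in a calculation script and the last line goes in essbase.cfg:

SET CCTRACKCALC OFF;    /* suspend CCTRACK for this calculation only */
CLEARCCTRACK;           /* clear the exchange rate tables built by CCTRACK */

CCTRACK FALSE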

What is LRO (Linked reporting objects)? An LRO is an artifact associated with a specific
data cell in an Essbase database. LROs can enhance data analysis capabilities by providing
additional information on a cell.

An LRO can be any of the following:

· A paragraph of descriptive text (a "cell note")

· A separate file that contains text, audio, video, or graphics

· A URL for a Web site

· A link to data in another Essbase database

How do you create LRO's?

Users create linked objects through Essbase Spreadsheet Add-in for Excel by selecting a data cell
and choosing a menu item. There is no limit to the number of objects you can link to a cell. The
objects are stored on the Essbase Server where they are available to any user with the appropriate
access permissions. Users retrieve and edit the objects through the Essbase Spreadsheet Add-in
for Excel Linked Objects Browser feature, enabling them to view objects linked to the selected
cell.

Does adding or removing links to a cell affect the cell contents?

No. LROs are linked to data cells --not to the data contained in the cells. The link is based on a specific member combination in the database.
Give a few examples of LRO's?

Ex1: A sales manager may attach cell notes to recently updated budget items. Ex2: A finance
manager might link a spreadsheet containing supporting data for this quarter's results. Ex3: A
product manager might link bitmap images of new products. Ex4: A sales manager may link the
URL of a company's Web site to quickly access the info on the Web

How does Essbase locate and retrieve linked objects?

Essbase uses the database index to locate and retrieve linked objects. If you clear all data values
from a database, the index is deleted and so are the links to linked objects. If you restructure a
database, the index is preserved and so are the links to linked objects.

Do shared members share LRO's?

Shared members share data values but do not share LROs. This is because LROs are linked to
specific member combinations and shared members do not have identical member combinations.
To link a given object to shared members, link it to each shared member individually.

Can you change the member combination associated with any linked object?

You cannot change the member combination associated with any linked object. To move an object to another member combination, first delete it, then use Essbase Spreadsheet Add-in for Excel to re-link the object to the desired member combination.

Why do we need to limit the LRO file sizes for storage conservation?

Because Essbase stores linked files in a repository on the server and, by default, the size is
unlimited. Limiting the file size prevents users from taking up too much of the server resources
by storing extremely large objects. You can set the maximum linked file size for each
application. If a user attempts to link a file that is larger than the limit, an error message displays.

The maximum file size setting applies only to linked files and does not affect cell notes or URLs.
The lengths of the cell note, URL string, and LRO descriptions are fixed.

What is partitioning?

A partition is the piece of a database that is shared with another database. An Essbase partitioned
application can span multiple servers, processors, or computers.

What is Essbase Partitioning?


Essbase Partitioning is a collection of features that makes it easy to design and administer
databases that span Essbase applications or servers. Partitioning is licensed separately from
Essbase.

What are the types of Partitions available in Essbase?

Three types of partitions are there.

1. Transparent partition:

A form of shared partition that provides the ability to access and manipulate remote data
transparently as though it is part of your local database. The remote data is retrieved from the
data source each time you request it. Any updates made to the data are written back to the data
source and become immediately accessible to both local data target users and transparent data
source users

2. Replicated Partition:

A portion of a database, defined through Partition Manager, used to propagate an update to data
mastered at one site to a copy of data stored at another site. Users can access the data as though it
were part of their local database.

3. Linked Partition:

A shared partition that enables you to use a data cell to link two databases. When a user clicks a
linked cell in a worksheet, Essbase opens a new sheet displaying the dimensions in the linked
database. The user can then drill down those dimensions.

What is the process for designing a partitioned database?

Here is the suggested process for designing a partitioned database.

1. Learn about partitions.

2. Determine whether the database can benefit from partitioning.

3. Identify the data to partition.

4. Decide on the type of partition.

5. Understand the security issues related to partitions.

What are the parts of partition?

Partitions contain the following parts,


· Type of partition: A flag indicating whether the partition is replicated, transparent, or linked.

· Data source information: The server, application, and database name of the data source.

· Data target information: The server, application, and database name of the data target.

· Login and password: The login and password information for the data source and the data
target.

· Shared areas: A definition of one or more areas, or sub cubes, shared between the data source
and the data target.

· Member mapping information: A description of how the members in the data source map to
members in the data target.

· State of the partition: Information about whether the partition is up-to-date and when the
partition was last updated.

What are benefits of partitioning?

Partitioning applications can provide the following benefits:

· Improved scalability, reliability, availability, and performance of databases

· Reduced database sizes

· More efficient use of resources

· Data synchronization across multiple databases.

· Outline synchronization across multiple databases.

· Ability for user navigation between databases with differing dimensionality.

Can you define different types of partitions between the same two databases?

No

Can a single database serve as the data source or data target for multiple partitions?

Yes
What is overlapping partition?

An overlapping partition occurs when similar data from two or more databases serve as the data
source for a single data target in a partition.

Is overlapping partition valid in all the partitions?

An overlapping partition is allowed in linked partitions, but is invalid in replicated and transparent partitions and generates an error message during validation.

When do you use substitution variables in partitions?

Using substitution variables in partition definitions enables you to base the partition definition on
different members at different times.

Can we use attribute values to partition a database?

Yes,You can use attribute functions for partitioning on attribute values. But you cannot partition
an attribute dimension.

Can we partition an attribute dimension?

No, we cannot partition an attribute dimension.

What is the limitation on version and mode during partition?

Both ends of a transparent, replicated, or linked partition must be on the same release level of Essbase Server. For example, if the source of a linked partition is on a Release 7.1.2 server, the target must also be on a Release 7.1.2 server.

In addition, for transparent and replicated (but not linked) partitions, the application
mode of both ends of the partitions must be the same--either Unicode mode or non-
Unicode mode.

What are the major difference between ASO & BSO?

· If we have many dimensions (generally more than 10) that mostly just roll up, we go for ASO; if we have fewer dimensions, we go for BSO.
· We cannot write back in ASO; we can write back in BSO.
· Most of the dimensions are sparse in ASO; most of the dimensions are dense in BSO.

What is "Enterprise Analytics"? ASO in System 9 is called Enterprise Analytics.


Explain in detail about the features of ASO?

· ASO databases are created specifically to deal with the requirements of very large sparse data sets with a high number of dimensions and potentially millions of members.
· ASO does not have indexes or data blocks.
· ASO does not use calculation scripts, because calculations are not complex.
· ASO uses a new kind of storage mechanism that allows calculation times 10 to 100 times faster than BSO.
· ASO can store up to 2^52 dimensional combinations.
· The front-end tools usually do not care whether the database is ASO or BSO; even MaxL sees only minor differences.
· We can have attribute dimensions in ASO.
· In ASO there is no concept of dense and sparse dimensions.
· There is no two-pass logic or built-in time balance functionality (time balance functionality is present from version 9.3 onwards).
· Member formulas are not supported in stored hierarchies.
· Only the no-consolidation (~) and addition (+) operators are supported in shared hierarchies.
· We cannot create more than one database in an ASO application.
· ASO does not utilize procedural calculation scripts.
· ASO formulas are written in MDX syntax.
· ASO has an Accounts dimension, but it is completely different from the Accounts dimension of BSO.
· ASO is read-only. You cannot write to ASO databases, but there is a workaround using transparent partitions and pointing to an attached BSO database for those duties.
· You can load data to level zero members only.
· The database must restructure after any members in the standard dimensions are added, deleted, or moved. In fact, most actions on an ASO outline will cause either a loss of data or a restructure.

How do you differentiate ASO applications?

You can easily differentiate the ASO database in the Administrative Services Console by the red
star beside the application name.

How do you create an ASO application? ASO has two types of hierarchies: stored and dynamic. A dimension can contain both types of hierarchies (if you enable multiple hierarchies). Other properties that need to be set for dimensions and members include:

· Dimension type
· Data storage (store, never share, label only)
· Member solve order
· Alias

You can add dimensions using the visual editor or rules files.

Unlike block storage, ASO does not allow you to preview outline changes, so if you are unsure of the build file, make a backup of your outline before running the new build rule. For ASO databases, after data values are loaded into the level 0 cells of an outline, the database requires no separate calculation step. Retrieval and analysis from an ASO database work just as with a BSO database.

How do you create an ASO database using ASO Outline Conversion Wizard ? You can also
create an ASO database using ASO Outline Conversion Wizard. This wizard uses the existing
BSO database to convert to an ASO database. This is advantageous because we do not need to
create an ASO database from scratch. However, we may need to perform reengineering of dimensions and hierarchies.
How do you create ASO in the automated way? The final way of creating an ASO application is by using the "create application", "create database", and "create outline" commands in MaxL. Typically this method is used when you are running the MaxL commands as part of a batch job.
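A hedged MaxL sketch of that batch approach, using placeholder names; the aggregate_storage clause and any outline-build statements should be verified against the MaxL reference for your release:

create application 'ASOSamp' using aggregate_storage;
create database 'ASOSamp'.'Sample';
/* dimensions are then typically built with dimension build rules files */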

**Unicode is supported for BSO databases only.

**Data Mining is not supported by ASO databases.

**MDX is the only mechanism for defining member calculations in ASO databases.

Unicode applications use UTF-8 encoding form to interpret and store character text, providing
support for multiple character sets.

To set up a Unicode application

1. Set up a computer for Unicode support by doing one of the following:

· Install an operating system that supports UTF-8 encoding

· Install a Unicode editor

2. Set the Essbase server to Unicode Mode via Administrative Services or MaxL.

3. Check the Unicode box when creating a new Unicode-mode application.

4. You can also migrate from non-Unicode applications to Unicode applications (but not the
other way round).

Report Scripts are outdated but still can be helpful when extracting subsets of data from Essbase
for online backups or feeding into other systems.

The Wizards tab of the Administration Services Console menu has the following components:

1. Migration

2. Aggregate Storage Outline Conversion

3. Aggregate Storage Partition

4. User Setup

5. Data Mining Wizard


Thursday, January 14, 2010
Smart View retrieve Error
One of my business users received an error when she was running a Smart View query which had a lot of attributes in her retrieve sheet.

The error she received is as follows:


"The request timed out. Contact your administrator to increase netRetrycount and
netRetryInterval"

Even though I increased NetDelay and NetRetryCount in essbase.cfg, that did not resolve the problem. She was using IE7 on XP SP3.

Here is the reason and resolution for the issue:

By design, Internet Explorer imposes a time-out limit for the server to return data.
The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for
versions 5.x and 6. Also, the time-out limit is 30 seconds for Windows Internet
Explorer 7. As a result, Internet Explorer does not wait endlessly for the server to
come back with data when the server has a problem.
Apparently the query the user was running takes around 45 seconds, so the IE7 time-out setting was causing it to error out.

We need to add three new registry keys to resolve the issue.

1. Open the Registry (Start -> Run -> Regedit)


Locate the following section:
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet
Settings]\
2. Create the following new DWORD keys with the following values:
"ReceiveTimeout"=dword:00dbba00
"KeepAliveTimeout"=dword:00180000
"ServerInfoTimeout"=dword:00180000
3. Restart the machine for the new settings to take effect

If the above settings do not make any difference and the version of Smart View is 9.3.1.2, you may need to upgrade to Smart View 9.3.1.4 or a higher version.

Here are the steps for changing the registry settings (shown as screenshots in the original post):

Right-click and select DWORD (32-bit) Value

Rename it to KeepAliveTimeout

Right-click and select Modify

Select Decimal and enter the value 00180000

Repeat the same process for the other 2 keys

Validate whether you have the right keys or not.
Friday, October 2, 2009
Correct Order of restarting planning
services
If you are a Planning admin and have to restart the Planning services, here is the order you have to follow.

Stop Order
Planning Service
RMI registry
AAS

Start Order
AAS
RMI registry
Planning Service



Thursday, September 10, 2009
Hyperion Essbase Overview
Please find below some Q&A which provide an overview of Essbase.

What is Essbase OLAP server?


An OLAP server is a multidimensional database for storing data with an unlimited
number of dimensions such as time, region, accounts, channel or products. Essbase
server manages analytical data models, data storage, calculations, and data security.

What are the Essbase Application tools?


Application tools are used for extending Essbase applications:
Spreadsheet Add-in
Smart view for Microsoft applications
Currency Conversion
Essbase SQL interface
Essbase Application Programming Interface (API)

What is Partitioning?
Copying a slice of a large database to work with it locally, or link from your database
directly to other databases.

What is a Hybrid Analysis?


Hybrid analysis integrates relational databases with Essbase databases to combine the
size and scalability of the relational database with conceptual power and analytical
capabilities of the multidimensional Database.

What are administrative requests and client requests?


Administrative requests such as
· Logging in and logging out
· Starting and stopping applications and databases,
· Viewing user security information
are handled by the Essbase server agent

Client requests such as

Data loads
Spreadsheet reports
Data lock and unlock

are handled by the application server (ESSSVR).

What is the use of a multidimensional database such as Essbase?

A multidimensional database supports multiple views of data sets for users who need to analyze the relationships between data categories. For example, a marketing analyst needs detailed information in a different view than a manager does.

A multidimensional database consolidates and calculates data to provide different views. Only the database outline, the structure that defines all elements of the database, limits the number of views.

With a multidimensional database, users can pivot the data to see information from different viewpoints, drill down to find more detailed information, or drill up to see an overview.

Relational databases have more data and hold all transaction information, whereas Essbase has limits on the data it can hold. In Essbase, security filters can be applied down to the data cell value, whereas in an RDBMS security can be applied only down to a table or view. In an RDBMS we can see only one view.

What are Standard dimensions?


Standard dimensions are those which represent the core components of the business plan and often relate to departmental functions.
Examples of standard dimensions are as follows:
Time
Accounts
Products
Market

Dimensions are static in most databases; database dimensions rarely change over the life of an application.

What is an Outline?
An outline is the structure that defines all elements of the database. It dictates how data is stored in the database and defines the structural relationships between the members in the database. It organizes all data in the database, defines consolidations and mathematical relationships between members, and defines dimension types, aliases, member formulas, etc.
The order in which dimensions are arranged in the outline is as follows: dense dimensions, followed by sparse dimensions, followed by attribute dimensions.

How do you order the Outline?


All attribute dimensions should be placed at the end of the outline. Dense dimensions should be placed first, followed by the sparse dimensions.
The order determines
How quickly calculations are run
How long it takes users to retrieve information

The order of the outline for query performance


· Dense
· Most queried Sparse
· Least queried Sparse
· Most queried Attribute
· Least queried Attribute

The order of Outline for Calculation time


· Dense
· Smallest Sparse dimension
· Largest Sparse
· Smallest Attribute
· Largest Attribute

What is the highest level of consolidation in the outline?


Dimension

Is there any limitation on number of members?


No. Essbase does not limit the number of members within a dimension and allows you
to add new members as needed.

Parent is a member which has a branch below it.

Child is a member which has a parent above it.
Siblings are child members of the same immediate parent, at the same generation.
Descendants are all members in branches below a parent.
Ancestors are all members in branches above a member.
Root is the top member in a branch.
A leaf member has no children.
Generation refers to the consolidation levels within a dimension, counted from the dimension root downward.
Level refers to the consolidation levels counted upward from the leaf members.

What is a cell/data value?


A data value is defined by the intersection of all standard dimensions in the database.

What is the maximum number of values in the database?


The product of the number of members in each dimension. For example, a database with dimensions of 10, 12, and 20 members has 10 x 12 x 20 = 2,400 potential data values.

Why do we need to have classification as dense and sparse dimensions?


Because data is not smoothly and uniformly distributed, and data does not exist for the majority of member combinations, the dense/sparse classification lets Essbase speed up data retrieval while minimizing memory and disk requirements.
A sparse dimension is a dimension with a low percentage of available data positions
filled.
A dense dimension is a dimension with a high probability that one or more data points
are occupied in every combination of dimensions.
What are the two types of internal structures in Essbase?
Data blocks
Index system
The two types of internal structures are to store data and access data.

What is a Data block?


A data block is a cube created for each unique combination of sparse standard dimension members (provided that at least one data value exists for the sparse member combination).

The Data block represents all the dense dimension members for its combination of
sparse standard dimensions.

Its size depends on number of dense dimensions.

How does Essbase calculate the data?


Top down order

What is an Index entry?


The index entry provides a pointer to the data block.
Essbase creates an index entry for each data block. The index entry represents the combination of sparse standard dimensions. It contains an entry for each unique combination of sparse standard dimension members for which at least one data value exists.

What happens if you make all dimensions sparse?


Data blocks are created for each unique combination of sparse standard dimension members. Thus a very large number of small data blocks are created, along with a huge index containing pointers (or addresses) for all those data blocks.

A huge index consumes a large amount of memory, and the more index entries there are, the longer Essbase searches to find a specific block.

What happens if you make all dimensions dense?


Only one huge, very sparse block is created, with only one index entry. This configuration requires thousands of times more storage than other configurations, and Essbase needs to load the entire data block into memory when it searches for a data value, which requires enormous amounts of memory.

What is the Design Process?


Analyze business needs and plan the database
Define the database outline
Check the system requirements
Load the test data into the database
Define calculations
Define reports
Verify with the users
Repeat the process

What are the different types of data sources?


Flat files (column formatted)
Spreadsheet files
Any RDBMS files

What is a shared member?


The shared member concept lets two members with the same name share data. The shared member stores only a pointer to the data contained in the other member, so Essbase stores the data only once. Shared members must be in the same dimension.

What dimension can be given time balance properties?


Only accounts dimensions can be given time balance, expense reporting, and country
and currency properties.

What is the use of variance reporting?


Variance reporting properties define how Essbase calculates the difference between actual and budget data in members with @VAR or @VARPER functions in their member formulas.
· Expense reporting (Budget - Actual)
· Non-expense reporting (Actual - Budget)
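For example, a minimal member formula sketch for a Variance member, assuming placeholder Actual and Budget members exist and Variance is tagged Dynamic Calc:

/* Essbase reverses the sign automatically for members tagged as Expense */
@VAR(Actual, Budget);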
What is a Function?
It is a predefined routine to carry out a specific task or calculation.

What is a formula?
Formulas are used to calculate relationships between members in the DB Outline.

What is a dynamic Calc?


When you tag a member as dynamic calc, Essbase calculates the combinations of that member only when a user retrieves the data, instead of pre-calculating the member combinations during the regular database calculation.
Dynamic calc shortens the regular calculation time, but may increase the retrieval time for dynamically calculated data values.

What are the advantages of Dynamic calc?


Low disk space
Reduced database restructure time
Reduced back up time
Optimum regular calculation time

What are the members for which Two-pass calculations can be given?
Accounts
Dynamic calc
Dynamic calc and store

How can a user view data?


Spread sheet
Printed reports
Reports published on web
web analysis
smart view

Can you load data or calculate data on client machine?


No.
Applications and Databases created on client machine are used only to store database
objects, such as outlines and calc scripts. You cannot load or calculate data on a
client machine.

What are Database Objects?


Files that are related to databases are called Objects. Common types of objects are:
Database Outlines (.OTL)
Report Scripts (.REP)
Calculation Scripts (.CSC)
Data Load rules and Dimension build rules (.RUL)
Data sources
Security definitions
LRO’s (Linked Reporting Objects)
Partition definitions

What is a rule file?


Data load rules are set of operations that Essbase performs on the data from external
data source file as it is loaded, or copied into the Essbase database.
Specifying the data load rules is the most common way to load data into the
Database.
Dimension build rule files create or modify an outline dynamically based on the data
in the external source file.

What is a calculation script?


A calc script is a text file with a set of instructions telling Essbase how to calculate data in the database. It has a “.csc” extension.
What is a Report script?
A report script is a text file with a set of instructions on data retrieval, formatting, and output to create a report from the database. A report script has a “.REP” extension.

What is a Linked Reporting Object (LRO)?


An LRO is an object associated with a specific data cell in the Essbase database. An LRO can be any of the following:
A paragraph of descriptive text (a “cell note”).
A separate file that contains text, audio, video, or graphics.
A URL (Uniform Resource Locator).
A link to data in another Essbase database.

What are “.EQD” files?


Within the Spreadsheet Add-in, users can create queries using Query Designer (EQD). Users can save the reports in the form of queries (.EQD files).

What are “.sel” files?


With the Spreadsheet Add-in, users can define member retrievals with the member select feature. If users want to save the member select specification, they can do so with a “.sel” file.

How can you create a database?


Application manager file>new>database
Essbase administrative services console
ESSCMD (“CREATE DB”)
Maxl (“create database”)

Application and database names should be less than 8 characters.

MaxL: create application
ESSCMD: CREATEAPP
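A minimal MaxL sketch with placeholder names:

create application 'Sample';
create database 'Sample'.'Basic';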

What is annotating database?


It is a database note that can provide useful information in situations where you need to broadcast messages to users about the status of the database, deadlines for updates, and so on.

Select database>set note

What are substitution variables?


Substitution variables act as global placeholders for information that changes regularly.
Each variable has a value assigned to it. The value can be changed at any time by the DB designer, so manual changes are reduced. Example: CurrMnth.

You cannot use substitution variables in formulas that are applied to the DB outline. The value of a substitution variable cannot be more than 256 characters.

Substitution variables can be used only in


Calculation scripts
Report scripts
Spread sheet add-in
SQL interface

Server > Substitution variables
MaxL (for creating/deleting/updating):
alter system
alter application
alter database
ESSCMD:
CREATE VARIABLE
DELETE VARIABLE
UPDATE VARIABLE
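A hedged sketch with placeholder names: create or update a server-level variable in MaxL, then reference it with & in a calculation script:

/* MaxL */
alter system add variable 'CurrMnth' 'Jan';
alter system set variable 'CurrMnth' 'Feb';

/* calculation script */
FIX (&CurrMnth)
    CALC DIM ("Measures");
ENDFIX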

What is a location alias?


A location alias maps an alias name for a DB to the location of that DB.
You can use location aliases only with the @XREF function.
With @XREF function you can retrieve data value from another database to include in
calculation in the current database. In this case, the location alias points to the
database from which the value to be retrieved.

Database>location aliases

Create
MaxL: create location alias
ESSCMD: CREATE LOCATION

Edit/Delete
MaxL: display location alias, drop location alias
ESSCMD: LIST LOCATIONS, DELETE LOCATION
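A hedged sketch of both halves, with placeholder server, application, member, and credential names:

/* MaxL: define the alias on the database that will pull the value */
create location alias 'EastDB' from 'Sample'.'Basic' to 'East'.'Sales' at 'essbase_host' as 'admin' identified by 'password';

/* calculation script or member formula on Sample.Basic */
"East Sales" = @XREF(EastDB, "Sales", "East");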
What happens if you open outlines in two instances?
If you open the same outline with two instances of Application Manager using the same login ID, each save will overwrite the changes of the other instance.

Copying a database
Database > Copy
MaxL: create database ... as
ESSCMD: COPYDB
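A minimal MaxL sketch, assuming you are copying Sample.Basic to a new database named BasicCpy in the same application:

create database 'Sample'.'BasicCpy' as 'Sample'.'Basic';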

What are important points while building an outline?


· All members and alias names should be unique;
· Only one dimension should be tagged as accounts, time, currency type and country;
· Level “0” members cannot be label only;
· Level “0” members cannot be assigned formulae but dynamic calc members of
standard dimensions may have formula;
· Dynamic calc should not have more than 100 children;
· Boolean attribute dimensions have only two members.

What are the restructuring options in saving database?

· All data
· Level 0 data (when all data in the outline is at level 0)
· Input data (when you load data into non level 0 members)
· Discard all data (when you expect to reload data or when outline is radically
changed that no existing data applies)

How do you set dense and sparse settings?


Settings>data storage
Data dictionary button
· You must set the standard dimensions with which you plan to associate attribute dimensions as sparse, because attribute dimensions can be associated only with sparse standard dimensions.
· Application manager automatically sets attribute dimensions as sparse.

How do you rename members?


· Data dictionary button
· Edit>properties
· Manually

When does a DB restructure?


When you add, delete, or move non-attribute (standard) dimensions or members, Essbase restructures the database and you must recalculate your data.

What is Metadata?
Metadata is data about data. Metadata is the data contained in the database outline that describes the values within a database.
Ex:
East>New York>cola>sales>10000

East>New York>cola>sales> is metadata

What are different types of dimension tags?


· Time
· Accounts
· Country
· Currency
· Attribute

Can you add time members that are not tagged as time?
Yes
When do you give an accounts tag to a dimension?
You can tag a dimension as accounts if it contains items that you want to measure
such as profit or inventory.

Time dimension tag says how often you collect and update data. The time dimension
enables several accounts dimension functions such as first and last time balances.

What is the significance of time balance properties?


When you set a time balance property on a member in an accounts dimension, it
affects how Essbase calculates the parent of that member in the time dimension.
· TB FIRST (The parent value is the value of the first member in the branch)
· TB LAST (The parent value is the value of the last member in the branch)
· TB AVG (The parent value represents the average value of the children)
· TB NONE (default; rolls up parents in the time dimension in the usual way)

Skip Properties
· None
· Missing
· Zeros
· Missing and zeros
Skip properties, expense reporting, Time Balance properties are applicable only to
the accounts dimension.

What is a Two-Pass calculation?


By default, Essbase calculates outlines from the bottom up, first calculating the values for children and then the values for parents. Sometimes, however, the values of children may depend on the values of the parent or the values of other members in the outline. To obtain correct values for these members, Essbase must first calculate the outline and then recalculate the members that are dependent on the calculated values of the other members. The members that are calculated on the second pass through the outline are called two-pass calculation members.

Only accounts, dynamic calc, dynamic calc and store members can be given two pass
calculation.

Edit>properties
Data dictionary button

What does the consolidation properties do?


Member consolidation determines how children roll up into their parents. Default (+)
operator.

Essbase doesn’t use consolidation properties for attribute dimensions.


Essbase automatically tags members of attribute dimensions as dynamic calc. You cannot change this setting.

When do you use label only?


When no data is associated with a member, we use label only. Label-only members are used only to ease navigation and reporting from the Spreadsheet Add-in.
You cannot associate attributes with label-only members. If you tag as label only a base member that has attributes associated with it, Essbase removes the attributes and displays a warning message.
Monday, September 7, 2009
Fragmentation in Essbase (BSO)
• What is Fragmentation?

Fragmentation is unused disk space.


• When does Fragmentation occur?

Fragmentation is likely to occur with the following:


Read/write databases that users are constantly updating with data

Databases that execute calculations around the clock

Databases that frequently update and recalculate dense members

Data loads that are poorly designed

Databases that contain a significant number of Dynamic Calc and Store members

Databases that use an isolation level of uncommitted access with commit block set to zero

• How can you measure fragmentation?

You can measure fragmentation using the average clustering ratio or


average fragmentation Quotient.

Using the average fragmentation quotient


Any quotient above the high end of the range indicates that reducing
fragmentation may help

performance, with the following qualifications:


The reported value of the Fragmentation Quotient is more accurate when there are no other write

transactions running on the database.

For databases less than 50 MB using the Direct I/O access mode, the fragmentation quotient tends to
be high. A high fragmentation quotient does not necessarily indicate a need to reduce
fragmentation, because the free space is created in 8 MB chunks and all of it might not get used
right away.

Database Size                Fragmentation Quotient Threshold
Small (up to 200 MB)         60% or higher
Medium (up to 2 GB)          40% or higher
Large (greater than 2 GB)    30% or higher

Using the average clustering ratio:

The average clustering ratio database statistic indicates the


fragmentation level of the data (.pag) files. The maximum value, 1,
indicates no fragmentation.
• How can you prevent and remove fragmentation?

You can prevent and remove fragmentation:


To prevent fragmentation, optimize data loads by sorting load records based upon sparse dimension members, which groups the records so that each data block is created and written only once.

To remove fragmentation, perform an export of the database, delete all data in the database with
CLEARDATA, and reload the export file.

To remove fragmentation, force a dense restructure of the database.


How can I track changes in the outline?
You can use the OUTLINECHANGELOG setting in the ESSBASE.CFG file to capture
outline changes. This setting logs all outline changes into the database_name.OLG
file. The OUTLINECHANGELOG setting allows database administrators to review the
outline revision history and gather enough information to roll back changes if needed.

Example (in essbase.cfg):

OUTLINECHANGELOG TRUE | FALSE

TRUE -- Essbase logs outline changes into the file database_name.OLG.
FALSE -- Essbase does not log outline changes. The default is FALSE.
Wednesday, May 6, 2009
How to Delete Dimension from Hyperion
Planning Application
If you create a custom dimension in planning and wish to delete it, here is how you
might remove it from SQL manually. We recommend that only someone familiar with
relational databases attempt this.

Outlined below are the steps to remove a dimension from the relational store. The directions are for SQL Server, but the table names should be the same in Oracle.

This procedure should only be performed by your SQL or Oracle DBA.

BE SURE YOU HAVE A BACKUP OF YOUR APPLICATION BEFORE ATTEMPTING THIS


PROCEDURE.

You will need to delete all the children first, then the dimension.

If you have the dimension in forms, you may also need to (after step 5) go into the
hsp_formobj_def_mbr table and delete all references of that object ID. Then delete
the dim id in the hsp_form_layout table.

Recycle services of Planning.

The following assumptions are being made about the dimension:

* No forms have any references to the dimension.


* Any and all attribute dimensions have been removed from the dimension.
* Any and all members have been removed from the dimension.
* All security access has been removed from dimension.
* Any Alias associated with the dimension has been removed.

In this example we have created a dimension called Dummy Dimension.

1. Open the Enterprise Manager for SQL server


2. Open the database that has your planning application
3. Open the table HSP_OBJECT
4. You need to find the row that has the dimension name you want to delete:
5. Take note of the OBJECT_ID (in my example it is 50051), but don't delete the row yet.
6. Open the table HSP_MEMBER
7. Find and delete any rows that have a MEMBER_ID equal to the OBJECT_ID from step
5.
8. Close the Table HSP_MEMBER
9. Open the table HSP_MRU_MEMBERS.
10. Find and delete any rows that have a DIM_ID equal to the OBJECT_ID from step 5.
11. Close the table HSP_MRU_MEMBERS.
12. Open the table HSP_DIMENSION
13. Find and delete any rows that have a DIM_ID equal to the OBJECT_ID from step 5.
14. Close the table HSP_DIMENSION
15. Also delete its reference from the HSP_UNIQUE_NAMES table.
16. Delete the row in the HSP_OBJECT table that you found in step 4.
17. Close the HSP_OBJECT table.
18. Open Essbase Application Manager for the application and remove the dimension
from the necessary Essbase outlines.
19. Restart the application server.

Now when you open the application in Planning the dimension will be removed.
The next step is to perform a refresh to make sure planning and Essbase are in sync.
Thursday, April 30, 2009
Hyperion Planning Installation Steps
This document demonstrates a step-by-step procedure to install Hyperion Planning software. This includes the installation and configuration of Hyperion components and other related software.

The first step is to download Hyperion Planning and related software from the following website:
http://edelivery.oracle.com/

After entering your details you will be directed to the product listing page. Select Hyperion Performance Management and BI.
Download the following products:
1. Hyperion Shared Services Release 9.3.1 Microsoft Windows x86 Installer English
(V11229-01)
2. Hyperion Smart View for Office Release 9.3.1.2 Microsoft Windows x86 Installer
( V12736-01)
3. Hyperion Essbase - System 9 Release 9.3.1.3 Server Windows Installer (V14762-01)
4. Hyperion Essbase Administration Services - System 9 Release 9.3.1.3 Windows
Installer-- (V14792-01)
5. Hyperion Planning - System 9 Release 9.3.1.1 All Platforms Installer English--
V11378-01

After downloading the software, extract the setup files from the downloads and start the installation.
Make sure that you don't check "Launch configuration utility" in the last step of the first run.
Complete the installation of all 5 products below without launching the configuration utility.

Hyperion Shared Services
Hyperion Essbase
Hyperion Essbase Administration Services
Hyperion Planning - System 9 Release 9.3.1.1
Hyperion Smart View

The next step is to create relational databases or repositories for:

Shared Services
EAS
Planning system tables
Planning database

I used SQL Server 2005 to create the above repositories.
After creating the SQL Server repositories, go to the server where you installed Shared Services and open the Hyperion Configuration Utility from the Start menu. We have to configure Shared Services first, then EAS, and then Planning.
The next step is to configure EAS. Open the configuration utility (Foundation Services) on the server where you installed EAS and follow the same procedure.
Wednesday, April 1, 2009
Error Codes
Hi All,
You can find a comprehensive list of error codes for Essbase on the following website:

www.EssbaseInfo.com

"Thanks to EssbaseInfo.com for putting this Essbase Error list together"


Monday, March 30, 2009
Common Mistakes to avoid in Hyperion
Resume
Hi All,
Good evening.
We were recruiting for a Hyperion developer last year and received many resumes in response to our advertisement.

Here are some things I observed while reviewing those resumes.

1. Overlapping experience. One candidate claimed that he worked for a client in Jacksonville, FL and for another client in San Jose, CA during the same period. When we asked how he managed to work at both places (a hectic schedule), he had no clue.
If you were working on multiple projects, be prepared to answer questions about it, and take care when listing project periods.

2. Putting a lot of experience in unrelated technology.

Come on, why would your prospective employer want to know what you did in ColdFusion coding when the advertisement was for a Hyperion developer?

3. Stating the same responsibilities on different projects.

It is possible to have the same type of responsibilities in different projects. If so, club the projects together and state the responsibilities once. The content of a resume is more important than the number of pages. The common myth is that an employer will look into a resume only if it has more than 5 pages. That is untrue. The candidate we hired had a clean, short resume focused on the main points.

4. Unable to answer what the candidate stated in the resume.

One candidate stated that he was an expert in SQR reporting. SQR was not what we were looking for, and we didn't ask for it in our advertisement. We saw that he claimed to be an SQR expert and asked him a question about it. He was not able to answer fundamentals of SQR, which undermined his credibility.
Don't put in something you have only heard of. The employer will read your resume entirely and ask questions based on it. Very few people have a canned set of questions prepared for an interview. You should have a complete grip on your resume, have an answer for every word you put in it, and leave out unrelated experience; a project on IBM mainframes doesn't help you for a position in Hyperion.

Hope you will take care of above points when building your resume.
Friday, March 27, 2009
Data transfer between two hyperion
Essbase Cubes
Data transfer from one cube to another.
There are different ways to transfer data from one cube to another:
1. Hyperion Application Link (HAL)
2. Export data using a report script and import it into the new cube
3. JExport
4. XREF
Today we will learn about the XREF calc script, which is used by most people who want to transfer data between cubes.
Please find a sample XREF calc script below.
In this example I am transferring payroll, social, bonus and headcount data from my main P&L (profit and loss) application to a workforce application.
The first step of XREF is to create a location alias for the source application. In this example my location alias is _LocAliasPL.
You can create a location alias using EAS in the following way:
Open the application
Right-click the database
Click Edit
Navigate to Location Aliases
Click to create a location alias and give the details of the source cube.
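The same location alias can also be created with MaxL instead of the EAS dialog. A hedged sketch follows, assuming the workforce cube is WFApp.WFDb and the source P&L cube is PLApp.PLDb; every name, the host, and the credentials are placeholders.

/* Hedged MaxL sketch - all names, the host, and the credentials are placeholders */
create location alias _LocAliasPL
    from WFApp.WFDb
    to PLApp.PLDb at 'PLHostName'
    as 'admin' identified by 'password';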

/*XREF Calc Script Code*/


/*Information */
/*
Script Name : XREF
Created by : Dornakal, Hyperion Consultant, March 27, 2009
Purpose : Copy HR data from main application to work force application
Directions : Check location alias
Versions : Essbase 9.3.1
Assumptions : The Accounts, Time dimensions are Dense and the rest of dimensions
are Sparse
*/
/*House Keeping*/
/*Set the calculator cache. */
SET CACHE HIGH ;
/* Display calculation statistics in application log. */
SET MSG Summary;
/* Display calculation completion messages in application log at 20% intervals. */
SET NOTICE LOW;
/*Turn off Intelligent Calculation */
SET UPDATECALC OFF;
/* Enables parallel calculation. */
SET CALCPARALLEL 4;
/* Baseline Fix */
FIX(HSP_INPUTVALUE, Local, USD, FINAL,Actual, FY08,
&ActualMnth,&NxtYr,@RELATIVE(Cost_Center,0),"EMPLOYEES")

SET CREATENONMISSINGBLK ON;


"PayRoll" = @XREF(_LocAliasPL, "PRODUCT");
"Social" = @XREF(_LocAliasPL, "PRODUCT");
"Bonus" = @XREF(_LocAliasPL, "PRODUCT");
"Headcount" = @XREF(_LocAliasPL, "PRODUCT");

SET CREATENONMISSINGBLK OFF;

ENDFIX;
/*END MAIN SCRIPT*/
Monday, March 16, 2009
Sample Calculation Script
/*Information */
/*
Script Name : CopyAct2Fcst
Created by : Dornakal, Hyperion Consultant, March 16, 2009
Purpose : Copy Actuals to Current Forecast
Directions : Set substitution variables CurFcst, Actmnth,CY
Versions : Essbase 9.3.1
Assumptions : The Accounts, Time dimensions are Dense and the rest of dimensions
are Sparse
Comments : This script copies actual data from actual scenario to forecast scenario;
This rule should be run before every forecast.
*/

/*House Keeping*/

/*Set the calculator cache. */


SET CACHE HIGH ;

/* Display calculation statistics in application log. */


SET MSG Summary;

/* Display calculation completion messages in application log at 20% intervals. */


SET NOTICE LOW;

/*Turn off Intelligent Calculation */


SET UPDATECALC OFF;

/* Enables parallel calculation. */


SET CALCPARALLEL 4;

/* Baseline Fix on CurYear, Local Currency, and Level 0 cost center */


FIX(@LEVMBRS("Cost Center",0),Local)

/* Main Rollup */

/* Copies data for all existing employees of all Expense Accounts from Actual scenario
, final version to Current Forecast and Working version */

FIX ("Existing_Employees", @IDESCENDANTS("Expense Accounts"), Jan:&Actmnth)


DATACOPY Actual->Final TO &CurFcst->Working;
ENDFIX
/* End of baseline Fix*/
ENDFIX;
Friday, March 13, 2009
What is Intelligent Calculation? Why should
we care?
Developing calc scripts Series

What is intelligent calc? Why should I care?

A primary goal in calculation script development is optimization (elimination of extra


passes through database index). To optimize calculation, you can use FIX and IF
statements to focus calculations, or you can use an option called intelligent
calculation.

When you perform a full database calculation, Essbase marks which blocks have been
calculated. If you then load a subset of data, you can calculate only the changed data
blocks and their ancestors. This selective calculation process is intelligent
calculation.

By default, intelligent calculation is turned on. You can change the default setting in the essbase.cfg file or on a script-by-script basis with the SET UPDATECALC OFF command.
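A minimal sketch of both forms; the essbase.cfg line changes the server-wide default (and generally requires a restart to take effect), while the SET command applies only to the script it appears in.

In essbase.cfg (server-wide default):

UPDATECALC FALSE

In an individual calculation script:

SET UPDATECALC OFF;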

Intelligent calculation is based on data-block marking: when intelligent calculation is active, blocks are marked clean or dirty within the index file during normal processing.

Clean blocks: blocks that don't require calculation.

Dirty blocks: blocks that require calculation.

When intelligent calculation is active, during calculation, Essbase looks for only dirty
blocks.

Exceptions:
Even when intelligent calculation is enabled, for CALC DIM statements that do not include all dimensions, Essbase does not use the intelligent calculation process. Rather, Essbase calculates all relevant data blocks regardless of clean or dirty status, and all data blocks retain their status, dirty or clean.

SET CLEARUPDATESTATUS AFTER is a calculation command that engages intelligent


calc for any calc script, regardless of construction. Typically, you use this command
where you cannot meet the conditions for a calc dim on all dimensions.

Example:

SET CLEARUPDATESTATUS AFTER


FIX(@IDESCENDANTS(“Q1”))
CALC DIM (Accounts);
ENDFIX

When you execute a calculation script that includes the SET CLEARUPDATESTATUS AFTER command, data blocks that are marked clean are not calculated, and data blocks that are marked dirty are calculated and then marked clean.

How do you force block marking without calculating?


The SET CLEARUPDATESTATUS ONLY command instructs Essbase to mark the data blocks covered by the script as clean, without actually calculating them.
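A minimal sketch, assuming you want to mark the blocks covered by a full calculation as clean without running the calculation:

/* Marks the blocks named by CALC ALL as clean, but performs no calculation */
SET CLEARUPDATESTATUS ONLY;
CALC ALL;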

How do blocks become dirty?


In the following cases the data blocks are marked as dirty.
Block creation during data input
Data modification (Lock and send)
Creation or modification of descendant blocks
Database Restructure (both dense and sparse)

What are False negative and False positive?


Occasionally, clean data blocks are marked dirty (a false negative). In such cases calculation efficiency suffers. A more serious problem, however, is a false positive condition, in which dirty blocks are marked as clean. In cases of false positives, data integrity can suffer.

When do false positives arise?

Calculating only a subset of a data block:

Essbase marks at the block level, not at the cell level, so a calculation that is executed on a subset of cells can cause a false positive condition: only a few cells are calculated, but the block is marked clean even though uncalculated cells remain.

Using a FIX statement:


Ancestors of a dirty block are not marked as dirty until the descendant dirty block is
calculated. A false positive can result if the descendant dirty block is calculated
within a FIX statement that does not include the dirty ancestor block. After
calculation, essbase marks the descendant block clean, and the ancestor block
remains marked clean, although it should be marked dirty.
Following SET CLEARUPDATESTATUS ONLY with unrelated calculations:
You should follow SET CLEARUPDATESTATUS ONLY with a repetition of the section of the script for which you want to force data-block marking. A false positive can occur if SET CLEARUPDATESTATUS ONLY touches blocks that are otherwise dirty and that are not calculated.

Intelligent calculation can provide significant performance benefits in certain situations, but it requires vigilant maintenance of the clean and dirty status of data blocks to avoid skipping the wrong blocks during calculation.

Intelligent calculation is most productively used in interactive or iterative situations in which small, incremental changes are made to a database and it is not necessary to recalculate the entire database. For example, you can use intelligent calculation in the following situations:

During quarter-close periods, allocation rates and adjusting entries may be updated multiple times. You can use intelligent calculation to view the updated results without recalculating the entire database.
In budgeting or forecasting applications, many users typically update units and drivers on a regular basis. In such applications, a short calculation time is imperative for users to analyze the impact of their updates.

Implementation Process for Essbase
Database
Hi Guys,
good morning.
Please find the implementation process for an Essbase database below.

An Essbase database implementation includes many steps, and the process is iterative. Analysis of the results of one cycle may raise new questions, prompting newly defined business requirements, which in turn may lead to changes in design.

Analysis and Planning:

1. Identify business results


2. Examine data sources
3. Analyze sample reports
4. Design Essbase Analytics outlines.

Database Creation:

1. Create Essbase outlines


2. Create Load rules
3. Create Calculation scripts

Deployment and Support:

1. Maintain Essbase outlines


2. Manage data flow
3. Analyze data
4. Provide management and user support.

hope this helps.

Monday, March 9, 2009
Automate HAL Load
Hi All,

good morning.

Today we will see how we can automate a HAL job (flow diagram).

If you are loading a large number of members, HAL chokes if you don't automate it (as an .exe etc.).

Here are the steps one should follow to automate a HAL job.

Step:1

Complete the flow diagram as shown below


Step:2
Drag and drop a Windows Executable from the Palette.

Step:3
Open the Windows Executable and go to the Runtime Target tab.
Give the location of the flow diagram.
Browse to the location where you want to store the executable file.
Step:4
Check

• Use file name as runtime target name


• Automatically Build when OK pressed.
Step:5
don't check anything in Flow Diagrams tab as shown below:
Step:6
Don't check anything in Management tab as shown below:
Step:7
In Logging tab give the location of your log file.

Step:8
Don't check anything in Profile tab as shown below:
Step:9
Check

• Copy needed Vignette Business Integration Studio DLLs


• Run as Console application
• If you want status bar check that box
Step:10
Hit OK

Step :11
you will see your executable file in the list as shown below:

Step:12

You can give the location of the executable file in your batch file and run it using a scheduler like Windows Scheduler.
Monday, February 23, 2009
Migration of DataForms in Hyperion
Planning
The goal of this post is to explain how to migrate data forms from one environment to another in Planning applications. Automating this process helps to reduce time and leaves no scope for human error.

The FormDefUtil.cmd utility can be used to move data form definitions from one Planning application to another. Data form definitions can be exported to or imported from an XML file. This utility is extremely useful when data forms need to be moved from a development environment to a production environment. The utility uses a command line interface and is installed in the bin directory (D:\Hyperion\Planning\bin). It runs only on Windows-based systems (even if the application servers are on UNIX), and only administrators can run it. The utility is located on the Planning server.

Steps to export Data Forms :

1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:

formdefutil export Formname/-all server name user name password application

The utility creates an XML file in D:\Hyperion\Planning\bin and logs errors in FormDefUtil.log in the directory from which you run the utility (by default, bin). You can copy the utility to any directory and launch it from there to save files to another directory.
Steps to Import Data Forms :

1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:

formdefutil import Location of Formnames server name user name password


application

This completes migration of data forms from one environment to another in planning
applications.
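For example, following the syntax above, you might export every form from a development application and import a single form definition into production. This is only a hedged sketch; the server, user, password, application, and file names are placeholders.

FormDefUtil.cmd export -all devserver admin password BudgetApp
FormDefUtil.cmd import BudgetForm.xml prodserver admin password BudgetApp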
Automation of Dimension extraction
Hi Guys,
good morning.
Today we will learn how to automate dimension extraction from Essbase.
There is a very good utility called the Outline Extractor, available at

http://www.appliedolap.com

Step # 1
As a first step you should install the Outline Extractor.

Step # 2

Build a batch file providing the details of your application, database and the dimension you want to extract, as follows:

cd C:\Program Files\olapunderground\Essbase Outline Extractor

win C:\Program Files\olapunderground\Essbase Outline Extractor\exportdim.exe ServerName/AdminName/Password/ApplicationName/DatabaseName/DimensionName/Path where extracted file to be stored/!/Doc/11111111111111111111/Text

You can schedule the above batch file to run at fixed time using various schedulers
like Window scheduler etc.

Hope this helps.


Have a BLESSED day.
Tuesday, February 17, 2009
Import Security in Hyperion Planning
applications
Hi Guys,
good morning.
A lot of times there will be a huge number of requests to grant or remove security access for users.
For one or two changes it is a simple task: click on the member, assign access, and grant the permission. It becomes tiresome when we need to make changes for a large number of members, and it involves a considerable amount of time.
Automating the security changes saves considerable time. The ImportSecurity utility in Planning loads access permissions for users or groups from a text file into Planning. Importing access permissions using the ImportSecurity utility overwrites existing access assignments only for imported members, users, or groups. All other existing access permissions remain intact. The SL_CLEARALL parameter clears all existing access permissions.

The ImportSecurity utility is located at D:\Hyperion\Planning\bin on the Planning server.

Steps for importing access permissions:

Step 1: Create a text file, name it Secfile.txt, and save it in the bin directory (D:\Hyperion\Planning\bin). Each line holds a group or user name, a member name, an access permission, and a relationship, for example Planning_Security_Group, MemberName, Write, MEMBER; a sample line appears in the sketch after these steps.

Step 2: Locate the ImportSecurity utility by navigating to the bin directory.


Step 3: From the Command Prompt, enter this case-sensitive command, one space,
and the parameters. Separate each parameter with a comma, and enclose parameters
with double quotation marks:

ImportSecurity.cmd “appname,username,password[delimiter],[RUN_SILENT],
[SL_CLEARALL]”

Where:
appname : Name of the Planning application importing access permissions.

username : Planning administrator user name.

password :Planning administrator password.

delimiter (Optional) : SL_TAB, SL_COMMA, SL_PIPE, SL_SPACE, SL_COLON, SL_SEMI-COLON. If no delimiter is specified, comma is the default.

RUN_SILENT ( Optional) : Execute the utility silently (the default) or with progress
messages. Specify 0 for messages, or 1 for no messages.

[SL_CLEARALL] (Optional): Clear existing access permissions when importing new


access permissions. Must be in uppercase.
You can check the results in log file in bin folder.
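Putting the pieces together, here is a hedged example; the application name PlanApp, the credentials, and the member and group names are placeholders.

Secfile.txt (one assignment per line):

Planning_Security_Group,MemberName,Write,MEMBER

Command, run from D:\Hyperion\Planning\bin:

ImportSecurity.cmd "PlanApp,admin,password,SL_COMMA,1"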

Monday, February 9, 2009
Load dimension members into Planning
Application using HAL from Relational
Database
Hi Guys,
Good afternoon.
Please find the process of loading metadata into a Planning application using HAL (Hyperion Application Link).
In the example below, I am loading the Employee dimension using HAL from a SQL Server table.

You can also load it from a flat file (a text file or .csv file).

I am using following adapters to complete loading:

1. Variable

2. Planning Adapter and


3. ODBC adapter.

The complete loading flow diagram is shown below:

The first adapter is the Variable adapter. We use this adapter to supply login information and application details.

The second adapter is the Planning adapter. We use this adapter as a connector to the Planning application. This adapter is fed from the Variable adapter. Make sure that you change the setting to connector in this adapter.
In the General tab you can give custom defined name to adapter like Planning
Connector etc.

In the Methods tab you should specify which dimension you are loading in the drop
down menu. In this example we changed the dimension to employee as we are loading
meta data related to employee.
The third adapter is planning adapter again. Make sure that you change the drop down
to load as shown below:

You can specify the name of the adapter. For example Planning load adapter etc.
In the methods section you should specify the dimension you are loading to. In this
example we are using Employee.

The last adapter is ODBC adapter.


You should create a DSN for the relational database in the location where you are
running the HAL job. Then connect to the relational database.

Select the table which has metadata information.


Once you have all the information, connect the ports as shown below, save, and then run to load the metadata.

Hope this helps.

Friday, January 30, 2009
Hyperion Planning Certification Help
Hi Guys,
Here are some Planning questions which could help you get certified in Hyperion Planning. I intentionally didn't post answers for most of them, as you can find those answers in the Planning administrator's guide. Hope this helps.

1. What are the required dimensions in planning?


2. What are the steps you need to take before creating planning application?
3. Which process state signifies that a planning unit is being reviewed by someone
in the organization.
4. What file acts as bridge between Planning and relational data source?
5. When you tag a currency as a reporting currency, which dimensions will it be
part of.
6. What is the maximum number of plan types you can have in planning?
7. Where are the rejected records after loading using HAL are placed in?
8. What are the different user roles in planning security?
9. Planning supports fiscal year calendars. It does not support mixed use of fiscal
year and calendar year (True/False)?
10. What are the data sources supported by hyperion financial reporting?
11.Which type of tasks can you define in a task list?
12. What are the requirements for creating a data form? (Row and column layout, business rules, POV definition, display options, etc.)
14.Fiscal start of year and fiscal start month defines the starting fiscal year and
start month for application. You cannot change this after you have created the
application (True/False)?
15.To which application elements can you assign rights
16.When you enter plan data in the planning web client, planning automatically
distributes values from :
· Summary periods to base periods
17. Do you need to refresh your database once you modify the webforms?
18. How many relational databases are needed for an application, which has
4 plantypes
19.List of places where you can launch business rules from
20.What is the maximum number of dimensions that a planning application can
have?
21. Which can be runtime prompts in business rules?
Member or members
Dimension
String integer, real or percent
22. The POV is always set to _______ for each POV dimension
Single member
Multiple members
none
23.What is the reason for having a sparse dimension? (Data values are not smoothly and randomly distributed throughout the database; data values do not exist for the majority of member combinations in the database.)
24.How many databases will be created for two plan types.
25.By default account and period are dense dimensions in planning application.
26.When creating planning application, what base time period allows you to select
weekly distribution option? (Ans:Monthly)
27.In which dimension your exchange rates are stored?
28.An authentication directory is a centralized store of users and passwords, user
groups etc.
29. What authentication directories are supported by hyperion planning?
30. Which type of tasks can you define in a task list?
Dataform
Descriptive
Workflow
URL
Business rule
31.Because dimension members can belong to more than one plan type you can
specify different aggregation options by plan type.
32.Where does planning store application definition?
33. What happens when you use dynamic calc for base-level members for which
users enter data?
34.Which data form tasks can you perform in smart view?
35. Parent members set to dynamic calc are read only in target version.
36.How many user variables can you setup for each dimension?
37.A parent member set to label only displays the value of its first child member.
38.In multiple currency applications, you cannot apply the label only storage
option to members of the following dimensions: Entity, Version, Currency and
user defined custom dimensions. To store exchange rates, these members
should remain Never Share.
39. What is the calculation order in essbase
Account dimension
Time dimension
Other dense dimensions in top down order
Other sparse dimensions in top down order
Members that are tagged as two pass calcs
40. The Two pass calculation option is used primarily for members of the account
dimension. For other dimension, the two pass calculation option is only valid
for Dynamic calc or dynamic calc and store members.
41.What is the maximum number of alias tables you can have for a dimension?
42.What is a planning unit?
43.For all Hyperion system 9 products, all user provisioning and external
authentication is handled through:
44. After entering or modifying data in a dataform, you must calculate new totals
for parent members in the dataform.
45.When loading files that contain member properties, you must :
Load parent members before children
Have a separate load file for each dimension.
Wednesday, January 28, 2009
How to delete members in planning using
HAL
Hi
good morning.
Deleting members in Planning is challenging to do manually, as it consumes a lot of time, especially if you have more than 5 members to delete. We can use Hyperion Application Link (HAL) to delete precisely the members we don't need.

Here are the steps to follow on how to delete members using HAL

Step:1
Prepare a text document or .csv file with the list of members you want to delete, in Parent, Child format.

Step:2
Open a new flow diagram and add a flat file adapter, a Planning adapter and a Variable adapter as shown below. When you select a Planning adapter, the names and number of ports are determined by the dimension from which members are deleted. Most ports reflect the properties and custom attributes of the selected dimension.

Step:3
Open the Variable adapter, type Delete in the port name column, and in the Initial Value column type one of the following:

Delete Level 0 (deletes the member only if it has no children)
Delete Idescendants (deletes the member and its descendants)
Delete Descendants (deletes the descendants but not the member itself)

Step:4
Upload the flat file with the members to delete as shown below.
After the HAL job executes, refresh the Planning application to push the changes to Essbase.
Tuesday, January 27, 2009
Automation of DataLoad
This document is intended to explain how to automate data load.

Step.1:
Create a data load rule in Essbase

Step.2 :

Create a Maxl file to execute the load rule

Step .3 :

Create a batch file to execute the MaxL file. We can schedule the Batch file using
windows scheduler.

Example of MaxL code.


1. This MaxL code clears the existing data
2. Loads data
3. Aggregates the data
Remember: you need to clear the data before you load. We clear the data because of our requirements; you can use just steps 2 and 3 to load and aggregate the data.

/*Script Start */
/*Login */
LOGIN 'UserName' 'Password' ON 'Server Name';

/*Clears data */
execute calculation '
Fix (@idescendants("Account"),@idescendants(Entity),&CurrentYear,Version)
CLEARDATA Scenario;
Endfix;'
on ApplicationName.DatabaseName;

/* Imports Data */

import database 'ApplicationName'.'DatabaseName' data connect as 'UserName'


identified by 'Password' using server rules_file 'LoadRuleName' on error
write to 'e:\\Logs\Errorfilename.err';

/*Aggregates Data*/
execute calculation'
SET AGGMISSG ON;
SET UPDATECALC OFF;
Fix (Scenario,Year,Version)
CALC DIM(Product,Entity);
Endfix;'
on ApplicationName.DatabaseName;

LOGOUT;
EXIT;
/*End MaxL Script*/

Batch file to call the MaxL script and send an email about the status of the load:

Rem ******************************************************************************
Rem Object Type: Batch File
Rem Object Name: BatchFileName.bat
Rem Script Date: 01-27-2009
Rem Created by: Dornakal
Rem Purpose: This script loads the following data into Cube
Rem Changed By:
Rem Change Date:
Rem Description:
Rem ******************************************************************************
Rem This starts the log file
echo "Start of log" > E:\Logs\Dataload.log

Rem This calls MaxL script to load data

essmsh E:\Scripts\MaxL\DataLoad.mxl >> E:\Logs\Dataload.log

Rem Send mail about the status of the job

sendmail -b E:\Logs\Dataload.log -s "Subject of the mail (data load status)." -f Sender's email -r Receiver's email -X HQSMTP.yourcompany.net

Thursday, January 22, 2009
Certification Questions
Hi All,
Please find some sample questions which can help you get certified in Essbase.

1. Development tool used to build application which can add or delete


dimensions.
2. essbase quick start when is it used
3. multiple fix – dense or sparse
4. multiple if – dense or sparse
5. Member selection wizard in excel. Placement
6. When are hash tables used
7. Number of passes through database (two pass calc) – 2questions
8. Query designer – what cannot be assigned to filter access
9. Sequence of cache files to be optimized based on their priority
10.What causes fragmentation?
11.Which compression type to use based on the statistics information of the database
Is incremental restructuring deferred when you add, move or delete a member of a sparse dimension?
12.What command do you use to optimize data export
13.Max .pag file size
14.Given a load file identify what field or load method should be used.
15.Attribute dimensions
16.What cannot be linked to a cell with an LRO
17.When would you select Shared Member functionality, as opposed to Attribute
dimension functionality?
18.Proper use of substitution variables in the cal scripts
19.Variance calculation – when is it used. Result
20.Given an exhibit identify the improper use of label only tag
21.What are the minimum security settings for LRO’s
22.Accept and reject records based on rules
23.Data source 3 partitions, data target updated. How many .CHG files are
updated
24.Given settings, what is the optimal outline
25.Which partition uses less synchronization
26.Given 2 exhibits from excel, which exhibit will be retrieved
27.hashtbl theory
28.Set commands – 3 questions
29.Dynamic calc and dynamic cal and store properties interchanged what kind of
restructuring happens
30.Attribute calc dimensions
31.Use of attribute dimensions in partitions
32.Identify how many blocks are created given database statistics
33.How many index files will be restructures
34.Validate command
35.Which access a filter can have
36.Fix command and number of passes
37.Fix with datacopy
38.Fix with cross-dimensional operator
39.FRMLBOTTOMUP command
40.Given an outline which dimension is tagged as label only
41.Can u load data into attribute dimension
42.Datacopy syntax
43.beginarchive and endarchive commands
44.NETDELAY command
45.query designer filter access issues
46.Which file holds uncompressed data blocks
47.Fragmentation, causes prevention and resolution
48.Essbase member selection placement in query designer of spread sheet

Hyperion Essbase Interview questions -
Part 2
1. What is the difference between FIX and IF statements? In what situations
they are used and what are its pros and cons?

This is a favourite question which everyone will be asking. IF should be used for dense dimensions and FIX for sparse dimensions.

Reason: remember that a dense dimension member is present in every block of Essbase, so if you FIX on a dense dimension you are asking Essbase to pull all blocks, and the purpose of FIX (pulling only some blocks) is lost. If you FIX on sparse dimensions, only the required blocks are pulled and the calculation will be fast.
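A minimal, hedged calc script sketch of the idea; the member names (East as a sparse Entity member, Sales and Commission as dense Accounts members) and the rates are illustrative only.

/* FIX narrows the index to the sparse member East, so only East's blocks are pulled */
FIX ("East")
   /* The IF test runs inside each dense block that FIX brought into memory */
   "Commission" (
      IF ("Sales" > 100000)
         "Commission" = "Sales" * 0.02;
      ELSE
         "Commission" = "Sales" * 0.01;
      ENDIF;
   );
ENDFIX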

2. In what situation can I not use the AGG MISSING ON setting?

In some cases you may need to load data at a parent level, such as for allocations. For example, you load expense data into the East region and allocate the expenses to various cost centers under East based on headcount or other factors.

If AGG MISSING is ON and we aggregate the database, it will clear the parent value if there is no data for the children. In such cases we don't use the AGG MISSING ON option.
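A hedged calc script sketch; the member names are illustrative. With SET AGGMISSG OFF, a value loaded directly at a parent such as East is preserved when all of its children are #MISSING, whereas with SET AGGMISSG ON the aggregation would overwrite it with #MISSING.

/* Do not aggregate missing values over values loaded at parent level */
SET AGGMISSG OFF;
FIX ("Actual")
   CALC DIM ("Entity");   /* East keeps its loaded value if its children are #MISSING */
ENDFIX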
3. Give me an estimate of the largest Essbase database you have worked on (number of dimensions, members, calcs, etc.).
In my past experience, the biggest BSO Essbase databases were below 10 GB, in most cases less than 5 GB. The number of dimensions was 8 to 10, with a maximum of 4 attribute dimensions. In most cases Account and Period were the dense dimensions. The Entity dimension had around 5,000 members, and no dimension had more than 10,000 members.

4. What process do you follow when developing calc scripts?

a. Analyze the requirements documents.


b. Decide whether it is required to write a calc script or whether we can create formulas in the database outline.
c. Write the calculation script incrementally, i.e. calculate some members at a time and test for optimization.

5. What is the major difference between the CALC DIM and AGG commands?

CALC DIM rolls up one or more specified dimensions and also calculates the formulas associated with their members.
AGG rolls up the specified sparse dimensions but ignores member formulas.
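A minimal sketch of the two commands side by side; Entity and Product are illustrative sparse dimensions.

/* Aggregates Entity and Product and also executes member formulas on them */
CALC DIM ("Entity", "Product");

/* Aggregates the same sparse dimensions but skips member formulas */
AGG ("Entity", "Product");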

For the following questions you can find the answers in the DBAG (Database Administrator's Guide).

What is the significance of data cache?


What are the advantages of attribute dimensions over shared members?
What is the optimum setting index cache and Data file cache?
What type of partitions does ASO support?
Differentiate between dynamic calc and intelligent calc?
How do you optimize data load in BSO?
How do you back up ASO database?
What members cannot be Dynamic Calc?
Have you worked on currency conversion?
What versions of Essbase have you worked on?
How to tackle Essbase Interview

Hi Guys:
Please find some of the interview questions below.
The BSO architecture, for example:

• How blocks are built


• The impact of block size on calculations
• The impact of number of blocks on retrieval

The interviewer looks for real-life scenarios, not bookish answers. For example, if he asks "Tell me about blocks in Essbase":

DBAG says:
"Essbase creates a data block for each unique combination of sparse standard
dimension members (providing that at least one data value exists for the sparse
dimension member combination). The data block represents all the dense dimension
members for its combination of sparse dimension members."

You should not answer the question as if you memorized the definition of a block. You are good if you answer the question like this:

“In BSO the blocks are created for every combination of sparse standard dimension
member provided data existed for that combination.”

In addition you should add the following when you answer the question on blocks.

“Generally blocks are in a compressed state on disk. During calculation they are pulled into the cache and expanded. That's why block size is very important. If the block size is huge, only a few blocks are pulled into the cache at a time, and if a calculation requires 10,000 blocks it takes forever to finish. If the block size is very small, the index file becomes huge and Essbase takes forever to search for the blocks it needs, like a needle in a haystack.”

This answer shows your confidence in the subject and also makes the interviewer think that you have real-world experience.

2.What are the different storage options in Essbase?

There are two different storage options in Essbase. Previously there was only BSO; ASO came into the picture with version 7.

Deciding which storage option to use depends on the business requirements. If the business doesn't care about write-back or complex calculations such as allocations and currency conversions, and needs fast aggregation, then ASO is the answer.

In contrast, if the business requirements need write-back functionality and complex calculations such as intercompany eliminations or allocations based on percentages, then BSO is the answer.
For the above reasons, Hyperion Planning needs to go with the BSO option.

Test on sparse and dense knowledge

"I had business requirements gathered and built the initial dimensions. We don't sell all products in all cost centers. Can you suggest the dimension types for the following dimensions?" (This question is intended to test the concept of dense and sparse in BSO and outline optimization.)

The second table gives the answer to the question.

The outline is arranged in an hourglass structure.

Reason why we should arrange the outline in an hourglass structure:

The hourglass structure makes the index more search-friendly. Assume instead that we have the largest sparse dimension as the first dimension and then the next largest, in that order; when Essbase calculates, it has to search through all of the largest sparse dimension and then the next largest, which takes more time.

In the hourglass structure it takes less time to search the index.

Posted by Dornakal's-Hyperion-Blog at 1:50 PM 2 comments Links to this post


Labels: BSO, Essbase, Hyperion, Interview Questions

Reactions:
Migration of DataForms in Hyperion
Planning
The goal of this post is to explain how to migrate data forms from one environment to
another in planning applications. The automation of this process helps to reduce time
and leaves no scope for human error.

FormDefUtil.cmd utility can be used to move data form definitions from one Planning
application to another. Data form definitions can be exported or imported from an
XML file. This utility is extremely useful when Data forms need to be moved from a
development environment to a production environment.This utility uses a command
line interface and is installed in the bin directory (D:Hyperion\Planning\bin). This
utility runs only on Windows-based systems (even if application servers are on UNIX).
Only administrators can run it. The utility is located on Planning server as shown
below.

Steps to export Data Forms :

1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:

formdefutil export Formname/-all server name user name password application


The utility creates an XML file in D:Hyperion/Planning/bin and logs errors
in FormDefUtil.log in the directory from which you run the utility (by default bin).
You can copy the utility to any directory and launch it from there to save files to
another directory.

Steps to Import Data Forms :

1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:

formdefutil import Location of Formnames server name user name password


application

This completes migration of data forms from one environment to another in planning
applications.
Posted by Dornakal's-Hyperion-Blog at 1:50 PM 2 comments Links to this post
Labels: Migration of DataForms, Migration of WebForms

Reactions:
Automation of Dimension extraction
Hi Guys,
good morning.
today we will learn how to automate dimension extraction from essbase.
There is a very good utility called outline extractor in

http://www.appliedolap.com

Step # 1
As a first step you should install Outline extractor.

Step # 2

Build a batch file providing details of your application and database and dimension
you want to extract as follows:

cd C:\Program Files\olapunderground\Essbase Outline Extractor

win C:\Program Files\olapunderground\Essbase Outline


Extractor\exportdim.exeServerName/AdminName/Password/ApplicationName/Databa
seName/DimensionName/Path where extracted file to be stored/!/
Doc/11111111111111111111/Text
You can schedule the above batch file to run at fixed time using various schedulers
like Window scheduler etc.

Hope this helps.


Have a BLESSED day.
Posted by Dornakal's-Hyperion-Blog at 10:58 AM 0 comments Links to this post
Labels: Automate dimension extraction in essbase

Reactions:
Thursday, February 19, 2009
Sample Report Script

Posted by Dornakal's-Hyperion-Blog at 5:37 PM 0 comments Links to this post


Labels: Report Script

Reactions:
Tuesday, February 17, 2009
Import Security in Hyperion Planning
applications
Hi Guys,
good morning.
lot of times there will be a huge number of requests to grant or remove security
access for users.
for one or two changes, it is simple task, click on member and then assign access and
grant permission. It becomes tiresome when we need to make changes for large
number of members.
it involves considerable amount of time.
Automating the security changes saves considerable amount of time. The
ImportSecurity utility in planning loads access permissions for users or groups from a
text file into Planning. Importing access permissions using Import security utility
overwrites existing access assignments only for imported members, users, or groups.
All other existing access permissions remain intact. The SL_CLEARALL parameter
clears all existing access permissions.

ImportSecurity utility is located at D:\Hyperion\Planning\bin on Planning server as


shown in the following picture:

Steps for importing access permissions:

Step 1: Create a text file and name it as Secfile.txt and save it in bin directory
(D:\Hyperion\Planning\bin). Example of Secfile.txt is shown in picture below where
Planning_Security_Group is group name, MemberName is member name, Write is
Access permission, and MEMBER is relationship.
Step 2: Locate the ImportSecurity utility by navigating to the bin directory.

Step 3: From the Command Prompt, enter this case-sensitive command, one space,
and the parameters. Separate each parameter with a comma, and enclose parameters
with double quotation marks:

ImportSecurity.cmd “appname,username,password[delimiter],[RUN_SILENT],
[SL_CLEARALL]”

Where:
appname : Name of the Planning application importing access permissions.

username : Planning administrator user name.

password :Planning administrator password.

delimiter (Optional) : SL_TAB, SL_COMMA, SL_PIPE, SL_SPACE, SL_COLON, SL_SEMI-


COLON. If no delimiter is specified, comma is the default.

RUN_SILENT ( Optional) : Execute the utility silently (the default) or with progress
messages. Specify 0 for messages, or 1 for no messages.

[SL_CLEARALL] (Optional): Clear existing access permissions when importing new


access permissions. Must be in uppercase.
You can check the results in log file in bin folder.

Posted by Dornakal's-Hyperion-Blog at 3:14 PM 0 comments Links to this post


Labels: Automate Security in Planning, Import Security in Planning

Reactions:
Monday, February 9, 2009
Load dimension members into Planning
Application using HAL from Relational
Database
Hi Guys,
Good afternoon.
please find the process of loading metadata into planning application using HAL
(Hyperion Application Link).
In the example below, I am loading employee dimension using HAL from SQL server
table.

You can also load it from flat file (text file or .csv file)

I am using following adapters to complete loading:

1. Variable

2. Planning Adapter and


3. ODBC adapter.

The complete loading flow diagram is shown below:

The first adapter is Variable adapter.


we use this adapter to give log in information and application details.

The second adapter is Planning adapter. We use this adapter as a connector to


Planning application. This adapter is fed from variable adapter. Make sure that you
change setting to connector in this adapter.
In the General tab you can give custom defined name to adapter like Planning
Connector etc.

In the Methods tab you should specify which dimension you are loading in the drop
down menu. In this example we changed the dimension to employee as we are loading
meta data related to employee.
The third adapter is planning adapter again. Make sure that you change the drop down
to load as shown below:

You can specify the name of the adapter. For example Planning load adapter etc.
In the methods section you should specify the dimension you are loading to. In this
example we are using Employee.

The last adapter is ODBC adapter.


You should create a DSN for the relational database in the location where you are
running the HAL job. Then connect to the relational database.

Select the table which has metadata information.


Once you have all the information. Connect to ports as shown below and save and
then run to load metadata.

Hope this helps.

Posted by Dornakal's-Hyperion-Blog at 2:25 PM 0 comments Links to this post


Labels: HAL, Metadataload, Planning

Reactions:
Automation of Dimension extraction
Hi Guys,
good morning.
today we will learn how to automate dimension extraction from essbase.
There is a very good utility called outline extractor in

http://www.appliedolap.com

Step # 1
As a first step you should install Outline extractor.

Step # 2

Build a batch file providing details of your application and database and dimension
you want to extract as follows:

cd C:\Program Files\olapunderground\Essbase Outline Extractor

win C:\Program Files\olapunderground\Essbase Outline


Extractor\exportdim.exeServerName/AdminName/Password/ApplicationName/Databa
seName/DimensionName/Path where extracted file to be stored/!/
Doc/11111111111111111111/Text

You can schedule the above batch file to run at fixed time using various schedulers
like Window scheduler etc.
Sample Report Script

Import Security in Hyperion Planning


applications
Hi Guys,
good morning.
lot of times there will be a huge number of requests to grant or remove security
access for users.
for one or two changes, it is simple task, click on member and then assign access and
grant permission. It becomes tiresome when we need to make changes for large
number of members.
it involves considerable amount of time.
Automating the security changes saves considerable amount of time. The
ImportSecurity utility in planning loads access permissions for users or groups from a
text file into Planning. Importing access permissions using Import security utility
overwrites existing access assignments only for imported members, users, or groups.
All other existing access permissions remain intact. The SL_CLEARALL parameter
clears all existing access permissions.

ImportSecurity utility is located at D:\Hyperion\Planning\bin on Planning server as


shown in the following picture:

Steps for importing access permissions:

Step 1: Create a text file and name it as Secfile.txt and save it in bin directory
(D:\Hyperion\Planning\bin). Example of Secfile.txt is shown in picture below where
Planning_Security_Group is group name, MemberName is member name, Write is
Access permission, and MEMBER is relationship.

Step 2: Locate the ImportSecurity utility by navigating to the bin directory.


Step 3: From the Command Prompt, enter this case-sensitive command, one space,
and the parameters. Separate each parameter with a comma, and enclose parameters
with double quotation marks:

ImportSecurity.cmd “appname,username,password[delimiter],[RUN_SILENT],
[SL_CLEARALL]”

Where:
appname : Name of the Planning application importing access permissions.

username : Planning administrator user name.

password :Planning administrator password.

delimiter (Optional) : SL_TAB, SL_COMMA, SL_PIPE, SL_SPACE, SL_COLON, SL_SEMI-


COLON. If no delimiter is specified, comma is the default.

RUN_SILENT ( Optional) : Execute the utility silently (the default) or with progress
messages. Specify 0 for messages, or 1 for no messages.

[SL_CLEARALL] (Optional): Clear existing access permissions when importing new


access permissions. Must be in uppercase.
You can check the results in log file in bin folder.

Posted by Dornakal's-Hyperion-Blog at 3:14 PM


Labels: Automate Security in Planning, Import Security in Planning

Reactions:
Load dimension members into Planning
Application using HAL from Relational
Database
Hi Guys,

Good afternoon.

please find the process of loading metadata into planning application using HAL
(Hyperion Application Link).

In the example below, I am loading employee dimension using HAL from SQL server
table.

You can also load it from flat file (text file or .csv file)

I am using following adapters to complete loading:


1. Variable

2. Planning Adapter and

3. ODBC adapter.

The complete loading flow diagram is shown below:

The first adapter is Variable adapter.

we use this adapter to give log in information and application details.


The second adapter is Planning adapter. We use this adapter as a connector to
Planning application. This adapter is fed from variable adapter. Make sure that you
change setting to connector in this adapter.

In the General tab you can give custom defined name to adapter like Planning
Connector etc.
In the Methods tab you should specify which dimension you are loading in the drop
down menu. In this example we changed the dimension to employee as we are loading
meta data related to employee.
The third adapter is planning adapter again. Make sure that you change the drop down
to load as shown below:

You can specify the name of the adapter. For example Planning load adapter etc.

In the methods section you should specify the dimension you are loading to. In this
example we are using Employee.

The last adapter is ODBC adapter.

You should create a DSN for the relational database in the location where you are
running the HAL job. Then connect to the relational database.
Select the table which has metadata information.

Once you have all the information. Connect to ports as shown below and save and
then run to load metadata.
Hope this helps.

Posted by Dornakal's-Hyperion-Blog at 2:25 PM


Labels: HAL, Metadataload, Planning

Reaction
s:

0 comments:

Post a Comment

Links to this post

Create a Link

Newer PostOlder PostHome

Common Mistakes to avoid in Hyperion


Resume
Hi All,
Good evening.
We were recruiting for hyperion developer last year and got many resumes as
response for our advertisement.

Here are some things, which I observed while reviewing those resumes.

1. A candidate has overlapping experience. He claimed that he worked for a client in


Jacksonville, FL and also at other client in San Jose, CA at the same period of time.
We asked how he managed to work at both places (hectic schedule), he had no clue.
If you were working on multiple projects should answer questions related to that or
take care when listing project periods.

2.Putting lot of experience in unrelated technology.

Common why do your prospective employer want to know what you did in cold fusion
coding, when he advertised for hyperion developer.

3.Stating same responsibilities on different projects.

It is possible to have same type of responsibilities in different projects. If so club both


projects and state responsibilities once. The content of resume is more important
than the number of pages. The common myth is that employer will look into resumes,
only if it has more than 5 pages. That is untrue. The candidate, which we hired have a
clean and small resume focusing on main points.

4.Unable to answer what the candidate stated in resume.


Once a candidate stated that he is expert in SQR reporting. SQR was not we were
looking for and didn’t ask for that in our advertisement. We saw that he is an expert
in SQR in resume and asked him a question related to that. He was not able to answer
fundamentals in SQR, which undermined the credibility of the candidate.
Don’t put something which you heard of. Employer will read your resume entirely and
ask questions based on your resume. Very few people have set of questions prepared
to ask for an interview. One should have complete grip on resume. Should have
answers for every word you put in resume. Should not put any unrelated experience.
Like project on IBM mainframes doesn’t help you for the position in Hyperion.

Hope you will take care of above points when building your resume.
Posted by Dornakal's-Hyperion-Blog at 5:36 PM 1 comments Links to this post
Labels: Resume Building

Data Transfer Between Two Hyperion Essbase Cubes

Data transfer from one cube to another.

There are several ways to transfer data from one cube to another:

1. Hyperion Application Link (HAL)
2. Exporting data with a report script and importing it into the new cube
3. JExport
4. @XREF

Today we will look at the @XREF calc script, which is what most people use to transfer data between cubes. A sample @XREF calc script is shown below.

In this example I am transferring payroll, social, bonus, and headcount data from my main P&L (profit and loss) application to the workforce application.

The first step for @XREF is to create a location alias for the source application. In this example my location alias is _LocAliasPL.

You can create a location alias in EAS as follows:
1. Open the application.
2. Right-click the database.
3. Click Edit.
4. Navigate to Location Aliases.
5. Click to create a location alias and give the details of the source cube.

/*XREF Calc Script Code*/

/*Information */
/*
Script Name : XREF
Created by : Dornakal, Hyperion Consultant, March 27, 2009
Purpose : Copy HR data from main application to work force application
Directions : Check location alias
Versions : Essbase 9.3.1
Assumptions : The Accounts and Time dimensions are Dense and the rest of the dimensions are Sparse
*/

/*House Keeping*/

/* Set the calculator cache. */
SET CACHE HIGH;

/* Display calculation statistics in the application log. */
SET MSG SUMMARY;

/* Display calculation completion messages in the application log at 20% intervals. */
SET NOTICE LOW;

/* Turn off Intelligent Calculation. */
SET UPDATECALC OFF;

/* Enable parallel calculation. */
SET CALCPARALLEL 4;

/* Baseline Fix */
FIX(HSP_INPUTVALUE, Local, USD, FINAL, Actual, FY08,
&ActualMnth, &NxtYr, @RELATIVE(Cost_Center,0), "EMPLOYEES")

SET CREATENONMISSINGBLK ON;

"PayRoll" = @XREF(_LocAliasPL, "PRODUCT");
"Social" = @XREF(_LocAliasPL, "PRODUCT");
"Bonus" = @XREF(_LocAliasPL, "PRODUCT");
"Headcount" = @XREF(_LocAliasPL, "PRODUCT");

SET CREATENONMISSINGBLK OFF;

ENDFIX;
/*END MAIN SCRIPT*/

Posted by Dornakal's-Hyperion-Blog at 3:10 PM

Labels: Data Transfer


5 comments:

mahesh said...

Excellent post Dornakal.

I have some queries on this topic. Can you please clarify it for me.
1.Do i need equal number of dimensions on both cubes?

2.suppose i have equal number of dimensions on both cubes but dimensions are
different how will it works?

I think it is working fine in the 2 situation, but functionally is it right?

my suggestion is can you please describe how many dimensions are there and
what are that. so we get good clarity on the script.

Many thanks,
Mahesh.

March 28, 2009 10:18 AM

mahesh said...

Hi Dornakal,

your posts are very interesting.


Can please post some information about EPMA.
I have some knowledge on EPMA.
Can you please tell me how is it working.
what are the situations we use EPMA.
and something about Dimension library, Application library, Data
Synchronization, Application upgradation and all.

Thanks,
Mahesh.

March 28, 2009 10:31 AM

Anonymous said...

Mahesh, you don't need same number of dimentions for data copy or XREF.

Dronakal (yes, I've been reading your blogs) here is a question for you.
You are using "SET CREATENONMISSINGBLK ON;" don't this command create
unnecessary blocks hence increasing size of your database?

Second: What's the advantage of using script vs having @XREF in the outline? I
can think of couple but would like to know what you think.
Third: Are you using something like this in production and if so what is the
performance. You know I asked you this before.

Thanks
Pranay

March 30, 2009 2:55 PM

Dornakal's-Hyperion-Blog said...

Hi Pranay,
Thanks for your comments.

You are using "SET CREATENONMISSINGBLK ON;" don't this command create
unnecessary blocks hence increasing size of your database?

You are right in your comment, but it is when you enable database wide in
database settings.

The purpose of SET CREATENONMISSINGBLK ON; is to create new blocks for the @XREF operation only. I also included SET CREATENONMISSINGBLK OFF; at the end of the script so that it doesn't create unnecessary blocks.

Yes we are using this script in production and don't have any noticible size
increase because of this.

What's the advantage of using script vs having @XREF in the outline?


By using script we can run XREF on need basis, not every time when you do
aggregation.

Are you using something like this in production and if so what is the
performance.
Yes. we are using this script in production. our cube sizes are 4 GB and 6 GB
and script takes less than a 5 minutes to complete.

have a great day Pranay.

March 30, 2009 5:08 PM

Anonymous said...
This creates a cube in the same application, but if we want to copy data to
other application, how can we maupulate it.

Example: We have two application (app1 & app2) with different dimensions,
app1 have 10 dimensions and app2 have 8 dimensions. From app1 we need to
extract all the level0 data and copy in the app2.
Can you suggest me how i can go ahead with the XREF script. If possible can
you give one sample script.

Thanks
Vivek

Sample Calculation Script


/*Information */
/*
Script Name : CopyAct2Fcst
Created by : Dornakal, Hyperion Consultant, March 16, 2009
Purpose : Copy Actuals to Current Forecast
Directions : Set substitution variables CurFcst, Actmnth, CY
Versions : Essbase 9.3.1
Assumptions : The Accounts and Time dimensions are Dense and the rest of the dimensions are Sparse
Comments : This script copies actual data from the Actual scenario to the Forecast scenario;
this rule should be run before every forecast.
*/

/*House Keeping*/

/* Set the calculator cache. */
SET CACHE HIGH;

/* Display calculation statistics in the application log. */
SET MSG SUMMARY;

/* Display calculation completion messages in the application log at 20% intervals. */
SET NOTICE LOW;

/* Turn off Intelligent Calculation. */
SET UPDATECALC OFF;

/* Enable parallel calculation. */
SET CALCPARALLEL 4;

/* Baseline Fix on CurYear, Local Currency, and Level 0 cost centers */
FIX(@LEVMBRS("Cost Center",0), Local)

/* Main Rollup */

/* Copies data for all existing employees of all Expense Accounts from the Actual scenario,
Final version, to the Current Forecast scenario, Working version */
FIX("Existing_Employees", @IDESCENDANTS("Expense Accounts"), Jan:&Actmnth)

DATACOPY Actual->Final TO &CurFcst->Working;

ENDFIX

/* End of baseline Fix */
ENDFIX;

Posted by Dornakal's-Hyperion-Blog at 3:18 PM

Labels: Sample Calc Script


2 comments:

mahesh said...

Hi Dornakal,

I have gone through your script. It is really good. Can you post some more
calculation scripts like this.It really helps me.
Thanks,
Mahesh

March 18, 2009 1:17 AM

Dornakal's-Hyperion-Blog said...

Hi Mahesh,
Thanks for your comment.
I am planning to post more calc scripts and business rules in future.
have a great day.

with best wishes,


Dornakal.

What is Intelligent Calculation? Why Should We Care?
Developing calc scripts Series

What is intelligent calc? Why should I care?

A primary goal in calculation script development is optimization (elimination of extra


passes through database index). To optimize calculation, you can use FIX and IF
statements to focus calculations, or you can use an option called intelligent
calculation.

When you perform a full database calculation, Essbase marks which blocks have been
calculated. If you then load a subset of data, you can calculate only the changed data
blocks and their ancestors. This selective calculation process is intelligent
calculation.

By default, intelligent calculation is turned on. You can change the default setting in the essbase.cfg file or on a script-by-script basis with the SET UPDATECALC OFF command.

Intelligent calculation is based on data-block marking: when intelligent calculation is active, Essbase marks blocks within the index file as clean or dirty during normal processing.
Clean blocks: blocks that do not require calculation.
Dirty blocks: blocks that require calculation.

When intelligent calculation is active, Essbase calculates only the dirty blocks.

Exceptions:
Even when intelligent calculation is enabled, Essbase does not use the intelligent calculation process for CALC DIM statements that do not include all dimensions. Instead, Essbase calculates all relevant data blocks regardless of their clean or dirty status, and all data blocks retain whatever status they had, dirty or clean.

SET CLEARUPDATESTATUS AFTER is a calculation command that engages intelligent calculation for any calc script, regardless of how the script is constructed. Typically, you use this command when you cannot meet the conditions for a CALC DIM on all dimensions.

Example:

SET CLEARUPDATESTATUS AFTER;
FIX(@IDESCENDANTS("Q1"))
CALC DIM (Accounts);
ENDFIX

When you execute a calculation script that includes the SET CLEARUPDATESTATUS AFTER command, data blocks that are marked clean are not calculated, and data blocks that are marked dirty are calculated and then marked clean.

How do you force block marking without calculating?

The SET CLEARUPDATESTATUS ONLY command instructs Essbase to mark the affected data blocks as clean without calculating them.
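A minimal sketch, reusing the "Q1" and Accounts names from the example above: running this script marks the blocks that the FIX touches as clean without actually calculating them.

/* Mark blocks as clean only; no calculation is performed. */
SET CLEARUPDATESTATUS ONLY;
FIX(@IDESCENDANTS("Q1"))
CALC DIM (Accounts);
ENDFIX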

How do blocks become dirty?


In the following cases the data blocks are marked as dirty.
Block creation during data input
Data modification (Lock and send)
Creation or modification of descendant blocks
Database Restructure (both dense and sparse)
What are false negatives and false positives?

Occasionally, clean data blocks are marked dirty (a false negative). In such cases calculation efficiency suffers. A more serious problem, however, is a false positive condition, in which dirty blocks are marked clean; with false positives, data integrity can suffer.

When do false positives arise?

Calculating only a subset of a data block:

Essbase marks blocks at the block level, not at the cell level, so a calculation executed on a subset of cells can cause a false positive condition: only a few cells are calculated, but the whole block is marked clean even though uncalculated cells remain.

Using a FIX statement:

Ancestors of a dirty block are not marked dirty until the descendant dirty block is calculated. A false positive can result if the descendant dirty block is calculated within a FIX statement that does not include the dirty ancestor block. After the calculation, Essbase marks the descendant block clean, while the ancestor block remains marked clean, although it should be marked dirty.
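A minimal sketch of this situation (the member names "New York" and "East" are illustrative and not from the original post; assume "East" is the parent of "New York" in a sparse dimension):

/* "New York" has dirty blocks; its parent "East" is also out of date but sits outside the FIX. */
FIX("New York")
CALC DIM (Accounts);
ENDFIX
/* The "New York" blocks are now marked clean, but "East" was never recalculated; */
/* a later intelligent calculation can skip it as if it were up to date. */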

Following SET CLEARUPDATESTATUS ONLY with unrelated calculations:

You should follow SET CLEARUPDATESTATUS ONLY with a repetition of the section of the script for which you want to force data-block marking. A false positive can occur if SET CLEARUPDATESTATUS ONLY touches blocks that are otherwise dirty and that are never calculated.

Intelligent calculation can provide significant performance benefits in certain situations, but it requires vigilant maintenance of the clean and dirty status of data blocks to avoid skipping the wrong blocks during calculation.

Intelligent calculation is most productively used in interactive or iterative situations in which small, incremental changes are made to a database and it is not necessary to recalculate the entire database. For example, you can use intelligent calculation in the following situations:

During quarter-close periods, allocation rates and adjusting entries may be updated multiple times. You can use intelligent calculation to view the updated results without recalculating the entire database.
In budgeting or forecasting applications, many users typically update units and drivers on a regular basis. In such applications, a short calculation time is imperative so that users can analyze the impact of their updates.

Posted by Dornakal's-Hyperion-Blog at 4:21 PM


Labels: Developing Calc Scripts, Intelligent Calc


1 comments:

parthiv said...

Hello Dornakal,
I saw the interview qustion in ur blog,I dont know these answers could u pls
post these answeres

1)essbase quick start when is it used

2)When are hash tables used

3)query designer filter access issues

4)How many index files will be restructures

5)hashtbl theory

6)Where are the rejected records after loading using HAL are placed in?

7)Identify how many blocks are created given database statistics?

Thank u ,
bye

Implementation Process for an Essbase Database
Hi Guys,
Good morning.
Please find the implementation process for an Essbase database below.

Essbase database implementation includes many steps, and the process is iterative: analysis of the results of one cycle may raise new questions, prompting newly defined business requirements, which in turn may lead to changes in the design.

Analysis and Planning:

1. Identify business results


2. Examine data sources
3. Analyze sample reports
4. Design Essbase Analytics outlines.

Database Creation:

1. Create Essbase outlines


2. Create Load rules
3. Create Calculation scripts

Deployment and Support:

1. Maintain Essbase outlines


2. Manage data flow
3. Analyze data
4. Provide management and user support.

hope this helps.

Posted by Dornakal's-Hyperion-Blog at 9:43 AM

Labels: Implementation of Essbase


2 comments:

parthiv said...
hi
thats great to helping hyperion people. Is this same to planning also. where can
i get the DBAG guide. Could u help me plss.
thanku,
parthiv

March 13, 2009 1:07 PM

Dornakal's-Hyperion-Blog said...

Hi Partiv,
thanks for your comment.
For planning this will be little bit different. I will post that soon.
you can find DBAG in Hyperion Documentation link on my blog.
hope this helps.

March 13, 2009 3:01 PM

Automate HAL Load

Hi All,

Good morning.

Today we will see how to automate a HAL job (flow diagram).

If you are loading a large number of members, HAL chokes if you don't automate it (as an .exe, etc.).

Here are the steps to follow to automate a HAL job.

Step 1:
Complete the flow diagram as shown below.

Step 2:
Drag and drop a Windows Executable from the palette.

Step 3:
Open the Windows Executable and go to the Runtime Target tab.
Give the location of the flow diagram.
Browse to the location where you want to store the executable file.

Step 4:
Check:
• Use file name as runtime target name
• Automatically Build when OK pressed

Step 5:
Don't check anything in the Flow Diagrams tab, as shown below.

Step 6:
Don't check anything in the Management tab, as shown below.

Step 7:
In the Logging tab, give the location of your log file.

Step 8:
Don't check anything in the Profile tab, as shown below.

Step 9:
Check:
• Copy needed Vignette Business Integration Studio DLLs
• Run as Console application
• If you want a status bar, check that box

Step 10:
Hit OK.

Step 11:
You will see your executable file in the list, as shown below.

Step 12:
You can give the location of the executable file in your batch file and run it with a scheduler such as Windows Scheduler.

Posted by Dornakal's-Hyperion-Blog at 3:32 PM


Labels: Automation of HAL Load, HAL, Metadataload

6 comments:

Anonymous said...

hi
Can u post the process of,backup of planning repository?

Thanku in advance

March 11, 2009 10:28 AM

Anonymous said...

Can u pls post sample businesrule n for what it was created.

Thanku

March 11, 2009 4:03 PM

chinni said...

Hi
this is chinni.I urgenty need of the SDLC process of an essbase project
plsssssssssss

thanku in advance

March 12, 2009 1:25 PM

Dornakal's-Hyperion-Blog said...

Hi Chinni,
can you email me at [email protected]

Thanks
Dornakal

March 12, 2009 1:54 PM


Anonymous said...

To backup Planning repository, export all HSP_xxxxx Oracle tables from the
Oracle schema.
Examples of HSP tables - HSP_Object, HSP_Group.
Also, export Business Rules tables from HBR repository relational database.
Here is example of Business Rule - it is same as calc script but you can use
[variable] to prompt and limit members:
FIX (&CurrFcst , "Working", "Local", "HSP_InputValue", [Department])
FIX(&FYofFcstQtr, &fcstmonth: Jan)

"61100" ="61100_ex"+"61100_adj";
"15200" = "15200"+ "15200_new_calc" ;
"61300" ="61300_calc"+"61300_adj";
:
:
:
above HBR is used to total up accounts.
[Department] is variable that is retrieved from Webform

March 13, 2009 4:13 PM

Dornakal's-Hyperion-Blog said...

Thanks for taking time to post on backup of planning and business rule.

March 16, 2009 10:01 AM

Hyperion Planning Installation Steps


This document demonstrates a step-by-step procedure to install the Hyperion Planning software, including the installation and configuration of the Hyperion components and other related software.

The first step is to download Hyperion Planning and the related software from the following website:
http://edelivery.oracle.com/

After entering your details you will be directed to the page below. Select Hyperion Performance Management and BI and download the following products:

1. Hyperion Shared Services Release 9.3.1 Microsoft Windows x86 Installer English
(V11229-01)

2. Hyperion Smart View for Office Release 9.3.1.2 Microsoft Windows x86 Installer
( V12736-01)
3. Hyperion Essbase - System 9 Release 9.3.1.3 Server Windows Installer (V14762-01)

4. Hyperion Essbase Administration Services - System 9 Release 9.3.1.3 Windows


Installer-- (V14792-01)

5. Hyperion Planning - System 9 Release 9.3.1.1 All Platforms Installer English--


V11378-01

After downloading the software, extract the setup files and start the installation.
Make sure that you don't check "Launch configuration utility" in the last step of the first run.
As seen above, complete the installation of all five products below without launching the configuration utility:

Hyperion Shared Services
Hyperion Essbase
Hyperion Essbase Administration Services
Hyperion Planning - System 9 Release 9.3.1.1
Hyperion Smart View

The next step is to create relational databases or repositories for

Shared Services

EAS

Planning System tables

Planning Database

I used SQL Server 2005 to create the above repositories.

After creating the SQL Server repositories, go to the server where you installed Shared Services and open the Hyperion Configuration Utility from the Start menu. We have to configure Shared Services first, then EAS, and then Planning.
The next step is to configure EAS. Open the Foundation Services on the server where you installed EAS and follow the procedure below:
Posted by Dornakal's-Hyperion-Blog at 11:28 AM
Labels: Hyperion Planning Installation and Configuration


18 comments:

Ahamed said...

Thanks for all your time and effort spent keeping this blog up to date.

May 5, 2009 4:52 PM

Dornakal's-Hyperion-Blog said...

Hey Ahamed,
Thanks for your comment.
can you please advise me on the content for the future posts.

with regards,
Dornakal.

May 5, 2009 4:58 PM

Anu said...

Thanks Dornakal.....Itz of gr8 help to all learning Planning....

Coluld you plzmake it as pdf & upload it.....

May 5, 2009 5:11 PM

mahesh said...

Hi Dornakal;

your posts are very helpful.


many thanks for posting the installation process of planning.
long back i request you for installation of planning.

Many thanks...

May 7, 2009 8:21 AM

Dornakal's-Hyperion-Blog said...

Hi Mahesh,
how are you.
I remember your suggestion :-)
I had to procure computer to install planning.
Thanks for your comments.
Please comment your suggestions so that I can post more based on your
suggestions.
have a great day.

May 7, 2009 9:07 AM

Dornakal's-Hyperion-Blog said...

Hi Anu,
I don't know whether I can post PDF in blog.
I can make PDF document and mail you.
please email me at [email protected]

May 7, 2009 9:08 AM

Anu said...

Hi Dornakal,

DO we need to install BI sevices for planning to function....Do we get the


workspace UP if we dont Install BI.....

May 7, 2009 2:18 PM


Anu said...

Dornakal,
Could you post more tutorials on Planning.What does a developer do on a
Planning project....

Appreciate your Effort.

May 7, 2009 2:21 PM

mahesh said...

HI Dornakal,
How are you?

How do we know that there is no user connect to the planning server?

Thanks;
Mahesh

May 17, 2009 1:06 AM

Dornakal's-Hyperion-Blog said...

Hi Mahesh,
good morning.
I am assuming that you are asking who is logged into planning application.
you can check for this information in planning web application (front end)
and look in view statistics.
hope this helps.

May 18, 2009 8:51 AM

Anonymous said...

Hi,

Can i know how many Datasource i have to create in SQL totally.

When i create an instance, im getting an error as stated below,


" System Failur: Error while retrievinf xml file from database"

Please help me to resolve this.

Thanks,
Prabhu
[email protected]

June 10, 2009 3:26 AM

Dornakal's-Hyperion-Blog said...

Hi Prabhu,
i passed through the same error, when i was installing the planning.
what you need to do is to have mixed mode authentication for the sql
repository, which you careate and also be careful with user name and password
when you enter your repository information in your confi utility.
if you have same error i can walk you through.
you can reach me at [email protected]

with best wishes,


Dornakal.

June 15, 2009 3:21 PM

Anonymous said...

Thats really a great effort to take the screen shots and helping the newbies..
Thanks for the initiative and hope we get more valuable inputs from your blog.
I am checking if you could comeout something on writing calculation scripts
and optimizing techniques.

Thanks once again.

Venkat

July 2, 2009 8:08 AM

mahesh said...
Hi Dornakal,
How are you?

i want know, How the planning unit related to enter the budget data into
planning application.

Can you please tell me that how the process inter related one another(i.e
entering budget values and planning unit annotations).
if possible explain with some example.

Thanks,
Mahesh.

July 20, 2009 3:17 AM

Bala said...

Hi Dronakal,
I am exactly following your steps, but i am facing some difficulty in the Essbase
server configuration .
Can you please eloberate the configuration of EAS . and is it necessary to
install weblogic server prior to this step .
My email id is [email protected]

Thanks for all your help


Your efforts are very much appritiated

Balaji

July 24, 2009 11:03 AM

Manohar Singh Rana said...

Hi Dornakal,

How are you. You are doing a very good job. Thanks.
Though I never installed Planning 9.3.1, I will try this.
Can you also post how to proceed with planning from the point this post ends.
Regards
Manohar Rana
August 3, 2009 2:03 AM

Anonymous said...

Very good site with valuable information.


1) I configured MSAD as user directory and "externalize users".
2) In Windows Server 2003 AD added user to MSSQL group.
3) SQL Server 2005 is "Mixed mode Authentication"
4) I see the Windows user in SQL Server.
5) Created databases using Windows user as dbo.
6) In config utility, for db username and password, left it blank.
7) When I was creating an instance for Planning, I got " System Failur: Error
while retrieving xml file from database"
Can this be due to Windows user?

Do you have to use SQL Server authentication to create databases?

Thanks,
Devang

March 3, 2010 2:03 PM

Tamer Elbadawy said...

Dears,
I have a problem to run the workspace after installing and configuring the
application.
I'm sure that all services are up and running.
I'm using the following URL to connect to the workspace :
http://localhost:19000/workspace/index.jsp
is it a correct URL or should I use another one.
please help

May 13, 2010 7:50 AM



Error Codes
Hi All,
You can find a comprehensive list of Essbase error codes at the following website:
www.EssbaseInfo.com
How can I track changes in the outline?
You can use the OUTLINECHANGELOG setting in the ESSBASE.CFG file to capture
outline changes. This setting logs all outline changes into the database_name.OLG
file. The OUTLINECHANGELOG setting allows database administrators to review the
outline revision history and gather enough information to roll back changes if needed.

Example (essbase.cfg):

OUTLINECHANGELOG TRUE | FALSE

TRUE: Essbase logs outline changes into the file database_name.OLG.
FALSE: Essbase does not log outline changes. This is the default.
Posted by Dornakal's-Hyperion-Blog at 9:47 AM
Labels: Essbase

How to Delete a Dimension from a Hyperion Planning Application
If you create a custom dimension in planning and wish to delete it, here is how you
might remove it from SQL manually. We recommend that only someone familiar with
relational databases attempt this.

Outlined below are the steps to remove a dimension from the Oracle relational store.
The directions are for SQL, but the table names should be the same in Oracle.

This procedure should only be performed by your SQL or Oracle DBA.

BE SURE YOU HAVE A BACKUP OF YOUR APPLICATION BEFORE ATTEMPTING THIS


PROCEDURE.

You will need to delete all the children first, then the dimension.

If the dimension is used in forms, you may also need to (after step 5) go into the HSP_FORMOBJ_DEF_MBR table and delete all references to that object ID, then delete the dimension ID from the HSP_FORM_LAYOUT table.
Recycle the Planning services.

The following assumptions are being made about the dimension:

* No forms have any references to the dimension.


* Any and all attribute dimensions have been removed from the dimension.
* Any and all members have been removed from the dimension.
* All security access has been removed from dimension.
* Any Alias associated with the dimension has been removed.

In this example we have created a dimension called Dummy Dimension.

1. Open the Enterprise Manager for SQL server


2. Open the database that has your planning application
3. Open the table HSP_OBJECT
4. You need to find the row that has the dimension name you want to delete:
5. Take note of the OBJECT_ID in my example it is 50051, don't delete it yet.
6. Open the table HSP_MEMBER
7. Find and delete any rows that have a MEMBER_ID equal to the OBJECT_ID from step
5.
8. Close the Table HSP_MEMBER
9. Open the table HSP_MRU_MEMBERS.
10. Find and delete any rows that have a DIM_ID equal to the OBJECT_ID from step 5.
11. Close the table HSP_MRU_MEMBERS.
12. Open the table HSP_DIMENSION
13. Find and delete any rows that have a DIM_ID equal to the OBJECT_ID from step 5.
14. Close the table HSP_DIMENSION
15. Also delete its reference from the HSP_UNIQUE_NAMES table.
16. Delete the row in the HSP_OBJECT table that you found in step 4.
17. Close the HSP_OBJECT table.
18. Open Essbase Application Manager for the application and remove the dimension
from the necessary Essbase outlines.
19. Restart the application server.

Now when you open the application in Planning the dimension will be removed.
The next step is to perform a refresh to make sure planning and Essbase are in sync.
Posted by Dornakal's-Hyperion-Blog at 5:23 PM
Labels: Delete Dimension in Hyperion Planning

Hyperion Essbase Overview
Please find below some Q&A that provide an overview of Essbase.

What is Essbase OLAP server?


An OLAP server is a multidimensional database for storing data with an unlimited
number of dimensions such as time, region, accounts, channel or products. Essbase
server manages analytical data models, data storage, calculations, and data security.

What are the Essbase Application tools?


Applications tools are used for extending Essbase applications
Spreadsheet Add-in
Smart view for Microsoft applications
Currency Conversion
Essbase SQL interface
Essbase Application Programming Interface (API)

What is Partitioning?
Partitioning lets you copy a slice of a large database to work with it locally, or link from your database directly to other databases.

What is a Hybrid Analysis?


Hybrid analysis integrates relational databases with Essbase databases to combine the
size and scalability of the relational database with conceptual power and analytical
capabilities of the multidimensional Database.

What are administrative requests and client requests?


Administrative requests such as
· Logging in and logging out
· Starting and stopping applications and databases,
· Viewing users security information
are handled by the Essbase server agent

Client requests such as


Data loads
Spread sheet reports
Data lock and unlock
are handled by the application server process (ESSSVR).

What is the use of multidimensional database such as Essbase?


A multidimensional database supports multiple views of data sets for users who need to analyze the relationships between data categories. For example, a marketing analyst needs detailed information in a different view than a manager does.
A multidimensional database consolidates and calculates data to provide different views. Only the database outline, the structure that defines all elements of the database, limits the number of views.
With a multidimensional database, users can pivot the data to see information from different viewpoints, drill down to find more detailed information, or drill up to see an overview.
Relational databases hold more data, including all transaction-level detail, whereas Essbase has practical limits on the data it can hold. In Essbase, security filters can be applied down to the individual data cell, whereas in an RDBMS security is typically granted only at the table or view level, and the data is presented in a single, fixed view.

What are Standard dimensions?


Standard dimensions are those that represent the core components of the business plan and often relate to departmental functions.
Examples of standard dimensions are:
Time
Accounts
Products
Market

Dimensions are static in most databases; database dimensions rarely change over the life of an application.

What is an Outline?
Outline is the structure that defines all elements of the Database. It dictates how
data is to be stored into the database. Outline defines the structural relationship
between the members in DB. Organizes all data in the DB
Defines consolidations and mathematical relationships between members
It also defines the types of dimensions, aliases, member formulas, and so on.
A typical outline order is: dense dimensions, followed by sparse dimensions, followed by attribute dimensions.

How do you order the Outline?


All attribute dimensions should be placed at the end of the outline. Dense dimensions should be placed first, followed by the sparse dimensions.
The order determines:
How quickly calculations run
How long it takes users to retrieve information
The order of the outline for query performance
· Dense
· Most queried Sparse
· Least queried Sparse
· Most queried Attribute
· Least queried Attribute

The order of Outline for Calculation time


· Dense
· Smallest Sparse dimension
· Largest Sparse
· Smallest Attribute
· Largest Attribute

What is the highest level of consolidation in the outline?


Dimension

Is there any limitation on number of members?


No. Essbase does not limit the number of members within a dimension and allows you
to add new members as needed.

Parent is a member, which has a branch below it.


Child is a member, which has a parent above it.
Siblings are the child members of same immediate parent, at the same generation.
Descendants are all members in branches below a parent.
Ancestors are all members in branches above a member.
Root is the top member in a branch.
A leaf member has no children.
Generation numbers count the consolidation levels within a dimension starting from the dimension root; levels count branches upward from the leaf (level 0) members.

What is a cell/data value?


A data value is defined by the intersection of all standard dimensions in the database.

What is the maximum number of values in the database?


The product of the number of members in each dimension.
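For example (illustrative sizes, not from the original post): a database whose dimensions contain 12, 20, 50, and 100 members has at most 12 x 20 x 50 x 100 = 1,200,000 potential data values, although in practice only a fraction of those cells hold data.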
Why do we need to classify dimensions as dense and sparse?
Because data is not smoothly and uniformly distributed, and values do not exist for the majority of member combinations, Essbase uses the dense/sparse classification to speed up data retrieval while minimizing memory and disk requirements.
A sparse dimension is a dimension with a low percentage of available data positions filled.
A dense dimension is a dimension with a high probability that one or more data points are occupied in every combination of dimensions.

What are the two types of internal structures in Essbase?


Data blocks
Index system
The two types of internal structures are to store data and access data.

What is a Data block?


A data block is a multidimensional array created for each unique combination of sparse standard dimension members (provided that at least one data value exists for that sparse member combination).

The data block represents all the dense dimension members for its combination of sparse standard dimension members.

Its size depends on the number of dense dimension members.

How does Essbase calculate the data?


Top down order

What is an Index entry?


The index entry provides a pointer to the data block.
Essbase creates an index entry for each data block. The index entry represents the combination of sparse standard dimension members; there is an entry for each unique combination of sparse standard dimension members for which at least one data value exists.

What happens if you make all dimensions sparse?


Data blocks are created for each unique combination of sparse standard dimension members, so a very large number of very small data blocks are created, along with a huge index containing pointers (addresses) to all those blocks.

A huge index consumes a large amount of memory, and the more index entries there are, the longer Essbase searches to find a specific block.

What happens if you make all dimensions dense?


Only one huge, sparse data block is created, with only one index entry. This configuration can require thousands of times more storage than other configurations, and Essbase must load the entire data block into memory when it searches for a data value, which requires an enormous amount of memory.

What is the Design Process?


Analyze business needs and plan the database
Define the database outline
Check the system requirements
Load the test data into the database
Define calculations
Define reports
Verify with the users
Repeat the process

What are the different types of data sources?


Flat files (column formatted)
Spreadsheet files
Any RDBMS files

What is a shared member?


The shared member concept lets two members with the same name share data. The shared member stores only a pointer to the data contained in the other member, so Essbase stores the data only once. Shared members must be in the same dimension.

What dimension can be given time balance properties?


Only accounts dimensions can be given time balance, expense reporting, and country
and currency properties.

What is the use of variance reporting?


Variance reporting properties define how Essbase calculates the difference between actual and budget data for members with @VAR or @VARPER functions in their member formulas (see the formula sketch below).
· Expense reporting (Budget - Actual)
· Non-expense reporting (Actual - Budget)
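As a sketch (the member names "Variance" and "Variance %" are hypothetical, not from the original post), such member formulas look like this; for accounts tagged as Expense, @VAR returns Budget minus Actual, otherwise Actual minus Budget:

/* Variance members on the Scenario dimension (hypothetical names). */
"Variance" = @VAR(Actual, Budget);
"Variance %" = @VARPER(Actual, Budget);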
What is a Function?
A function is a predefined routine that carries out a specific task or calculation.
What is a formula?
Formulas calculate relationships between members in the database outline.
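A minimal illustration (the member "Total Compensation" is hypothetical and not from the original post; "PayRoll" and "Bonus" appear in the sample scripts above): a formula attached to a member in the outline, or written in a calc script, might be:

/* Hypothetical member formula combining two stored accounts. */
"Total Compensation" = "PayRoll" + "Bonus";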

What is a dynamic Calc?


When you tag a member as Dynamic Calc, Essbase calculates that member's values only when a user retrieves data, instead of pre-calculating them during the regular database calculation.
Dynamic Calc shortens the regular calculation time but may increase the retrieval time for dynamically calculated data values.

What are the advantages of Dynamic calc?


Low disk space
Reduced database restructure time
Reduced back up time
Optimum regular calculation time

What are the members for which Two-pass calculations can be given?
Accounts
Dynamic calc
Dynamic calc and store

How can a user view data?


Spread sheet
Printed reports
Reports published on web
web analysis
smart view

Can you load data or calculate data on client machine?


No.
Applications and Databases created on client machine are used only to store database
objects, such as outlines and calc scripts. You cannot load or calculate data on a
client machine.

What are Database Objects?


Files that are related to databases are called Objects. Common types of objects are:
Database Outlines (.OTL)
Report Scripts (.REP)
Calculation Scripts (.CSC)
Data Load rules and Dimension build rules (.RUL)
Data sources
Security definitions
LRO’s (Linked Reporting Objects)
Partition definitions

What is a rule file?


Data load rules are set of operations that Essbase performs on the data from external
data source file as it is loaded, or copied into the Essbase database.
Specifying the data load rules is the most common way to load data into the
Database.
Dimension build rule files create or modify an outline dynamically based on the data
in the external source file.

What is a calculation script?


A calc script is a text file with a set of instructions telling Essbase how to calculate data in the database. It has a .csc extension.
What is a Report script?
A report script is a text file with a set of instructions on data retrieval, formatting, and output, used to create a report from the database. A report script has a .rep extension.

What is a Linked Reporting Object (LRO)?


An LRO is an object associated with a specific data cell in an Essbase database. An LRO can be any of the following:
A paragraph of descriptive text (a "cell note")
A separate file that contains text, audio, video, or graphics
A URL (Uniform Resource Locator)
A link to data in another Essbase database

What are “.EQD” files?


Within the Spreadsheet Add-in, users can create queries using Query Designer. Users can save the reports in the form of queries (.eqd files).

What are “.sel” files?


With the spreadsheet add-in, users can define member retrievals with the member
select feature. If users want to save member select specification, they can do so with
a “.sel” file.
How can you create a database?
Application Manager: File > New > Database
Essbase Administration Services console
ESSCMD: CREATEDB
MaxL: create database

Application and database names should be no longer than 8 characters.

To create an application, use create application in MaxL or CREATEAPP in ESSCMD.

What is annotating database?


A database note can provide useful information when you need to broadcast messages to users about the status of the database, deadlines for updates, and so on.

Select Database > Set Note.

What are substitution variables?


Substitution variables act as global placeholders for information that changes regularly. Each variable has a value assigned to it, and the value can be changed at any time by the database designer, so manual changes are reduced. Example: CurrMnth.

You cannot use substitution variables in formulas that are applied to the database outline. The value of a substitution variable cannot be more than 256 characters.

Substitution variables can be used only in:
Calculation scripts
Report scripts
Spreadsheet Add-in
SQL Interface

They are managed in EAS under Server > Substitution Variables, in MaxL via alter system, alter application, and alter database, or in ESSCMD via CREATE VARIABLE, DELETE VARIABLE, and UPDATE VARIABLE (a usage sketch follows this list).
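A minimal usage sketch, assuming a substitution variable named CurrMnth has already been defined on the server (the scenario name is illustrative, not from the original post):

/* &CurrMnth is replaced with the variable's current value when the script runs. */
FIX(&CurrMnth, "Actual")
CALC DIM (Accounts);
ENDFIX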

What is a location alias?


A location alias maps an alias name for a database to the location of that database.
You can use location aliases only with the @XREF function. With @XREF you can retrieve a data value from another database to include in a calculation in the current database; the location alias points to the database from which the value is retrieved.
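For example (a sketch with an illustrative alias and member name, not from the original post):

/* _SourceAlias is a location alias pointing at the source database. */
"Net Revenue" = @XREF(_SourceAlias, "Net Revenue");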

Database > Location Aliases

Create:
MaxL: create location alias
ESSCMD: CREATE LOCATION

Edit / Delete:
MaxL: display location alias, drop location alias
ESSCMD: LIST LOCATIONS, DELETE LOCATION
What happens if you open the same outline in two instances?
If you open the same outline in two instances of Application Manager using the same login ID, each save overwrites the changes made in the other instance.

Copying a database:
Application Manager: Database > Copy
MaxL: create database as
ESSCMD: COPYDB

What are important points while building an outline?


· All members and alias names should be unique;
· Only one dimension should be tagged as accounts, time, currency type and country;
· Level “0” members cannot be label only;
· Level “0” members cannot be assigned formulae but dynamic calc members of
standard dimensions may have formula;
· Dynamic calc should not have more than 100 children;
· Boolean attribute dimensions have only two members.

What are the restructuring options in saving database?

· All data
· Level 0 data (when all data in the outline is at level 0)
· Input data (when you load data into non level 0 members)
· Discard all data (when you expect to reload data or when outline is radically
changed that no existing data applies)

How do you set dense and sparse settings?


Settings>data storage
Data dictionary button
· You must set the standard dimensions with which you plan to associate attribute
dimension as sparse because attributes can only be associated to sparse standard
dimensions.
· Application manager automatically sets attribute dimensions as sparse.

How do you rename members?


· Data dictionary button
· Edit>properties
· Manually

When does a DB restructure?


When you add, delete, or move non-attribute (standard) dimensions or members, Essbase restructures the database and you must recalculate your data.

What is Metadata?
Metadata is data about data. It is the data contained in the database outline that describes the values within a database.
Example:
East > New York > Cola > Sales > 10000
Here, East > New York > Cola > Sales is metadata.

What are different types of dimension tags?


· Time
· Accounts
· Country
· Currency
· Attribute

Can you add time members that are not tagged as time?
Yes

When do you give an accounts tag to a dimension?


You can tag a dimension as accounts if it contains items that you want to measure
such as profit or inventory.

Time dimension tag says how often you collect and update data. The time dimension
enables several accounts dimension functions such as first and last time balances.

What is the significance of time balance properties?


When you set a time balance property on a member in an accounts dimension, it
affects how Essbase calculates the parent of that member in the time dimension.
· TB FIRST (The parent value is the value of the first member in the branch)
· TB LAST (The parent value is the value of the last member in the branch)
· TB AVG (The parent value represents the average value of the children)
· TB NONE (default; rolls up parents in the time dimension in the usual way)
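For example (illustrative numbers): if Jan = 100, Feb = 120, and Mar = 130, a TB FIRST member reports Q1 = 100, TB LAST reports Q1 = 130, and TB AVG reports Q1 = 116.67, while TB NONE simply adds the months to 350.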

Skip Properties
· None
· Missing
· Zeros
· Missing and zeros
Skip properties, expense reporting, Time Balance properties are applicable only to
the accounts dimension.

What is a Two-Pass calculation?


By default, Essbase calculates outlines from the bottom up, first calculating the values of children and then the values of their parents. Sometimes, however, the value of a member may depend on the value of its parent or on the values of other members in the outline. To obtain correct values for these members, Essbase must first calculate the outline and then recalculate the members that depend on the calculated values of other members. Members that are calculated on this second pass through the outline are called two-pass calculation members (see the illustration below).

Only Accounts, Dynamic Calc, and Dynamic Calc and Store members can be given the two-pass calculation property.
Edit > Properties
Data dictionary button
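A common illustration (the member names are hypothetical, not from the original post): a ratio member must be recalculated after consolidation, otherwise the parent value would simply be the sum of its children's ratios. The member carries this formula and is tagged Two-Pass Calculation in the outline:

/* Member formula on "Margin %", tagged as a two-pass member. */
"Margin %" = "Margin" % "Sales";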

What do the consolidation properties do?

Member consolidation properties determine how children roll up into their parents. The default operator is (+).

Essbase does not use consolidation properties for attribute dimensions.

Essbase automatically tags members of attribute dimensions as Dynamic Calc; you cannot change this setting.
When do you use label only?

Label-only members are used when no data is associated with a member; they exist only to ease navigation and reporting from the Spreadsheet Add-in.
You cannot associate attributes with a label-only member. If you tag as label only a base member that has attributes associated with it, Essbase removes the attributes and displays a warning message.

Posted by Dornakal's-Hyperion-Blog at 4:07 PM

Labels: Essbase Overview, Hyperion Essbase BSO, Hyperion Essbase Interview questions, Important Points in
Hyperion Essbase


1 comments:

Anonymous said...

Essbase objects .. like reports is not .RUL . please correct that question. FYI
Rule file is .RUL

October 22, 2009 12:27 AM

Fragmentation in Essbase (BSO)


• What is Fragmentation?
Fragmentation is unused disk space.
• When does Fragmentation occur?

Fragmentation is likely to occur with the following:


Read/write databases that users are constantly updating with data

Databases that execute calculations around the clock

Databases that frequently update and recalculate dense members

Data loads that are poorly designed

Databases that contain a significant number of Dynamic Calc and Store members

Databases that use an isolation level of uncommitted access with commit block set to zero

• How can you measure fragmentation?

You can measure fragmentation using the average clustering ratio or the average fragmentation quotient.

Using the average fragmentation quotient:

Any quotient above the high end of the range indicates that reducing fragmentation may help performance, with the following qualifications:

The reported value of the fragmentation quotient is more accurate when there are no other write transactions running on the database.

For databases smaller than 50 MB using the Direct I/O access mode, the fragmentation quotient tends to be high. A high fragmentation quotient does not necessarily indicate a need to reduce fragmentation, because free space is created in 8 MB chunks and all of it might not get used right away.

Database size and fragmentation quotient threshold:
Small (up to 200 MB): 60% or higher
Medium (up to 2 GB): 40% or higher
Large (greater than 2 GB): 30% or higher

Using the average clustering ratio:

The average clustering ratio database statistic indicates the


fragmentation level of the data (.pag) files. The maximum value, 1,
indicates no fragmentation.
• How can you prevent and remove fragmentation?

You can prevent and remove fragmentation as follows:

To prevent fragmentation, optimize data loads by sorting load records based upon sparse dimension members (see the Database Administrator's Guide for a comprehensive discussion of optimizing data loads by grouping sparse members).

To remove fragmentation, perform an export of the database, delete all data in the database with CLEARDATA, and reload the export file.

To remove fragmentation, force a dense restructure of the database.

Posted by Dornakal's-Hyperion-Blog at 9:31 AM


Labels: Essbase Database Optimization, Force Restructure, Fragmentation in Essbase

Correct Order of Restarting Planning Services

If you are a Planning admin and have to restart the Planning services, here is the order to follow.

Stop Order
1. Planning Service
2. RMI Registry
3. AAS (Analytic Administration Services)

Start Order
1. AAS (Analytic Administration Services)
2. RMI Registry
3. Planning Service
Smart View Retrieve Error

One of my business users received an error when she was running a Smart View query
that had a lot of attributes in her retrieve sheet.

The error she received is as follows:


"The request timed out. Contact your administrator to increase netRetrycount and
netRetryInterval"

Even though I increased NETDELAY and NETRETRYCOUNT in essbase.cfg, that did not
resolve the problem. She was using IE7 on XP SP3.
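
For reference, these are server-side settings in essbase.cfg. The lines below are only an illustrative sketch (the documented defaults are NETDELAY 200 and NETRETRYCOUNT 600); in this case they did not help because the timeout was happening on the client side:

   NETDELAY 1600
   NETRETRYCOUNT 800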

Here is the reason and resolution for the issue:

By design, Internet Explorer imposes a time-out limit for the server to return data.
The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for
versions 5.x and 6. Also, the time-out limit is 30 seconds for Windows Internet
Explorer 7. As a result, Internet Explorer does not wait endlessly for the server to
come back with data when the server has a problem.
Apparently the query the user was running took around 45 seconds, so the IE7
time-out setting was causing it to error out.

We need to add three new registry keys to resolve the issue.

1. Open the Registry Editor (Start -> Run -> Regedit) and locate the following section:
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]

2. Create the following new DWORD values:
"ReceiveTimeout"=dword:00dbba00
"KeepAliveTimeout"=dword:00180000
"ServerInfoTimeout"=dword:00180000

3. Restart the machine for the new settings to take effect.
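
If you prefer to script the change, the same keys can be collected into a .reg file and imported. This is only a restatement of the values above in importable form; note that dword: data in a .reg file is interpreted as hexadecimal, so confirm whether your intended timeout values are hex or decimal before importing:

   Windows Registry Editor Version 5.00

   [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
   "ReceiveTimeout"=dword:00dbba00
   "KeepAliveTimeout"=dword:00180000
   "ServerInfoTimeout"=dword:00180000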

If the above settings do not make any difference and the version of Smart View is
9.3.1.2, it may be necessary to upgrade to Smart View 9.3.1.4 or a higher version.
To create each value in the Registry Editor:

1. Right-click and select New > DWORD (32-bit) Value.
2. Rename the new value to KeepAliveTimeout.
3. Right-click the value and select Modify.
4. Select Decimal and enter the data 00180000.
5. Repeat the same process for the other two keys, then validate that all three keys are present.
Hyperion Essbase Questions

Why is database restructuring needed?

As your business changes, you change the Essbase database outline to capture new product lines,
provide information on new scenarios, reflect new time periods, etc. Some changes to a database
outline affect the data storage arrangement, forcing Essbase to restructure the database.

What are the types of database restructuring?

A database restructure is triggered in two ways:

• Implicit restructures, which can be one of the following:
   • Dense restructure
   • Sparse restructure
   • Outline-only restructure
• Explicit restructures

What are the conditions affecting Database restructuring?

Intelligent Calculation, name changes, and formula changes affect database restructuring:

· If you use Intelligent Calculation in the database, all restructured blocks are marked as dirty
whenever data blocks are restructured. Marking the blocks as dirty forces the next default
Intelligent Calculation to be a full calculation.
· If you change a name or a formula, Essbase does not mark the affected blocks as dirty.
Therefore, you must use a method other than full calculation to recalculate the member or the
database.

What are the files used during Restructuring?

When Essbase restructures both the data blocks and the index, it uses the files described below:

essxxxxx.pag   Essbase data file
essxxxxx.ind   Essbase index file
dbname.esm     Essbase kernel file that contains control information used for database recovery
dbname.tct     Transaction control table
dbname.ind     Free-fragment file for data and index free fragments
dbname.otl     Outline file, which defines all metadata for a database and how data is stored

What are the actions that improve performance for restructuring?

There are a number of things you can do to improve performance related to database
restructuring:

· If you change a dimension frequently, make it sparse.
· Use incremental restructuring to control when Essbase performs a required database restructuring.
· Select options when you save a modified outline that reduce the amount of restructuring required.

Which restructure operations are faster?

These types of restructure operations are listed from fastest to slowest:

· Outline only (no index or data files)
· Sparse (only index files)
· Dense (index files and data files) as a result of adding, deleting, or moving members and other operations
· Dense (index and data files) as a result of changing a dense dimension to sparse or changing a sparse dimension to dense

What is an Implicit Restructure?

Essbase initiates an implicit restructure of the database files after an outline is changed using
Outline Editor or Dimension Build. The type of restructure that is performed depends on the type
of changes made to the outline

What is an Explicit Restructure?


When you manually initiate a database restructure, you perform an explicit restructure. An
explicit restructure forces a full restructure of the database. A full restructure comprises a dense
restructure plus removal of empty blocks.

What is Dense restructure?

If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in
the data files and creates new data files. When Essbase restructures the data blocks, it regenerates
the index automatically so that index entries point to the new data blocks.

Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense
restructure you need to recalculate the database.

What is Sparse restructure?

If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index
and creates new index files.

Restructuring the index is relatively fast; the amount of time required depends on the size of the
index.

What is Outline-only restructure?

If a change affects only the database outline, Essbase does not restructure the index or data files.

Member name changes, creation of aliases, and dynamic calculation formula changes are
examples of changes that affect only the database outline.

Explain the process of dense restructure?

To perform a dense restructure, Essbase does the following:

1. Creates temporary files that are copies of the .ind, .pag, .otl, .esm, and .tct files. Each
temporary file substitutes either N or U for the last character of the file extension, so the
temporary file names are essxxxxx.inn, essxxxxx.pan, dbname.otn, dbname.esn, and dbname.tcu.
2. Reads the blocks from the database files copied in step 1, restructures the blocks in memory,
and then stores them in the new temporary files. This step takes the most time.
3. Removes the database files copied in step 1, including the .ind, .pag, .otl, .esm, and .tct files.
4. Renames the temporary files to the correct file names: .ind, .pag, .otl, .esm, and .tct.

Explain the process of sparse restructure?

When Essbase does a sparse restructure (restructures just the index), it uses the following files:
· essxxxxx.ind
· dbname.otl
· dbname.esm

What is data compression?


Essbase allows you to choose whether data blocks that are stored on disk are compressed, as well
as which compression scheme to use. When data compression is enabled, Essbase compresses
data blocks when it writes them out to disk. Essbase fully expands the compressed data blocks,
including empty cells, when the blocks are swapped into the data cache.

Generally, data compression optimizes storage use. You can check compression efficiency by
checking the compression ratio statistic.

What are types of data compression?

Essbase provides several options for data compression:

1. Bitmap compression, the default. Essbase stores only non-missing values and uses a
bitmapping scheme. A bitmap uses one bit for each cell in the data block, whether the cell
value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to
store every non-missing cell. In most cases, bitmap compression conserves disk space more
efficiently. However, much depends on the configuration of the data (see the worked example after this list).

2. Run-length encoding (RLE). Essbase compresses repetitive, consecutive values --any value
that repeats three or more times consecutively, including zeros and #MISSING values. Each data
value that is repeated three or more times uses 8 bytes plus a 16 byte repetition factor.

3. zlib compression. Essbase builds a data dictionary based on the actual data being compressed.
This method is used in packages like PNG, Zip, and gzip. Generally, the more dense or
heterogeneous the data is, the better zlib will compress it in comparison to bitmap or RLE
compression.

4. Index Value Pair compression. Essbase applies this compression if the block density is less
than 3%. Index Value Pair addresses compression on databases with larger block sizes, where the
blocks are highly sparse; it is not used with zlib.

5. No compression. Essbase does not compress data blocks when they are written to disk
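
As a worked illustration of the bitmap arithmetic above (the block dimensions here are hypothetical, not taken from any particular outline): a block with two dense dimensions of 12 and 40 stored members has 12 x 40 = 480 cells. Uncompressed, the block occupies 480 x 8 = 3,840 bytes. With bitmap compression, the bitmap itself takes 480 bits = 60 bytes and only the non-missing cells are stored, so a block with 100 populated cells needs roughly 100 x 8 + 60 = 860 bytes, plus block-header overhead.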

When do you use RLE over Bitmap Compression?

Use RLE over bitmap compression when:

· The average block density is very low (< 3%).
· The database has many consecutive repeating values.

When do you disable compression?

You may want to disable data compression if blocks have very high density (90% or greater) and
have few consecutive, repeating data values. Under these conditions, enabling compression
consumes resources unnecessarily and can become a drain on the processor. Also consider skipping
compression if disk space and memory are not an issue for your application.

What are data locks?

Essbase issues write (exclusive) locks for blocks that are created, updated, or deleted, and issues
read (shared) locks for blocks that should be accessed but not modified. By issuing the
appropriate locks, Essbase ensures that data changed by one operation cannot be corrupted by a
concurrent update.

What is a transaction?

When a database is in read/write mode, Essbase considers every update request to the server
(such as a data load, a calculation, or a statement in a calculation script) as a transaction.

What is transaction control file?

Essbase tracks information about transactions in a transaction control file (dbname.tct).

The transaction control file contains an entry for each transaction and tracks the current state of
each transaction (Active, Committed, or Aborted).

What is isolation level and what are the types of isolation levels?

Isolation levels determine how Essbase commits data to disk. Essbase offers two isolation levels
for transactions --committed access and uncommitted access (the default).

What is committed access?


When data is committed, it is taken from server memory and written to the database on disk.
Essbase automatically commits data to disk. There are no explicit commands that users perform
to commit data blocks.

Talk about committed and uncommitted access?

Committed access:

· Data is committed at the end of the transaction; the changes are held until then.
· All blocks in question are locked for the duration of the transaction.
· Pre-image access: if enabled, other users are allowed read-only access to the data as it was before the transaction.
· Wait times can be set to indefinite, immediate access (no wait), or a specified number of seconds.

Uncommitted access:

· Data is committed only at synchronization points.
· Locks are acquired block by block.
· Commit row: the number of rows of data loaded before a synchronization point occurs.
· Commit block: the number of blocks modified before a synchronization point occurs.
· For rollback, set Commit Row = 0 and Commit Block = 0.

What are the advantages and disadvantages of using committed access?

You can optimize data integrity by using committed access.

Setting the isolation level to committed access may increase memory and time requirements for
database restructure.

Which transaction is always in committed mode?

The Spreadsheet Add-in Lock and Send operations and the Grid API always use committed
access mode.

What are the memory caches used by Essbase to coordinate memory usage?

Essbase uses five memory caches to coordinate memory usage:

1. Index cache
2. Data file cache
3. Data cache
4. Calculator cache
5. Dynamic calculator cache

What is Index cache?

The index cache is a buffer in memory that holds index pages. How many index pages are in
memory at one time depends upon the amount of memory allocated to the cache.

What is Data file cache?

The data file cache is a buffer in memory that holds compressed data files (.pag files). Essbase
allocates memory to the data file cache during data load, calculation, and retrieval operations, as
needed. The data file cache is used only when direct I/O is in effect.

What is Data cache?


The data cache is a buffer in memory that holds uncompressed data blocks. Essbase allocates
memory to the data cache during data load, calculation, and retrieval operations, as needed.

What is Calculator cache?

The calculator cache is a buffer in memory that Essbase uses to create and track data blocks
during calculation operations.

What is Dynamic calculator cache?

The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks
needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a
query).

What are the minimum, default, and optimal sizes of these memory caches?

Index cache: Minimum 1024 KB (1,048,576 bytes). Default: buffered I/O 1024 KB (1,048,576 bytes);
direct I/O 10240 KB (10,485,760 bytes). Optimal: the combined size of all essn.ind files if possible;
otherwise as large as possible. Do not set this cache size higher than the total index size, as no
performance improvement results.

Data file cache: Minimum: direct I/O 10240 KB (10,485,760 bytes). Default: direct I/O 32768 KB
(33,554,432 bytes). Optimal: the combined size of all essn.pag files if possible; otherwise as large
as possible. This cache setting is not used when Essbase is set to use buffered I/O.

Data cache: Minimum 3072 KB (3,145,728 bytes). Default 3072 KB (3,145,728 bytes). Optimal:
0.125 times the value of the data file cache size.

Calculator cache: Minimum 4 bytes; maximum 200,000,000 bytes. Default 200,000 bytes. Optimal:
the best size depends on the number and density of the sparse dimensions in the outline and on the
amount of memory the system has available.
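
Cache sizes can be set per database from Administration Services or MaxL. The statements below are only a sketch: the database name (Sample.Basic) and the sizes are placeholders, and the exact grammar should be confirmed against the MaxL reference for your release.

   alter database Sample.Basic set index_cache_size 256m;
   alter database Sample.Basic set data_cache_size 512m;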

What is the structure of currency applications?

In a business application requiring currency conversion, the main database is divided into at
least two slices. One slice handles input of the local data, and another slice holds a copy of the
input data converted to a common currency.

Essbase holds the exchange rates required for currency conversion in a separate currency
database. The currency database outline, which is automatically generated by Essbase from the
main database after you assign the necessary tags, typically maps a given conversion ratio onto a
section of the main database. After the currency database is generated, it can be edited just like
any other Essbase database.
What are the three dimensions that should be present in the main database of a currency
application?

The main database outline can contain from 3 to n dimensions. At a minimum, the main database
must contain the following dimensions:

· A dimension tagged as time.

· A dimension tagged as accounts.

· A market-related dimension tagged as country.

What are the dimensions that should be present in currency database of currency
application?

A currency database always consists of the following three dimensions, with an optional fourth
dimension:

· A dimension tagged as time, which is typically the same as the dimension tagged as time in
the main database.

· A dimension tagged as country, which contains the names of currencies relevant to the
markets (or countries) defined in the main database.

· A dimension tagged as accounts, which enables the application of various rates to members
of the dimension tagged as accounts in the main database.

· Optionally, a dimension tagged as currency type, which enables different scenarios for
currency conversion.

What are the conversion methods supported by Essbase for currency applications?

Different currency applications have different conversion requirements. Essbase supports two
conversion methods:

· Overwriting local values with converted values.

· Keeping local and converted values.


Either of these two methods may require a currency conversion to be applied at report time.
Report time conversion enables analysis of various exchange rate scenarios without actually
storing data in the database.

What is the process to build a currency conversion application and perform conversions?

To build a currency conversion application and perform conversions, use the following process:

1. Create or open the main database outline.
2. Prepare the main database outline for currency conversion.
3. Generate the currency database outline.
4. Link the main and currency databases.
5. Convert currency values.
6. Track currency conversions.
7. If necessary, troubleshoot currency conversion.

What is CCONV?

After you create a currency conversion application, you convert data values from a local currency
to a common, converted currency by using the CCONV command in calculation scripts.
Ex: CCONV USD; CALC ALL;
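
As a slightly fuller sketch, the conversion is often limited to one scenario before aggregating. The scenario name "Actual" below is a placeholder, and the CCONV argument must match a currency member defined in the currency database:

   FIX ("Actual")
      CCONV "USD";
   ENDFIX
   CALC ALL;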

Can we convert the converted currency back into its local currency?

You can convert the data values back to the original, local currencies by using the
CCONV TOLOCALRATE command.

When you convert currencies using the CCONV command, are the resulting data blocks marked as
dirty or clean?

When you convert currencies using the CCONV command, the resulting data blocks are marked as
dirty for the purposes of Intelligent Calculation. Thus, Essbase recalculates all converted blocks
when you recalculate the database.

What is CCTRACK?

You can use the CCTRACK setting in the essbase.cfg file to control whether Essbase tracks the
currency partitions that have been converted and the exchange rates that have been used for the
conversions. By default, CCTRACK is turned on.

What are the reasons to turn off CCTRACK?

For increased efficiency when converting currency data between currency partitions, you may want
to turn off CCTRACK. For example, you load data for the current month into the local partition,
use the DATACOPY command to copy the entire currency partition that contains the updated data,
and then run the conversion on the currency partition.

How can you turn off CCTRACK?

You can turn off CCTRACK in three ways:

· Use the SET CCTRACKCALC ON|OFF command in a calculation script to turn off CCTRACK temporarily.
· Use the CLEARCCTRACK calculation command to clear the internal exchange rate tables created by CCTRACK.
· Set CCTRACK to FALSE in the essbase.cfg file.
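
For reference, the corresponding configuration entry is a single line in essbase.cfg; essbase.cfg is read at server startup, so a restart is typically needed for the change to take effect:

   CCTRACK FALSE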

What is an LRO (linked reporting object)?

An LRO is an artifact associated with a specific data cell in an Essbase database. LROs can
enhance data analysis capabilities by providing additional information on a cell.
An LRO can be any of the following:

· A paragraph of descriptive text (a "cell note")

· A separate file that contains text, audio, video, or graphics

· A URL for a Web site

· A link to data in another Essbase database

How do you create LRO's?

Users create linked objects through Essbase Spreadsheet Add-in for Excel by selecting a data cell
and choosing a menu item. There is no limit to the number of objects you can link to a cell. The
objects are stored on the Essbase Server where they are available to any user with the appropriate
access permissions. Users retrieve and edit the objects through the Essbase Spreadsheet Add-in
for Excel Linked Objects Browser feature, enabling them to view objects linked to the selected
cell.

Does adding or removing links to a cell affect the cell contents?

No. LROs are linked to data cells, not to the data contained in the cells. The link is based on a
specific member combination in the database.

Give a few examples of LRO's?

Ex1: A sales manager may attach cell notes to recently updated budget items.
Ex2: A finance manager might link a spreadsheet containing supporting data for this quarter's results.
Ex3: A product manager might link bitmap images of new products.
Ex4: A sales manager may link the URL of the company's Web site to quickly access the information on the Web.

How does Essbase locate and retrieve linked objects?

Essbase uses the database index to locate and retrieve linked objects. If you clear all data values
from a database, the index is deleted and so are the links to linked objects. If you restructure a
database, the index is preserved and so are the links to linked objects.

Do shared members share LRO's?

Shared members share data values but do not share LROs. This is because LROs are linked to
specific member combinations and shared members do not have identical member combinations.
To link a given object to shared members, link it to each shared member individually.

Can you change the member combination associated with any linked object?

You cannot change the member combination associated with any linked object. To move an
object to another member combination, first delete it, then use Essbase Spreadsheet Add-in for
Excel to re-link the object to the desired member combination.

Why do we need to limit LRO file sizes for storage conservation?

Because Essbase stores linked files in a repository on the server and, by default, the size is
unlimited. Limiting the file size prevents users from taking up too much of the server resources
by storing extremely large objects. You can set the maximum linked file size for each
application. If a user attempts to link a file that is larger than the limit, an error message displays.

The maximum file size setting applies only to linked files and does not affect cell notes or URLs.
The lengths of the cell note, URL string, and LRO descriptions are fixed.

What is partitioning?

A partition is the piece of a database that is shared with another database. An Essbase partitioned
application can span multiple servers, processors, or computers.

What is Essbase Partitioning?

Essbase Partitioning is a collection of features that makes it easy to design and administer
databases that span Essbase applications or servers. Partitioning is licensed separately from
Essbase.

What are the types of Partitions available in Essbase?

There are three types of partitions:

1. Transparent partition:

A form of shared partition that provides the ability to access and manipulate remote data
transparently as though it is part of your local database. The remote data is retrieved from the
data source each time you request it. Any updates made to the data are written back to the data
source and become immediately accessible to both local data target users and transparent data
source users

2. Replicated Partition:

A portion of a database, defined through Partition Manager, used to propagate an update to data
mastered at one site to a copy of data stored at another site. Users can access the data as though it
were part of their local database.
3. Linked Partition:

A shared partition that enables you to use a data cell to link two databases. When a user clicks a
linked cell in a worksheet, Essbase opens a new sheet displaying the dimensions in the linked
database. The user can then drill down those dimensions.

What is the process for designing a partitioned database?

Here is the suggested process for designing a partitioned database.

1. Learn about partitions.

2. Determine whether the database can benefit from partitioning.

3. Identify the data to partition.

4. Decide on the type of partition.

5. Understand the security issues related to partitions.

What are the parts of partition?

Partitions contain the following parts,

· Type of partition: A flag indicating whether the partition is replicated, transparent, or linked.

· Data source information: The server, application, and database name of the data source.

· Data target information: The server, application, and database name of the data target.

· Login and password: The login and password information for the data source and the data
target.

· Shared areas: A definition of one or more areas, or sub cubes, shared between the data source
and the data target.

· Member mapping information: A description of how the members in the data source map to
members in the data target.

· State of the partition: Information about whether the partition is up-to-date and when the
partition was last updated.

What are benefits of partitioning?


Partitioning applications can provide the following benefits:

· Improved scalability, reliability, availability, and performance of databases

· Reduced database sizes

· More efficient use of resources

· Data synchronization across multiple databases.

· Outline synchronization across multiple databases.

· Ability for user navigation between databases with differing dimensionality.

Can you define different types of partitions between the same two databases?

No

Can a single database serve as the data source or data target for multiple partitions?

Yes

What is overlapping partition?

An overlapping partition occurs when similar data from two or more databases serve as the data
source for a single data target in a partition.

Is overlapping partition valid in all the partitions?

An overlapping partition is allowed in linked partitions, but is invalid in replicated and
transparent partitions and generates an error message during validation.

When do you use substitution variables in partitions?

Using substitution variables in partition definitions enables you to base the partition definition on
different members at different times.

Can we use attribute values to partition a database?

Yes, you can use attribute functions for partitioning on attribute values, but you cannot partition
an attribute dimension.

Can we partition an attribute dimension?

No, we cannot partition an attribute dimension.

What is the limitation on version and mode during partition?

Both ends of a transparent, replicated, or linked partition must be on the same release level of
Essbase Server. For example, if the source of a linked partition is on a Release 7.1.2 server, the
target must also be on a Release 7.1.2 server.

In addition, for transparent and replicated (but not linked) partitions, the application mode of both
ends of the partitions must be the same: either Unicode mode or non-Unicode mode.

What are the major differences between ASO and BSO?

· If we have many dimensions (generally more than 10), we go for ASO, which provides simple roll-ups; if we have fewer dimensions, we go for BSO.
· We cannot write back in ASO; we can write back in BSO.
· Most of the dimensions are sparse in ASO; most of the dimensions are dense in BSO.

What is "Enterprise Analytics"? ASO in System 9 is called Enterprise Analytics.

Explain in detail the features of ASO.

· ASO databases are created specifically to deal with the requirements of very large, sparse data sets with a high number of dimensions and potentially millions of members.
· ASO databases do not have indexes or data blocks.
· ASO databases do not use calculation scripts, because calculations are not complex.
· ASO uses a new kind of storage mechanism that allows improved calculation times, from 10 to 100 times faster than BSO.
· ASO can store up to 2^52 dimension-level combinations.
· The front-end tools usually do not care whether the database is ASO or BSO; even MaxL sees only minor differences.
· We can have attribute dimensions in ASO.
· In ASO there is no concept of dense and sparse dimensions.
· We do not have two-pass logic or built-in time balance functionality (time balance functionality is present from version 9.3 onwards).
· Member formulas are not supported in stored hierarchies.
· Only the non-consolidation (~) and addition (+) operators are supported in stored hierarchies.
· We cannot create more than one database in an ASO application.
· ASO does not utilize procedural calculation scripts.
· ASO formulas are written in MDX syntax.
· ASO has an Accounts dimension, but it is completely different from the Accounts dimension of BSO.
· ASO is read-only. You cannot write to ASO databases, but there is a workaround using transparent partitions and pointing to an attached BSO database for those duties.
· You can load data to level zero members only.
· The database must restructure after any members in the standard dimensions are added, deleted, or moved. In fact, most actions on an ASO outline cause either a loss of data or a restructure.
How do you differentiate ASO applications?

You can easily differentiate the ASO database in the Administrative Services Console by the red
star beside the application name.

How do you create an ASO application?

ASO has two types of hierarchies: stored and dynamic. A dimension can contain both types of
hierarchies (if you enable multiple hierarchies). Other properties that need to be set for dimensions
and members include:

· Dimension type
· Data storage (store, never share, label only)
· Member solve order
· Alias

You can add dimensions using the visual editor or rules files.

Unlike block storage, ASO does not allow you to preview outline changes, so if you are unsure of
the build file, make a backup of your outline before running the new build rule. For ASO databases,
after the data values are loaded into the level 0 cells of an outline, the database requires no separate
calculation step. Retrieval and analysis from an ASO database work just as they do with a BSO database.

How do you create an ASO database using the ASO Outline Conversion Wizard?

You can also create an ASO database using the ASO Outline Conversion Wizard, which converts an
existing BSO database to an ASO database. This is advantageous because we do not need to create
the ASO database from scratch. However, we do need to perform some re-engineering of
dimensions and hierarchies.

How do you create an ASO application in the automated way?

The final way of creating an ASO application is by using the "create application", "create database",
and "create outline" statements in MaxL. Typically this method is used when you are running the
MaxL commands as part of a batch job.
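
For illustration, a minimal MaxL sketch of the first two statements is shown below; the application and database names are placeholders, the outline itself would normally be built afterwards with a rules file or the conversion wizard, and the exact grammar should be checked against the MaxL reference for your release:

   create application MyAsoApp using aggregate_storage;
   create database MyAsoApp.MyAsoDb;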

** Unicode is supported for BSO databases only.

** Data Mining is not supported by ASO databases.

** MDX is the only mechanism for defining member calculations in ASO databases.

Unicode applications use UTF-8 encoding form to interpret and store character text, providing
support for multiple character sets.

To set up a Unicode application

1. Set up a computer for Unicode support by doing one of the following:

· Install an operating system that supports UTF-8 encoding

· Install a Unicode editor

2. Set the Essbase server to Unicode Mode via Administrative Services or MaxL.

3. Check the Unicode box when creating a new Unicode -mode application.

4. You can also migrate from non-Unicode applications to Unicode applications (but not the
other way round).

Report scripts are dated, but they can still be helpful for extracting subsets of data from Essbase
for online backups or for feeding other systems.

The Wizards tab of the Administrative Services Console menu has the following components:

1. Migration

2. Aggregate Storage Outline Conversion

3. Aggregate Storage Partition

4. User Setup

5. Data Mining Wizard
