Hyperion Interview Questions
You can configure a database to optimize calculation performance. The best configuration for the site depends on the nature and size of the database. Databases that are likely to become fragmented include:
· Databases that contain a significant number of Dynamic Calc and Store members
· Databases that use an isolation level of uncommitted access with commit block set to zero
You can measure fragmentation using the average clustering ratio or the average fragmentation quotient. Any quotient above the high end of the range indicates that reducing fragmentation may improve performance.
The average clustering ratio database statistic indicates the fragmentation level of the data (.pag)
files. The maximum value, 1, indicates no fragmentation.
· To prevent fragmentation, optimize data loads by sorting load records on sparse dimension members; grouping records by sparse member combination means each block is written fewer times.
· To remove fragmentation, export the database, delete all data in the database with CLEARDATA, and reload the export file.
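As a rough MaxL sketch of that export/clear/reload cycle (Sample.Basic and the file name are placeholders, and the exact grammar may vary slightly by release):
export database Sample.Basic all data to data_file 'defragExport.txt';
alter database Sample.Basic reset data;
import database Sample.Basic data from data_file 'defragExport.txt' on error abort;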
As your business changes, you change the Essbase database outline to capture new product lines,
provide information on new scenarios, reflect new time periods, etc. Some changes to a database
outline affect the data storage arrangement, forcing Essbase to restructure the database.
• Implicit restructures:
   – Dense restructure
   – Sparse restructure
   – Outline-only restructure
• Explicit restructures
Intelligent Calculation, name changes, and formula changes affect database restructuring:
· If you use Intelligent Calculation in the database, all restructured blocks are marked as dirty
whenever data blocks are restructured. Marking the blocks as dirty forces the next default
Intelligent Calculation to be a full calculation.
· If you change a name or a formula, Essbase does not mark the affected blocks as dirty.
Therefore, you must use a method other than full calculation to recalculate the member or the
database.
When Essbase restructures both the data blocks and the index, it uses the files described below:
dbname.esm: Essbase kernel file that contains control information used for database recovery
dbname.ind: Free-fragment file for data and index free fragments
dbname.otl: Outline file, which defines all metadata for a database and how data is stored
· If you change a dimension frequently, make it sparse.
· Use incremental restructuring to control when Essbase performs a required database restructuring.
· Select options that reduce the amount of restructuring required when you save a modified outline.
The types of restructure, and the files they affect, are:
· Outline only (no index or data files)
· Sparse (index files only)
· Dense (index and data files), as a result of adding, deleting, or moving members and other operations
· Dense (index and data files), as a result of changing a dense dimension to sparse or changing a sparse dimension to dense
Essbase initiates an implicit restructure of the database files after an outline is changed using
Outline Editor or Dimension Build. The type of restructure that is performed depends on the type
of changes made to the outline
When you manually initiate a database restructure, you perform an explicit restructure. An
explicit restructure forces a full restructure of the database. A full restructure comprises a dense
restructure plus removal of empty blocks.
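In MaxL, an explicit restructure can be forced with a statement like the following (Sample.Basic is a placeholder database):
alter database Sample.Basic force restructure;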
If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in
the data files and creates new data files. When Essbase restructures the data blocks, it regenerates
the index automatically so that index entries point to the new data blocks.
Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense
restructure you need to recalculate the database.
If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index
and creates new index files.
Restructuring the index is relatively fast; the amount of time required depends on the size of the
index.
Member name changes, creation of aliases, and dynamic calculation formula changes are
examples of changes that affect only the database outline.
When Essbase restructures the data blocks, it follows these steps:
1. Creates temporary files that are copies of the .ind, .pag, .otl, .esm, and .tct files. Each temporary file substitutes either N or U for the last character of the file extension, so the temporary file names are essxxxxx.inn, essxxxxx.pan, dbname.otn, dbname.esn, and dbname.tcu.
2. Reads the blocks from the database files copied in step 1, restructures the blocks in memory, and then stores them in the new temporary files. This step takes the most time.
3. Removes the database files copied in step 1, including the .ind, .pag, .otl, .esm, and .tct files.
4. Renames the temporary files to the correct file names: .ind, .pag, .otl, .esm, and .tct.
When Essbase does a sparse restructure (restructures just the index), it uses the following files:
· essxxxxx.ind
· dbname.otl
· dbname.esm
Essbase allows you to choose whether data blocks that are stored on disk are compressed, as well
as which compression scheme to use. When data compression is enabled, Essbase compresses
data blocks when it writes them out to disk. Essbase fully expands the compressed data blocks,
including empty cells, when the blocks are swapped into the data cache.
Generally, data compression optimizes storage use. You can check compression efficiency by
checking the compression ratio statistic.
1. Bitmap compression, the default. Essbase stores only non-missing values and
uses a bitmapping scheme. A bitmap uses one bit for each cell in the data block, whether the cell
value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to
store every non-missing cell. In most cases, bitmap compression conserves disk space more
efficiently. However, much depends on the configuration of the data.
2. Run-length encoding (RLE). Essbase compresses repetitive, consecutive values --any value
that repeats three or more times consecutively, including zeros and #MISSING values. Each data
value that is repeated three or more times uses 8 bytes plus a 16 byte repetition factor.
3. zlib compression. Essbase builds a data dictionary based on the actual data being compressed.
This method is used in packages like PNG, Zip, and gzip. Generally, the more dense or
heterogeneous the data is, the better zlib will compress it in comparison to bitmap or RLE
compression.
4. Index Value Pair compression. Essbase applies this compression if the block density is less than 3%. Index Value Pair addresses compression on databases with larger block sizes, where the blocks are highly sparse; it is not a user-selectable setting, and Essbase applies it automatically when the blocks qualify.
5. No compression. Essbase does not compress data blocks when they are written to disk.
You may want to disable data compression if blocks have very high density (90% or greater) and have few consecutive, repeating data values. Under these conditions, enabling compression consumes resources unnecessarily. Also, if disk space and memory are not an issue for your application, you may skip compression, since compressing and expanding blocks can become a drain on the processor.
Essbase issues write (exclusive) locks for blocks that are created, updated, or deleted, and issues
read (shared) locks for blocks that should be accessed but not modified. By issuing the
appropriate locks, Essbase ensures that data changed by one operation cannot be corrupted by a
concurrent update.
What is a transaction?
When a database is in read/write mode, Essbase considers every update request to the server
(such as a data load, a calculation, or a statement in a calculation script) as a transaction.
The transaction control file contains an entry for each transaction and tracks the current state of
each transaction (Active, Committed, or Aborted).
What is isolation level and what are the types of isolation levels?
Isolation levels determine how Essbase commits data to disk. Essbase offers two isolation levels
for transactions --committed access and uncommitted access (the default).
Committed access: Essbase holds read/write locks on all data blocks required by the transaction until the transaction completes and the data is committed; wait times for locked blocks can be set to indefinite, to a specified number of seconds, or to no wait.
Uncommitted access (the default): Essbase holds locks only briefly and commits data at synchronization intervals that you control with the commit blocks and commit rows settings.
Setting the isolation level to committed access may increase memory and time requirements for database restructure.
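As a hedged MaxL sketch (Sample.Basic is a placeholder), the isolation level can be toggled with:
alter database Sample.Basic enable committed_mode;   /* committed access */
alter database Sample.Basic disable committed_mode;  /* uncommitted access, the default */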
The index cache is a buffer in memory that holds index pages. How many index pages are in
memory at one time depends upon the amount of memory allocated to the cache.
The data file cache is a buffer in memory that holds compressed data files (.pag files). Essbase
allocates memory to the data file cache during data load, calculation, and retrieval operations, as
needed. The data file cache is used only when direct I/O is in effect.
The data cache is a buffer in memory that holds uncompressed data blocks. Essbase allocates
memory to the data cache during data load, calculation, and retrieval operations, as needed.
The calculator cache is a buffer in memory that Essbase uses to create and track data blocks
during calculation operations.
The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks
needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a
query).
What are the memory caches used by Essbase to coordinate memory usage?
Index cache: Min: 1024 KB (1048576 bytes). Default: buffered I/O 1024 KB (1048576 bytes); direct I/O 10240 KB (10485760 bytes). Optimal: the combined size of all essn.ind files, if possible; otherwise as large as possible. Do not set this cache size higher than the total index size, as no performance improvement results.
Data file cache: Min: direct I/O 10240 KB (10485760 bytes). Default: direct I/O 32768 KB (33554432 bytes). Optimal: the combined size of all essn.pag files, if possible; otherwise as large as possible. This cache setting is not used if Essbase is set to use buffered I/O.
Data cache: Min: 3072 KB (3145728 bytes). Default: 3072 KB (3145728 bytes). Optimal: 0.125 * the value of the data file cache size.
Calculator cache: Min: 4 bytes. Max: 200,000,000 bytes. Default: 200,000 bytes. Optimal: the best size depends on the number and density of the sparse dimensions in your outline and on the amount of memory available to the system.
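These caches can also be sized from MaxL; a hedged sketch with Sample.Basic and the byte values as placeholders:
alter database Sample.Basic set index_cache_size 10485760;
alter database Sample.Basic set data_file_cache_size 33554432;
alter database Sample.Basic set data_cache_size 3145728;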
In a business application requiring currency conversion, the main database is divided into at
least two slices. One slice handles input of the local data, and another slice holds a copy of the
input data converted to a common currency.
Essbase holds the exchange rates required for currency conversion in a separate currency
database. The currency database outline, which is automatically generated by Essbase from the
main database after you assign the necessary tags, typically maps a given conversion ratio onto a
section of the main database. After the currency database is generated, it can be edited just like
any other Essbase database.
What are the three dimensions that should be present in the main database of a currency
application?
The main database outline can contain from 3 to n dimensions. At a minimum, the main database must contain the following dimensions:
· A dimension tagged as time
· A dimension tagged as accounts
· A market-related dimension tagged as country
What are the dimensions that should be present in currency database of currency
application?
A currency database always consists of the following three dimensions, with an optional fourth
dimension:
· A dimension tagged as time, which is typically the same as the dimension tagged as time in
the main database.
· A dimension tagged as country, which contains the names of currencies relevant to the
markets (or countries) defined in the main database.
· A dimension tagged as accounts, which enables the application of various rates to members
of the dimension tagged as accounts in the main database.
· Optionally, a currency type dimension, which enables different scenarios for currency conversion.
What are the conversion methods supported by Essbase for currency applications?
Different currency applications have different conversion requirements. Essbase supports two conversion methods:
· Overwriting local values with converted values
· Keeping both local and converted values in the database
Either of these two methods may require a currency conversion to be applied at report time.
Report time conversion enables analysis of various exchange rate scenarios without actually
storing data in the database.
What is the process to build a currency conversion application and perform conversions?
To build a currency conversion application and perform conversions, use the following process:
1. Create or open the main database outline. 2. Prepare the main database outline for currency
conversion. 3. Generate the currency database outline. 4. Link the main and currency
databases. 5. Convert currency values. 6. Track currency conversions. 7. If necessary,
troubleshoot currency conversion.
What is CCONV? After you create a currency conversion application, you convert data values
from a local currency to a common, converted currency by using the CCONV command in
calculation scripts Ex: CCONV USD;CALC ALL;
Can we convert the converted currency back into its local currency? You can convert the
data values back to the original, local currencies by using the CCONV TOLOCALRATE
command.
When you convert currencies using the CCONV command, are the resulting data blocks
marked as dirty or clean? When you convert currencies using the CCONV command, the
resulting data blocks are marked as dirty for the purposes of Intelligent Calculation. Thus,
Essbase recalculates all converted blocks when you recalculate the database.
What is CCTRACK? You can use the CCTRACK setting in the essbase.cfg file to control
whether Essbase tracks the currency partitions that have been converted and the exchange rates
that have been used for the conversions. By default CCTRACK is turned on.
What are the reasons to turn off CCTRACK? For increased efficiency when converting
currency data between currency partitions, you may want to turn off CCTRACK. For example,
you load data for the current month into the local partition, use the DATACOPY command to
copy the entire currency partition that contains the updated data, and then run the conversion on
the currency partition.
How can you turn off CCTRACK? You can turn off CCTRACK in three ways:
· Set CCTRACK to FALSE in the essbase.cfg file.
· Use the SET CCTRACKCALC ON|OFF command in a calculation script to turn off CCTRACK temporarily.
· Use the CLEARCCTRACK calculation command to clear the internal exchange rate tables created by CCTRACK.
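A minimal calculation-script sketch combining the commands above (USD is assumed to be the common currency member):
SET CCTRACKCALC OFF;
CCONV USD;
CALC ALL;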
What is LRO (Linked reporting objects)? An LRO is an artifact associated with a specific
data cell in an Essbase database. LROs can enhance data analysis capabilities by providing
additional information on a cell.
Users create linked objects through Essbase Spreadsheet Add-in for Excel by selecting a data cell
and choosing a menu item. There is no limit to the number of objects you can link to a cell. The
objects are stored on the Essbase Server where they are available to any user with the appropriate
access permissions. Users retrieve and edit the objects through the Essbase Spreadsheet Add-in
for Excel Linked Objects Browser feature, enabling them to view objects linked to the selected
cell.
Does adding or removing links to a cell affect the cell contents?
No. LROs are linked to data cells, not to the data contained in the cells. The link is based on a specific member combination in the database.
Give a few examples of LRO's?
Ex1: A sales manager may attach cell notes to recently updated budget items. Ex2: A finance
manager might link a spreadsheet containing supporting data for this quarter's results. Ex3: A
product manager might link bitmap images of new products. Ex4: A sales manager may link the
URL of a company's Web site to quickly access the info on the Web
Essbase uses the database index to locate and retrieve linked objects. If you clear all data values
from a database, the index is deleted and so are the links to linked objects. If you restructure a
database, the index is preserved and so are the links to linked objects.
Shared members share data values but do not share LROs. This is because LROs are linked to
specific member combinations and shared members do not have identical member combinations.
To link a given object to shared members, link it to each shared member individually.
Can you change the member combination associated with any linked object?
You cannot change the member combination associated with any linked object. To move an
object to another member combination, first delete it, then use Essbase Spreadsheet Addin for
Excel to re-link the object to the desired member combination.
Why do we need to limit the LRO file sizes for storage conservation?
Because Essbase stores linked files in a repository on the server and, by default, the size is
unlimited. Limiting the file size prevents users from taking up too much of the server resources
by storing extremely large objects. You can set the maximum linked file size for each
application. If a user attempts to link a file that is larger than the limit, an error message displays.
The maximum file size setting applies only to linked files and does not affect cell notes or URLs.
The lengths of the cell note, URL string, and LRO descriptions are fixed.
What is partitioning?
A partition is the piece of a database that is shared with another database. An Essbase partitioned
application can span multiple servers, processors, or computers.
1. Transparent partition:
A form of shared partition that provides the ability to access and manipulate remote data
transparently as though it is part of your local database. The remote data is retrieved from the
data source each time you request it. Any updates made to the data are written back to the data
source and become immediately accessible to both local data target users and transparent data
source users
2. Replicated Partition:
A portion of a database, defined through Partition Manager, used to propagate an update to data
mastered at one site to a copy of data stored at another site. Users can access the data as though it
were part of their local database.
3. Linked Partition:
A shared partition that enables you to use a data cell to link two databases. When a user clicks a
linked cell in a worksheet, Essbase opens a new sheet displaying the dimensions in the linked
database. The user can then drill down those dimensions.
A partition definition includes the following information:
· Data source information: The server, application, and database name of the data source.
· Data target information: The server, application, and database name of the
data target.
· Login and password: The login and password information for the data source and the data
target.
· Shared areas: A definition of one or more areas, or sub cubes, shared between the data source
and the data target.
· Member mapping information: A description of how the members in the data source map to
members in the data target.
· State of the partition: Information about whether the partition is up-to-date and when the
partition was last updated.
Can you define different types of partitions between the same two databases?
No
Can a single database serve as the data source or data target for multiple partitions?
Yes
What is overlapping partition?
An overlapping partition occurs when similar data from two or more databases serve as the data
source for a single data target in a partition.
Can you use substitution variables in partition definitions?
Yes. Using substitution variables in partition definitions enables you to base the partition definition on different members at different times.
Can you use attribute values in partitions?
Yes. You can use attribute functions for partitioning on attribute values, but you cannot partition an attribute dimension.
Both ends of a transparent, replicated, or linked partition must be on the same release level of Essbase Server. For example, if the source of a linked partition is on a Release 7.1.2 server, the target must also be on a Release 7.1.2 server.
In addition, for transparent and replicated (but not linked) partitions, the application
mode of both ends of the partitions must be the same--either Unicode mode or non-
Unicode mode.
If we have many dimensions (generally more than 10) that simply roll up, we go for ASO; if we have fewer dimensions, we go for BSO. We cannot write back in ASO, but we can write back in BSO. In ASO most of the dimensions are sparse, whereas in BSO most of the dimensions are dense.
· ASO databases are created specifically to deal with the requirements of very large sparse data sets with a high number of dimensions and potentially millions of members.
· ASO databases do not have indexes or data blocks.
· ASO databases do not use calculation scripts, because the calculations are not complex.
· ASO uses a new kind of storage mechanism that allows calculation times 10 to 100 times faster than BSO.
· ASO can store up to 2^52 dimension-level combinations.
· The front-end tools usually do not care whether the database is ASO or BSO; even MaxL sees only minor differences.
· We can have attribute dimensions in ASO.
· In ASO there is no concept of dense and sparse dimensions.
· We do not have two-pass logic and built-in time balance functionality (time balance functionality is present from version 9.3 onwards).
· Member formulas are not supported in stored hierarchies.
· Only the non-consolidation (~) and addition (+) operators are supported in stored hierarchies.
· We cannot create more than one database per ASO application.
· ASO does not use procedural calculation scripts.
· ASO formulas are written in MDX syntax.
· ASO has an Accounts dimension, but it is completely different from the Accounts dimension of BSO.
· ASO is read-only. You cannot write to ASO databases, but there is a workaround using transparent partitions and pointing to an attached BSO database for those duties.
· You can load data to level zero members only.
· The database must restructure after any members in the standard dimensions are added, deleted, or moved. In fact, most actions on an ASO outline cause either a loss of data or a restructure.
You can easily differentiate the ASO database in the Administrative Services Console by the red
star beside the application name.
How do you create an ASO application? ASO has two types of hierarchies: stored and dynamic. A dimension can contain both types of hierarchies (if you enable multiple hierarchies). Other properties that need to be set for dimensions and members include:
· Dimension type
· Data storage (store, never share, label only)
· Member solve order
· Alias
You can add dimensions using the visual editor or the rules files.
Unlike block storage, ASO does not allow you to preview outline changes. If you are unsure of the build file, make a backup of your outline before running the new build rule. For ASO databases, after the data values are loaded into the level 0 cells of an outline, the database requires no separate calculation step. Retrieving from an ASO database works just as it does for a BSO database.
How do you create an ASO database using the ASO Outline Conversion Wizard? You can also create an ASO database using the ASO Outline Conversion Wizard, which converts an existing BSO database to an ASO database. This is advantageous because we do not need to create the ASO database from scratch; however, we may still need to reengineer dimensions and hierarchies.
How do you create ASO in the automated way? The final way of creating an ASO application is by using the "Create Application", "Create Database", and "Create Outline" commands in MaxL. Typically, this method is used when you run the MaxL script as part of a batch job.
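A hedged MaxL sketch of that automated approach (the application and database names are placeholders; the outline itself is still built separately, for example with rules files):
create application ASOSample using aggregate_storage;
create database ASOSample.MainDB;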
Unicode applications use UTF-8 encoding form to interpret and store character text, providing
support for multiple character sets.
2. Set the Essbase server to Unicode Mode via Administrative Services or MaxL.
3. Check the Unicode box when creating a new Unicode -mode application.
4. You can also migrate from non-Unicode applications to Unicode applications (but not the
other way round).
Report Scripts are outdated but still can be helpful when extracting subsets of data from Essbase
for online backups or feeding into other systems.
The Wizards Tab of Administrative Services Console menu has the following
components
1. Migration
4. User Setup
Thursday, January 14, 2010
Smart View retrieve Error
One of my business users received an error when she was running a Smart View query which had a lot of attributes in her retrieve sheet.
Even though I increased NetDelay and NetRetryCount in essbase.cfg, that did not resolve the problem. She was using IE7 on XP SP3.
By design, Internet Explorer imposes a time-out limit for the server to return data.
The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for
versions 5.x and 6. Also, the time-out limit is 30 seconds for Windows Internet
Explorer 7. As a result, Internet Explorer does not wait endlessly for the server to
come back with data when the server has a problem.
Apparently the query which the user was running takes around 45 seconds. The IE7
time out setting was causing it to error out.
We need to add three new registry keys to resolve the issue.
If the above settings do not make any difference and the Smart View version is 9.3.1.2, you may need to upgrade to Smart View 9.3.1.4 or a higher version.
Here are the Screen shots for changing the registry settings.
Right click and select DWORD (32-bit) value
Rename to KeepAliveTimeout
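The same change can be scripted instead of edited by hand; a hedged batch sketch follows. Only KeepAliveTimeout is named above, so ReceiveTimeout and ServerInfoTimeout, and all three millisecond values, are assumptions to verify for your environment:
Rem Increase IE request time-outs for Smart View (values in milliseconds are assumptions)
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v KeepAliveTimeout /t REG_DWORD /d 180000 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ReceiveTimeout /t REG_DWORD /d 480000 /f
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ServerInfoTimeout /t REG_DWORD /d 180000 /f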
Friday, October 2, 2009
Correct Order of restarting planning
services
If you are a Planning admin and have to restart the Planning services, here is the order you have to follow.
Stop Order
Planning Service
RMI registry
AAS
Start Order
AAS
RMI registry
Planning Service
Thursday, September 10, 2009
Hyperion Essbase Overview
Please find some Q&A which provide you with an Essbase overview.
What is Partitioning?
Copying a slice of a large database to work with it locally, or linking from your database directly to other databases.
What is an Outline?
The outline is the structure that defines all elements of the database. It dictates how data is stored in the database and defines the structural relationships between the members. The outline:
· Organizes all data in the database
· Defines consolidations and mathematical relationships between members
· Defines dimension types, aliases, member formulas, etc.
The order in which dimensions are arranged in the outline is as follows: dense dimensions, followed by sparse dimensions, followed by attribute dimensions.
The data block represents all the dense dimension members for its combination of sparse standard dimension members.
A huge index consumes a large amount of memory, and the more index entries there are, the longer Essbase searches to find a specific block.
What is a formula?
Formulas are used to calculate relationships between members in the DB Outline.
What are the members for which Two-pass calculations can be given?
Accounts
Dynamic calc
Dynamic calc and store
Creating an application: MaxL: create application; ESSCMD: CREATEAPP
You cannot use the substitution variables in formulae that are applied to the DB
outline. The value of the substitution variable cannot be more than 256 characters.
Server > substitution variables (Administration Services menu path)
MaxL (for creating/deleting/updating):
alter system
alter application
alter database
ESSCMD:
CREATE VARIABLE
DELETE VARIABLE
UPDATE VARIABLE
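For example, a hedged MaxL sketch for managing a database-level substitution variable (the names and values are placeholders):
alter database Sample.Basic add variable CurrMonth 'Jan';
alter database Sample.Basic set variable CurrMonth 'Feb';
alter database Sample.Basic drop variable CurrMonth;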
Database > location aliases (Administration Services menu path)
Create:
MaxL: create location alias
ESSCMD: CREATELOCATION
Edit/Delete:
MaxL: display location alias, drop location alias
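A hedged MaxL sketch for creating a location alias (all names, the host, and the credentials are placeholders):
create location alias SourcePL from Target.Plan to Source.Main at 'essserver1' as 'admin' identified by 'password';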
Copying a database: Database > Copy (Administration Services); MaxL: create database as; ESSCMD: COPYDB
When you save a modified outline that forces a restructure, you can choose which data to preserve:
· All data
· Level 0 data (when all data in the outline is at level 0)
· Input data (when you load data into non-level-0 members)
· Discard all data (when you expect to reload data, or when the outline has changed so radically that no existing data applies)
What is Metadata?
Metadata is data about data. Metadata is the data contained in the database outline that describes the values within a database.
Ex:
East>New York>cola>sales>10000
Can you add time members that are not tagged as time?
Yes
When do you give an accounts tag to a dimension?
You can tag a dimension as accounts if it contains items that you want to measure
such as profit or inventory.
Time dimension tag says how often you collect and update data. The time dimension
enables several accounts dimension functions such as first and last time balances.
Skip Properties
· None
· Missing
· Zeros
· Missing and zeros
Skip properties, expense reporting, Time Balance properties are applicable only to
the accounts dimension.
Only accounts, dynamic calc, dynamic calc and store members can be given two pass
calculation.
Edit>properties
Data dictionary button
Monday, September 7, 2009
Fragmentation in Essbase (BSO)
• What is Fragmentation?
Fragmentation is unused disk space, created as blocks are updated, deleted, and rewritten. Databases that are likely to become fragmented include:
Databases that contain a significant number of Dynamic Calc and Store members
Databases that use an isolation level of uncommitted access with commit block set to zero
For databases less than 50 MB using the Direct I/O access mode, the fragmentation quotient tends to
be high. A high fragmentation quotient does not necessarily indicate a need to reduce
fragmentation, because the free space is created in 8 MB chunks and all of it might not get used
right away.
To remove fragmentation, perform an export of the database, delete all data in the database with
CLEARDATA, and reload the export file.
Example:
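A hedged way to check the fragmentation statistics from MaxL (Sample.Basic is a placeholder); the statistics returned include the average clustering ratio and the average fragmentation quotient:
query database Sample.Basic get dbstats data_file;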
Wednesday, May 6, 2009
How to Delete Dimension from Hyperion
Planning Application
If you create a custom dimension in planning and wish to delete it, here is how you
might remove it from SQL manually. We recommend that only someone familiar with
relational databases attempt this.
Outlined below are the steps to remove a dimension from the relational store. The directions are for SQL Server, but the table names should be the same in Oracle.
You will need to delete all the children first, then the dimension.
If you have the dimension in forms, you may also need to (after step 5) go into the hsp_formobj_def_mbr table and delete all references to that object ID, and then delete the dimension ID in the hsp_form_layout table.
Now when you open the application in Planning the dimension will be removed.
The next step is to perform a refresh to make sure planning and Essbase are in sync.
Thursday, April 30, 2009
Hyperion Planning Installation Steps
This document demonstrates step by step procedure to install Hyperion Planning
Software. This includes the installation and configuration of Hyperion components and
other relative software.
First step is to download hyperion planning and related software from the following
website
http://edelivery.oracle.com/
After entering the details you will be directed to the page below. Select Hyperion
performance management and BI.
Download the following products:
1. Hyperion Shared Services Release 9.3.1 Microsoft Windows x86 Installer English
(V11229-01)
2. Hyperion Smart View for Office Release 9.3.1.2 Microsoft Windows x86 Installer
( V12736-01)
3. Hyperion Essbase - System 9 Release 9.3.1.3 Server Windows Installer (V14762-01)
4. Hyperion Essbase Administration Services - System 9 Release 9.3.1.3 Windows
Installer-- (V14792-01)
5. Hyperion Planning - System 9 Release 9.3.1.1 All Platforms Installer English--
V11378-01
After downloading the software, extract the setup files from the downloads and start the installation.
Make sure that you don't check Launch configuration utility in the last step in the first run.
As seen above, complete the installation of all 5 products without launching the configuration utility.
www.EssbaseInfo.com
Monday, March 30, 2009
Common Mistakes to avoid in Hyperion
Resume
Hi All,
Good evening.
We were recruiting for a Hyperion developer last year and got many resumes in response to our advertisement.
Here are some things which I observed while reviewing those resumes.
A common one: why would your prospective employer want to know what you did in ColdFusion coding when he advertised for a Hyperion developer?
Hope you will take care of the above points when building your resume.
Friday, March 27, 2009
Data transfer between two hyperion
Essbase Cubes
Data Transfer from one cube to another.
There are different ways to transfer data from one cube to other.
1. Hyperion Application Link (HAL)
2.Export data using report script and importing data into new cube
3.Jexport
4.XREF
Today we will learn about XREF calc script, which is used by most who want to
transfer data between cubes.
please find sample xref calc script below:
In this example I am transferring payroll, social, bonus, and headcount data from my main P&L (profit and loss) application to a workforce application.
The first step of XREF is to create a location alias of the source application. In this
example my location alias is _LocAliasPL.
You can create location alias using EAS in the following way:
open the application
right click database
Click Edit
navigate to location alias.
Click to create location alias and give the details of the source cube.
ENDFIX;
/*END MAIN SCRIPT*/
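The actual script body is not reproduced in this post, so the following is only a hedged sketch of an @XREF pull through the _LocAliasPL alias; the member names and the substitution variable are assumptions, not the original script:
/* Pull Payroll for the current year from the source P&L cube via the location alias */
SET UPDATECALC OFF;
FIX (&CurYear, "Working")
   "Payroll" = @XREF(_LocAliasPL, "Payroll", "Final");
ENDFIX;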
Monday, March 16, 2009
Sample Calculation Script
/*Information */
/*
Script Name : CopyAct2Fcst
Created by : Dornakal, Hyperion Consultant, March 16, 2009
Purpose : Copy Actuals to Current Forecast
Directions : Set substitution variables CurFcst, Actmnth,CY
Versions : Essbase 9.3.1
Assumptions : The Accounts, Time dimensions are Dense and the rest of dimensions
are Sparse
Comments : This script copies actual data from actual scenario to forecast scenario;
This rule should be run before every forecast.
*/
/*House Keeping*/
/* Main Rollup */
/* Copies data for all existing employees of all Expense Accounts from Actual scenario
, final version to Current Forecast and Working version */
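The script body is not shown above, so the following is only a hedged sketch of such a copy based on the header comments; the substitution variables (&CY, &CurFcst) come from the directions above, and the member names are assumptions:
/* Hedged sketch: copy Actual, Final data into the current forecast, Working version */
SET UPDATECALC OFF;
FIX (&CY, @IDESCENDANTS("Employees"), @IDESCENDANTS("Expense Accounts"))
   DATACOPY "Actual"->"Final" TO &CurFcst->"Working";
ENDFIX;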
Friday, March 13, 2009
What is Intelligent Calculation? Why should
we care?
Developing calc scripts Series
When you perform a full database calculation, Essbase marks which blocks have been
calculated. If you then load a subset of data, you can calculate only the changed data
blocks and their ancestors. This selective calculation process is intelligent
calculation.
By default, intelligent calculation is turned on. You can change the default setting with the UPDATECALC setting in the essbase.cfg file or on a script-by-script basis with the SET UPDATECALC OFF command.
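As a minimal sketch, the essbase.cfg setting is UPDATECALC TRUE | FALSE, and the script-level equivalent is:
SET UPDATECALC OFF;   /* turn intelligent calculation off for this script only */
CALC ALL;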
When intelligent calculation is active, during calculation, Essbase looks for only dirty
blocks.
Exceptions:
Even when the intelligent calculation is enabled, for CALC DIM statements that do not
include all dimensions, Essbase does not use intelligent calculation process. Rather,
Essbase calculates all relevant data blocks, regardless of clean or dirty status, and all
data blocks retain their status, dirty or clean.
Example:
When you execute a calculation script that includes the SET UPDATESTATUS AFTER
command, data blocks that are marked clean are not calculated and data blocks that
are marked dirty are calculated and marked clean.
Intelligent calculation can provide significant performance benefits in certain situations, but it requires vigilant maintenance of the clean and dirty status of data blocks to avoid skipping the wrong blocks during calculation.
During quarter-close periods, allocation rates and adjusting entries may be updated multiple times. You can use intelligent calculation to view updated results without recalculating the entire database.
In budgeting or forecasting application, typically, many users update units and drivers
on a regular basis. In such applications, for users to analyze the impact of their
updates, a short calculation time is imperative.
Implementation Process for Essbase
Database
Hi Guys,
good morning.
please find implementation process for Essbase database.
Database Creation:
Monday, March 9, 2009
Automate HAL Load
Hi All,
good morning.
Today we will see how we can automate a HAL job (flow diagram).
If you are loading a large number of members, HAL chokes if you don't automate it (.exe etc).
Here are the steps one should follow to automate a HAL job.
Step 1:
Step 2: Drag and drop the Windows Executable from the Palette.
Step 3: Open the Windows executable and go to the Runtime Target tab. Give the location of the flow diagram, then browse to the location where you want to store the executable file.
Step 4: Check
Step 5: Don't check anything in the Flow Diagrams tab as shown below.
Step 6: Don't check anything in the Management tab as shown below.
Step 7: In the Logging tab, give the location of your log file.
Step 8: Don't check anything in the Profile tab as shown below.
Step 9: Check
Step 11: You will see your execution file in the list as shown below.
Step 12: You can give the location of the executable file in your batch file and run it using a scheduler like Windows Scheduler.
Monday, February 23, 2009
Migration of DataForms in Hyperion
Planning
The goal of this post is to explain how to migrate data forms from one environment to
another in planning applications. The automation of this process helps to reduce time
and leaves no scope for human error.
The FormDefUtil.cmd utility can be used to move data form definitions from one Planning application to another. Data form definitions can be exported to or imported from an XML file. This utility is extremely useful when data forms need to be moved from a development environment to a production environment. The utility uses a command-line interface and is installed in the bin directory (D:\Hyperion\Planning\bin). It runs only on Windows-based systems (even if the application servers are on UNIX), and only administrators can run it. The utility is located on the Planning server as shown below.
1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:
This completes migration of data forms from one environment to another in planning
applications.
Automation of Dimension extraction
Hi Guys,
good morning.
Today we will learn how to automate dimension extraction from Essbase.
There is a very good utility called Outline Extractor, available at http://www.appliedolap.com
Step # 1
As a first step you should install Outline extractor.
Step # 2
Build a batch file providing details of your application and database and dimension
you want to extract as follows:
You can schedule the above batch file to run at a fixed time using various schedulers like Windows Scheduler.
Thursday, February 19, 2009
Sample Report Script
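The script itself is not included in this post, so the following is only a hedged stand-in: a minimal report script against a Sample.Basic-style outline, with all member names assumed:
<PAGE (Measures, Market)
Sales
East
<COLUMN (Year)
<ICHILDREN Year
<ROW (Product)
<IDESCENDANTS Product
!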
Tuesday, February 17, 2009
Import Security in Hyperion Planning
applications
Hi Guys,
good morning.
A lot of times there will be a huge number of requests to grant or remove security access for users.
For one or two changes it is a simple task: click on the member, assign access, and grant permission. It becomes tiresome when we need to make changes for a large number of members, and it involves a considerable amount of time.
Automating the security changes saves considerable amount of time. The
ImportSecurity utility in planning loads access permissions for users or groups from a
text file into Planning. Importing access permissions using Import security utility
overwrites existing access assignments only for imported members, users, or groups.
All other existing access permissions remain intact. The SL_CLEARALL parameter
clears all existing access permissions.
Step 1: Create a text file, name it Secfile.txt, and save it in the bin directory (D:\Hyperion\Planning\bin). An example of Secfile.txt is shown in the picture below, where Planning_Security_Group is the group name, MemberName is the member name, Write is the access permission, and MEMBER is the relationship.
Step 2: Locate the ImportSecurity utility by navigating to the bin directory.
Step 3: From the Command Prompt, enter this case-sensitive command, one space, and the parameters. Separate each parameter with a comma, and enclose the parameters in double quotation marks:
ImportSecurity.cmd “appname,username,password[delimiter],[RUN_SILENT],[SL_CLEARALL]”
Where:
appname : Name of the Planning application importing access permissions.
RUN_SILENT ( Optional) : Execute the utility silently (the default) or with progress
messages. Specify 0 for messages, or 1 for no messages.
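A hedged usage sketch (the application name and credentials are placeholders; SL_TAB is assumed as the delimiter token and 1 as the RUN_SILENT value):
ImportSecurity.cmd "PlanApp,admin,password,SL_TAB,1"
A matching Secfile.txt line, following the description in Step 1, might look like: Planning_Security_Group,MemberName,Write,MEMBER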
Monday, February 9, 2009
Load dimension members into Planning
Application using HAL from Relational
Database
Hi Guys,
Good afternoon.
Please find the process of loading metadata into a Planning application using HAL (Hyperion Application Link).
In the example below, I am loading the Employee dimension using HAL from a SQL Server table. You can also load it from a flat file (text file or .csv file).
1. Variable
3. ODBC adapter
In the General tab you can give a custom-defined name to each adapter, for example Planning Connector.
In the Methods tab you should specify which dimension you are loading in the drop-down menu. In this example we changed the dimension to Employee, as we are loading metadata related to employees.
The third adapter is the Planning adapter again. Make sure that you change the drop-down to Load, as shown below.
You can specify the name of the adapter, for example Planning Load Adapter.
In the Methods section you should specify the dimension you are loading to. In this example we are using Employee.
You should create a DSN for the relational database on the machine where you are running the HAL job, then connect to the relational database and select the table which has the metadata information.
Once you have all the information, connect the ports as shown below, then save and run to load the metadata.
Friday, January 30, 2009
Hyperion Planning Certification Help
Hi Guys,
Here are some Planning questions which could help you get certified in Hyperion Planning. I intentionally didn't post answers for most of them, as you can find those answers in the Planning administrator manual. Hope this helps.
Wednesday, January 28, 2009
How to delete members in planning using
HAL
Hi
good morning.
Deleting members in Planning is challenging to do manually, as it consumes lots of time, especially if you have more than 5 members to delete. We can use Hyperion Application Link (HAL) to delete precisely the members we don't need.
Here are the steps to follow on how to delete members using HAL
Step:1
prepare a text document or .CSV file of the list of members you want to delete in
Parent, Child format.
Step:2
Open a new flow diagram and add flat file adapter, Planning adapter and Variable
adapter as shown below. When you select a Planning adapter, the names and number of ports are determined by the dimension from which members are being deleted. Most ports reflect the properties and custom attributes of the selected dimension.
Step : 3
Open a Variable adapter and type Delete in port name column and
in the Initial Value column type
Step.4:
Upload the flat file with members to delete as shown below:
After the execution of HAL, you should refresh the Planning application to push the changes to Essbase.
Tuesday, January 27, 2009
Automation of DataLoad
This document is intended to explain how to automate data load.
Step.1:
Create a data load rule in Essbase
Step 2: Create a MaxL script that clears, loads, and aggregates the data (shown below).
Step 3: Create a batch file to execute the MaxL file. We can schedule the batch file using Windows Scheduler.
/*Script Start */
/*Login */
LOGIN 'UserName' 'Password' ON 'Server Name';
/*Clears data */
execute calculation '
Fix (@idescendants("Account"),@idescendants(Entity),&CurrentYear,Version)
CLEARDATA Scenario;
Endfix;'
on ApplicationName.DatabaseName;
/* Imports Data */
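/* The import statement itself is not shown in this post; a hedged sketch of a
   typical load using a server rules file (the file and rule names are assumptions): */
import database ApplicationName.DatabaseName data
from server data_file 'MonthData.txt'
using server rules_file 'LoadRule'
on error write to 'E:\Logs\Dataload.err';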
/*Aggregates Data*/
execute calculation'
SET AGGMISSG ON;
SET UPDATECALC OFF;
Fix (Scenario,Year,Version)
CALC DIM(Product,Entity);
Endfix;'
on ApplicationName.DatabaseName;
LOGOUT;
EXIT;
/*End MaxL Script*/
Batch file to call the MaxL script and send email about the status of the load:
Rem ******************************************************************************
Rem Object Type: Batch File
Rem Object Name: BatchFileName.bat
Rem Script Date: 01-27-2009
Rem Created by: Dornakal
Rem Purpose: This script loads the following data into Cube
Rem Changed By:
Rem Change Date:
Rem Description:
Rem ******************************************************************************
Rem This starts the log file
echo "Start of log" > E:\Logs\Dataload.log
Thursday, January 22, 2009
Certification Questions
Hi All,
please find some sample questions, which can help you to get certified in Essbase.
Hyperion Essbase Interview questions -
Part 2
1. What is the difference between FIX and IF statements? In what situations
they are used and what are its pros and cons?
This is a favourite question which everyone will be asking. IF should be used for dense dimensions and FIX for sparse dimensions.
Reason: a dense dimension member is present in every block, so if you FIX on a dense dimension you are asking Essbase to pull all blocks, and the purpose of FIX (pulling only some blocks) is lost. If you FIX on sparse dimensions, only the required blocks are pulled and the calculation will be fast.
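A small hedged sketch of the difference (the member names are assumptions; East is sparse, while Jan and Sales are dense):
FIX ("East")              /* sparse member: only the East blocks are brought into memory */
   "Sales" (
      IF (@ISMBR("Jan"))  /* dense members: the test runs inside each block */
         "Sales" = "Sales" * 1.1;
      ENDIF;
   );
ENDFIX;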
In some cases you may need to load data at a parent level, for example for allocations: you load expense data into the East region and allocate the expenses to various cost centers under East based on headcount or other factors.
If you have AGG MISSING ON and you aggregate the database, it will clear the parent value when there is no data for the children. In such cases we don't use the AGGMISSG ON option.
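A hedged sketch of an aggregation that preserves data loaded at parent level (the dimension names and substitution variable are assumptions):
SET AGGMISSG OFF;   /* do not let #MISSING children overwrite values loaded at the parent */
FIX (&CurYear, "Budget")
   CALC DIM ("Entity", "Product");
ENDFIX;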
3. Give me an estimate of the largest Essbase db you have worked on (Number of
dims, members, calcs, etc)
In my past experience, the biggest BSO Essbase databases were below 10 GB, in most cases less than 5 GB. The number of dimensions was 8 to 10, with a maximum of 4 attribute dimensions. In most cases Accounts and Period were the dense dimensions. The Entity dimension had around 5,000 members, and any single dimension had fewer than 10,000 members.
How to tackle Essbase Interview
Hi Guys:
Please find some of the interview questions, for example on the BSO architecture.
The interviewer looks for real-life scenarios, not bookish answers. For example, if he asks "Tell me about blocks in Essbase":
DBAG says:
"Essbase creates a data block for each unique combination of sparse standard
dimension members (providing that at least one data value exists for the sparse
dimension member combination). The data block represents all the dense dimension
members for its combination of sparse dimension members."
You should not answer the question as if you had memorized the definition of a block. You are good if you answer the question like this:
In addition you should add the following when you answer the question on blocks.
“Generally, blocks are in a compressed state on disk. During calculation they are pulled into the cache and expanded. That's why block size is very important. If the block size is huge, only a few blocks fit into the cache at a time, and if a calculation requires 10,000 blocks it takes forever to finish. If the block size is very small, the index file becomes huge, and Essbase takes forever to search for the blocks it needs, like a needle in a haystack.”
This answer shows your confidence on subject and also makes interviewer to think
that you have real time experience.
There are two different storage options in Essbase. Previously there used to be only one, BSO; with version 7.0, ASO came into the picture.
In contrast, if the business requirements need write-back functionality and complex calculations like intercompany eliminations, allocations based on percentages, etc., BSO is the answer.
Because of the above reasons, Hyperion Planning needs to go with the BSO option.
I have gathered business requirements and built the initial dimensions. We don't sell all products in all cost centers. Can you suggest the dimension types for the following dimensions? (This question is intended to test the concept of dense and sparse in BSO and outline optimization.)
The hourglass structure makes the index more search-friendly. Assume instead that we had the largest sparse dimension as the first dimension and then the next largest, in that order; when Essbase calculates, it has to search all of the largest sparse dimension and then the next largest, which takes more time.
Reactions:
Migration of DataForms in Hyperion
Planning
The goal of this post is to explain how to migrate data forms from one environment to
another in planning applications. The automation of this process helps to reduce time
and leaves no scope for human error.
FormDefUtil.cmd utility can be used to move data form definitions from one Planning
application to another. Data form definitions can be exported or imported from an
XML file. This utility is extremely useful when Data forms need to be moved from a
development environment to a production environment.This utility uses a command
line interface and is installed in the bin directory (D:Hyperion\Planning\bin). This
utility runs only on Windows-based systems (even if application servers are on UNIX).
Only administrators can run it. The utility is located on Planning server as shown
below.
1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:
1. Launch the FormDefUtil.cmd utility from the bin directory using this syntax:
This completes migration of data forms from one environment to another in planning
applications.
Posted by Dornakal's-Hyperion-Blog at 1:50 PM 2 comments Links to this post
Labels: Migration of DataForms, Migration of WebForms
Reactions:
Automation of Dimension extraction
Hi Guys,
good morning.
today we will learn how to automate dimension extraction from essbase.
There is a very good utility called outline extractor in
http://www.appliedolap.com
Step # 1
As a first step you should install Outline extractor.
Step # 2
Build a batch file providing details of your application and database and dimension
you want to extract as follows:
Reactions:
Thursday, February 19, 2009
Sample Report Script
Reactions:
Tuesday, February 17, 2009
Import Security in Hyperion Planning
applications
Hi Guys,
good morning.
lot of times there will be a huge number of requests to grant or remove security
access for users.
for one or two changes, it is simple task, click on member and then assign access and
grant permission. It becomes tiresome when we need to make changes for large
number of members.
it involves considerable amount of time.
Automating the security changes saves considerable amount of time. The
ImportSecurity utility in planning loads access permissions for users or groups from a
text file into Planning. Importing access permissions using Import security utility
overwrites existing access assignments only for imported members, users, or groups.
All other existing access permissions remain intact. The SL_CLEARALL parameter
clears all existing access permissions.
Step 1: Create a text file and name it as Secfile.txt and save it in bin directory
(D:\Hyperion\Planning\bin). Example of Secfile.txt is shown in picture below where
Planning_Security_Group is group name, MemberName is member name, Write is
Access permission, and MEMBER is relationship.
Step 2: Locate the ImportSecurity utility by navigating to the bin directory.
Step 3: From the Command Prompt, enter this case-sensitive command, one space,
and the parameters. Separate each parameter with a comma, and enclose parameters
with double quotation marks:
ImportSecurity.cmd “appname,username,password[delimiter],[RUN_SILENT],
[SL_CLEARALL]”
Where:
appname : Name of the Planning application importing access permissions.
RUN_SILENT ( Optional) : Execute the utility silently (the default) or with progress
messages. Specify 0 for messages, or 1 for no messages.
Reactions:
Monday, February 9, 2009
Load dimension members into Planning
Application using HAL from Relational
Database
Hi Guys,
Good afternoon.
please find the process of loading metadata into planning application using HAL
(Hyperion Application Link).
In the example below, I am loading employee dimension using HAL from SQL server
table.
You can also load it from flat file (text file or .csv file)
1. Variable
In the Methods tab you should specify which dimension you are loading in the drop
down menu. In this example we changed the dimension to employee as we are loading
meta data related to employee.
The third adapter is planning adapter again. Make sure that you change the drop down
to load as shown below:
You can specify the name of the adapter. For example Planning load adapter etc.
In the methods section you should specify the dimension you are loading to. In this
example we are using Employee.
Reactions:
Automation of Dimension extraction
Hi Guys,
good morning.
today we will learn how to automate dimension extraction from essbase.
There is a very good utility called outline extractor in
http://www.appliedolap.com
Step # 1
As a first step you should install Outline extractor.
Step # 2
Build a batch file providing details of your application and database and dimension
you want to extract as follows:
You can schedule the above batch file to run at fixed time using various schedulers
like Window scheduler etc.
Sample Report Script
Step 1: Create a text file and name it as Secfile.txt and save it in bin directory
(D:\Hyperion\Planning\bin). Example of Secfile.txt is shown in picture below where
Planning_Security_Group is group name, MemberName is member name, Write is
Access permission, and MEMBER is relationship.
ImportSecurity.cmd “appname,username,password[delimiter],[RUN_SILENT],
[SL_CLEARALL]”
Where:
appname : Name of the Planning application importing access permissions.
RUN_SILENT ( Optional) : Execute the utility silently (the default) or with progress
messages. Specify 0 for messages, or 1 for no messages.
Reactions:
Load dimension members into Planning
Application using HAL from Relational
Database
Hi Guys,
Good afternoon.
please find the process of loading metadata into planning application using HAL
(Hyperion Application Link).
In the example below, I am loading employee dimension using HAL from SQL server
table.
You can also load it from flat file (text file or .csv file)
3. ODBC adapter.
In the General tab you can give custom defined name to adapter like Planning
Connector etc.
In the Methods tab you should specify which dimension you are loading in the drop
down menu. In this example we changed the dimension to employee as we are loading
meta data related to employee.
The third adapter is planning adapter again. Make sure that you change the drop down
to load as shown below:
You can specify the name of the adapter. For example Planning load adapter etc.
In the methods section you should specify the dimension you are loading to. In this
example we are using Employee.
You should create a DSN for the relational database in the location where you are
running the HAL job. Then connect to the relational database.
Select the table which has metadata information.
Once you have all the information. Connect to ports as shown below and save and
then run to load metadata.
Hope this helps.
Reaction
s:
0 comments:
Post a Comment
Create a Link
Here are some things, which I observed while reviewing those resumes.
Common why do your prospective employer want to know what you did in cold fusion
coding, when he advertised for hyperion developer.
Hope you will take care of above points when building your resume.
Posted by Dornakal's-Hyperion-Blog at 5:36 PM 1 comments Links to this post
Labels: Resume Building
Reactions:
Friday, March 27, 2009
Data transfer between two hyperion
Essbase Cubes
Data Transfer from one cube to another.
There are different ways to transfer data from one cube to other.
1. Hyperion Application Link (HAL)
2.Export data using report script and importing data into new cube
3.Jexport
4.XREF
Today we will learn about XREF calc script, which is used by most who want to
transfer data between cubes.
please find sample xref calc script below:
In this example I am trasfering payroll,social, bonus and headcount data from my
main P&L (profit and loss) application to work force application.
The first step of XREF is to create a location alias of the source application. In this
example my location alias is _LocAliasPL.
You can create location alias using EAS in the following way:
open the application
right click database
Click Edit
navigate to location alias.
Click to create location alias and give the details of the source cube.
ENDFIX;
/*END MAIN SCRIPT*/
Posted by Dornakal's-Hyperion-Blog at 3:10 PM 5 comments Links to this post
Labels: Data Transfer
Reactions:
Monday, March 16, 2009
Sample Calculation Script
/*Information */
/*
Script Name : CopyAct2Fcst
Created by : Dornakal, Hyperion Consultant, March 16, 2009
Purpose : Copy Actuals to Current Forecast
Directions : Set substitution variables CurFcst, Actmnth,CY
Versions : Essbase 9.3.1
Assumptions : The Accounts, Time dimensions are Dense and the rest of dimensions
are Sparse
Comments : This script copies actual data from actual scenario to forecast scenario;
This rule should be run before every forecast.
*/
/*House Keeping*/
/* Main Rollup */
/* Copies data for all existing employees and all Expense Accounts from the Actual scenario,
Final version, to Current Forecast, Working version */
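The body of the script was posted as a screenshot. A minimal sketch of what the copy might look like, using the substitution variables named above and the Final and Working version members mentioned in the comment, is shown below; the Employee and Expense Accounts member names are assumptions.

SET UPDATECALC OFF;
/* Copy Actual, Final data for the closed months of the current year
   into the current forecast scenario, Working version */
FIX (&CY, "Jan":&Actmnth, @RELATIVE("Employee", 0), @RELATIVE("Expense Accounts", 0))
   DATACOPY "Actual"->"Final" TO &CurFcst->"Working";
ENDFIX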
Friday, March 13, 2009
What is Intelligent Calculation? Why should we care?
Developing calc scripts Series
When you perform a full database calculation, Essbase marks which blocks have been
calculated. If you then load a subset of data, you can calculate only the changed data
blocks and their ancestors. This selective calculation process is intelligent
calculation.
By default, Intelligent Calculation is turned on. You can change the default setting in
the essbase.cfg file (UPDATECALC FALSE) or on a script-by-script basis with the SET UPDATECALC
OFF command.
When Intelligent Calculation is active, during calculation Essbase looks only for dirty
blocks.
Exceptions:
Even when Intelligent Calculation is enabled, for CALC DIM statements that do not
include all dimensions, Essbase does not use the Intelligent Calculation process. Rather,
Essbase calculates all relevant data blocks, regardless of clean or dirty status, and all
data blocks retain their status, dirty or clean.
Example:
When you execute a calculation script that includes the SET CLEARUPDATESTATUS AFTER
command, data blocks that are marked clean are not calculated, and data blocks that
are marked dirty are calculated and then marked clean.
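For instance, a short script relying on Intelligent Calculation might look like this (a sketch, not tied to any particular application):

SET UPDATECALC ON;            /* leave Intelligent Calculation on */
SET CLEARUPDATESTATUS AFTER;  /* mark calculated blocks clean when the calc finishes */
CALC ALL;                     /* only dirty blocks and their ancestors are calculated */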
Intelligent Calculation can provide significant performance benefits in certain situations,
but it requires vigilant maintenance of the clean and dirty status of data blocks to avoid
skipping the wrong blocks during calculation.
During quarter-close periods, allocation rates and adjusting entries may be updated
multiple times. You can use Intelligent Calculation to view updated results without recalculating
the entire database.
In a budgeting or forecasting application, typically many users update units and drivers
on a regular basis. In such applications, for users to analyze the impact of their
updates, a short calculation time is imperative.
Implementation Process for Essbase Database
Hi Guys,
Good morning.
Please find the implementation process for an Essbase database below.
Database Creation:
Monday, March 9, 2009
Automate HAL Load
Hi All,
Good morning.
Today we will see how we can automate a HAL job (flow diagram).
If you are loading a large number of members, HAL chokes if you don't automate it
(compile it to an .exe, etc.).
Here are the steps one should follow to automate a HAL job.
Step:1
Step:2
Drag and drop Windows Executable from the Palette.
Step:3
Open the Windows Executable. Go to the Runtime Target tab.
Give the location of the flow diagram.
Browse to the location where you want to store the executable file.
Step:4
Check
Step:5
Don't check anything in the Flow Diagrams tab as shown below:
Step:6
Don't check anything in the Management tab as shown below:
Step:7
In the Logging tab, give the location of your log file.
Step:8
Don't check anything in the Profile tab as shown below:
Step:9
Check
Step:11
You will see your executable file in the list as shown below:
Step:12
You can give the location of the executable file in your batch file and run it using a
scheduler like Windows Scheduler.
Posted by Dornakal's-Hyperion-Blog at 3:32 PM
Labels: Automation of HAL Load, HAL, Metadataload
Data transfer between two Hyperion Essbase Cubes
/*
Assumptions : The Accounts and Time dimensions are dense and the rest of the dimensions
are sparse
*/
/*House Keeping*/
SET CALCPARALLEL 4;
/* Baseline Fix */
FIX(HSP_INPUTVALUE, Local, USD, FINAL, Actual, FY08,
&ActualMnth, &NxtYr, @RELATIVE(Cost_Center,0), "EMPLOYEES")
ENDFIX;
5 comments:
mahesh said...
I have some queries on this topic. Can you please clarify them for me?
1. Do I need an equal number of dimensions on both cubes?
2. Suppose I have an equal number of dimensions on both cubes but the dimensions are
different; how will it work?
My suggestion is: can you please describe how many dimensions there are and
what they are, so we get good clarity on the script.
Many thanks,
Mahesh.
mahesh said...
Hi Dornakal,
Thanks,
Mahesh.
Anonymous said...
Mahesh, you don't need the same number of dimensions for a data copy or XREF.
Dornakal (yes, I've been reading your blogs), here is a question for you.
You are using "SET CREATENONMISSINGBLK ON;". Doesn't this command create
unnecessary blocks, hence increasing the size of your database?
Second: What's the advantage of using a script vs. having @XREF in the outline? I
can think of a couple but would like to know what you think.
Third: Are you using something like this in production, and if so, what is the
performance? You know I asked you this before.
Thanks
Pranay
Dornakal's-Hyperion-Blog said...
Hi Pranay,
Thanks for your comments.
You are using "SET CREATENONMISSINGBLK ON;" don't this command create
unnecessary blocks hence increasing size of your database?
You are right in your comment, but it is when you enable database wide in
database settings.
Yes we are using this script in production and don't have any noticible size
increase because of this.
Are you using something like this in production and if so what is the
performance.
Yes. we are using this script in production. our cube sizes are 4 GB and 6 GB
and script takes less than a 5 minutes to complete.
Anonymous said...
This creates a cube in the same application, but if we want to copy data to
another application, how can we manipulate it?
Example: We have two applications (app1 & app2) with different dimensions;
app1 has 10 dimensions and app2 has 8 dimensions. From app1 we need to
extract all the level-0 data and copy it into app2.
Can you suggest how I can go ahead with the XREF script? If possible, can
you give one sample script?
Thanks
Vivek
/*House Keeping*/
/* Main Rollup */
/* Copies data for all existing employees and all Expense Accounts from the Actual scenario,
Final version, to Current Forecast, Working version */
2 comments:
mahesh said...
Hi Dornakal,
I have gone through your script. It is really good. Can you post some more
calculation scripts like this? It would really help me.
Thanks,
Mahesh
Dornakal's-Hyperion-Blog said...
Hi Mahesh,
Thanks for your comment.
I am planning to post more calc scripts and business rules in the future.
Have a great day.
The SET CLEARUPDATESTATUS ONLY command instructs Essbase to mark as clean all data
blocks that were previously marked dirty.
Ancestors of a dirty block are not marked as dirty until the descendant dirty block is
calculated. A false positive can result if the descendant dirty block is calculated
within a FIX statement that does not include the dirty ancestor block. After the
calculation, Essbase marks the descendant block clean, and the ancestor block
remains marked clean, although it should be marked dirty.
1 comments:
parthiv said...
Hello Dornakal,
I saw the interview questions on your blog. I don't know these answers; could you please
post them?
5) hashtbl theory
6) Where are the rejected records placed after loading using HAL?
Thank you,
bye
Database Creation:
2 comments:
parthiv said...
Hi,
That's great, helping Hyperion people. Is this the same for Planning also? Where can
I get the DBAG guide? Could you help me please?
Thank you,
parthiv
Dornakal's-Hyperion-Blog said...
Hi Parthiv,
Thanks for your comment.
For Planning this will be a little bit different. I will post that soon.
You can find the DBAG via the Hyperion Documentation link on my blog.
Hope this helps.
6 comments:
Anonymous said...
Hi,
Can you post the process of backing up the Planning repository?
Thank you in advance.
Anonymous said...
Thanku
chinni said...
Hi,
This is chinni. I urgently need the SDLC process of an Essbase project,
please.
Thank you in advance.
Dornakal's-Hyperion-Blog said...
Hi Chinni,
can you email me at [email protected]
Thanks
Dornakal
To back up the Planning repository, export all HSP_xxxxx Oracle tables from the
Oracle schema.
Examples of HSP tables: HSP_Object, HSP_Group.
Also, export the Business Rules tables from the HBR repository relational database.
Here is an example of a Business Rule; it is the same as a calc script, but you can use
[variable] to prompt for and limit members:
FIX (&CurrFcst, "Working", "Local", "HSP_InputValue", [Department])
FIX (&FYofFcstQtr, &fcstmonth:Jan)
"61100" = "61100_ex" + "61100_adj";
"15200" = "15200" + "15200_new_calc";
"61300" = "61300_calc" + "61300_adj";
:
:
:
The above HBR is used to total up accounts.
[Department] is a variable that is retrieved from the web form.
Dornakal's-Hyperion-Blog said...
Thanks for taking the time to post on backing up Planning and Business Rules.
The first step is to download Hyperion Planning and related software from the following
website:
http://edelivery.oracle.com/
After entering the details you will be directed to the page below. Select Hyperion
Performance Management and BI.
Download the following products:
1. Hyperion Shared Services Release 9.3.1 Microsoft Windows x86 Installer English
(V11229-01)
2. Hyperion Smart View for Office Release 9.3.1.2 Microsoft Windows x86 Installer
( V12736-01)
3. Hyperion Essbase - System 9 Release 9.3.1.3 Server Windows Installer (V14762-01)
After downloading the software, extract the setup files from the downloads and start the
installation.
Make sure that you don't check Launch Configuration Utility in the last step in the
first run.
As seen above, complete the installation of all five products below without launching the
configuration utility.
Shared Services
EAS
Planning Database
After creating the SQL Server repositories, go to the server where you installed Shared
Services and open the Hyperion Configuration Utility from the Start menu. We have to
configure Shared Services first, then EAS, and then Planning.
The next step is to configure EAS. Open Foundation Services on the server where
you installed EAS and follow the procedure below:
Posted by Dornakal's-Hyperion-Blog at 11:28 AM
Labels: Hyperion Planning Installation and Configuration
18 comments:
Ahamed said...
Thanks for all your time and effort spent keeping this blog up to date.
Dornakal's-Hyperion-Blog said...
Hey Ahamed,
Thanks for your comment.
Can you please advise me on content for future posts?
with regards,
Dornakal.
Anu said...
mahesh said...
Hi Dornakal;
Many thanks...
Dornakal's-Hyperion-Blog said...
Hi Mahesh,
How are you?
I remember your suggestion :-)
I had to procure a computer to install Planning.
Thanks for your comments.
Please comment with your suggestions so that I can post more based on them.
Have a great day.
Dornakal's-Hyperion-Blog said...
Hi Anu,
I don't know whether I can post a PDF on the blog.
I can make a PDF document and mail it to you.
Please email me at [email protected]
Anu said...
Hi Dornakal,
Could you post more tutorials on Planning? What does a developer do on a
Planning project?
mahesh said...
HI Dornakal,
How are you?
Thanks;
Mahesh
Dornakal's-Hyperion-Blog said...
Hi Mahesh,
Good morning.
I am assuming that you are asking who is logged into the Planning application.
You can check this information in the Planning web application (front end);
look in View Statistics.
Hope this helps.
Anonymous said...
Hi,
Thanks,
Prabhu
[email protected]
Dornakal's-Hyperion-Blog said...
Hi Prabhu,
I ran into the same error when I was installing Planning.
What you need to do is use mixed mode authentication for the SQL
repository that you create, and also be careful with the user name and password
when you enter your repository information in the configuration utility.
If you have the same error I can walk you through it.
You can reach me at [email protected]
Anonymous said...
That's really a great effort, taking the screenshots and helping the newbies.
Thanks for the initiative, and I hope we get more valuable inputs from your blog.
I am checking if you could come out with something on writing calculation scripts
and optimization techniques.
Venkat
mahesh said...
Hi Dornakal,
How are you?
I want to know how the planning unit is related to entering budget data into the
Planning application.
Can you please tell me how the processes are interrelated (i.e.,
entering budget values and planning unit annotations)?
If possible, explain with an example.
Thanks,
Mahesh.
Bala said...
Hi Dornakal,
I am following your steps exactly, but I am facing some difficulty with the Essbase
server configuration.
Can you please elaborate on the configuration of EAS, and is it necessary to
install WebLogic Server prior to this step?
My email id is [email protected]
Balaji
Hi Dornakal,
How are you? You are doing a very good job. Thanks.
Though I have never installed Planning 9.3.1, I will try this.
Can you also post how to proceed with Planning from the point where this post ends?
Regards
Manohar Rana
August 3, 2009 2:03 AM
Anonymous said...
Thanks,
Devang
Dears,
I have a problem running Workspace after installing and configuring the
application.
I'm sure that all services are up and running.
I'm using the following URL to connect to Workspace:
http://localhost:19000/workspace/index.jsp
Is it the correct URL, or should I use another one?
Please help.
How can I track changes in the outline?
You can use the OUTLINECHANGELOG setting in the ESSBASE.CFG file to capture
outline changes. This setting logs all outline changes into the database_name.OLG
file. The OUTLINECHANGELOG setting allows database administrators to review the
outline revision history and gather enough information to roll back changes if needed.
Example:
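For instance, adding the following line to essbase.cfg and restarting Essbase turns the log on:

OUTLINECHANGELOG TRUE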
How to Delete a Dimension from a Hyperion Planning Application
If you create a custom dimension in planning and wish to delete it, here is how you
might remove it from SQL manually. We recommend that only someone familiar with
relational databases attempt this.
Outlined below are the steps to remove a dimension from the relational store. The
directions are for SQL Server, but the table names should be the same in Oracle.
You will need to delete all the children first, then the dimension.
If you have the dimension on forms, you may also need to (after step 5) go into the
hsp_formobj_def_mbr table and delete all references to that object ID. Then delete
the dimension ID in the hsp_form_layout table.
Recycle the Planning services.
Now when you open the application in Planning the dimension will be removed.
The next step is to perform a refresh to make sure Planning and Essbase are in sync.
Posted by Dornakal's-Hyperion-Blog at 5:23 PM
Labels: Delete Dimension in Hyperion Planning
Hyperion Essbase Overview
please find some Q&A, which provide you with essbase overview.
What is Partitioning?
Partitioning is copying a slice of a large database to work with locally, or linking from your
database directly to other databases.
What is an Outline?
The outline is the structure that defines all elements of the database. It dictates how
data is stored in the database. The outline defines the structural relationships
between the members in the database and organizes all data in the database.
It defines consolidations and mathematical relationships between members.
It defines the types of dimensions, aliases, member formulas, etc.
The order in which dimensions are arranged in the outline is as follows:
dense dimensions, followed by sparse dimensions, followed by attribute dimensions.
The data block represents all the dense dimension members for one combination of
sparse standard dimension members.
A huge index consumes a large amount of memory. The more index entries there are, the
longer Essbase searches to find a specific block.
What are the members for which Two-pass calculations can be given?
Accounts
Dynamic calc
Dynamic calc and store
Creating an application: MaxL - create application; ESSCMD - CREATEAPP
You cannot use substitution variables in formulas that are applied to the database
outline. The value of a substitution variable cannot be more than 256 characters.
Server>substitution variable
Maxl (for creating/deleting/updating)
Alter system
Alter application
Alter db
ESSCMD
CREATE VARIABLE
DELETE VARIABLE
UPDATE VARIABLE
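As a sketch, MaxL statements for a server-wide variable might look like the following (CurMnth and the values are placeholders; use alter application or alter database for a narrower scope):

alter system add variable CurMnth 'Jan';
alter system set variable CurMnth 'Feb';
alter system drop variable CurMnth;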
Database>location aliases
Create
Maxl ----------------------create location alias
ESSCMD----------------CREATE LOCATION
Edit /Delete
Maxl display location alias
drop location alias
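A hedged MaxL sketch of creating a location alias (the application, database, host, and credentials are placeholders):

create location alias _LocAliasPL from WrkForce.Plan to PnL.Plan at 'myserver' as 'admin' identified by 'password';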
Copying a database: EAS - Database > Copy; MaxL - create database as; ESSCMD - COPYDB
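For example (assuming the target application Sample2 already exists):

create database Sample2.Basic as Sample.Basic;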
· All data
· Level 0 data (when all data in the outline is at level 0)
· Input data (when you load data into non level 0 members)
· Discard all data (when you expect to reload data, or when the outline is so radically
changed that no existing data applies)
What is Metadata?
Metadata is data about data. Metadata is the data contained in the database
outline that describes the values within a database.
Ex:
East>New York>cola>sales>10000
Can you add time members that are not tagged as time?
Yes
The time dimension tag indicates how often you collect and update data. The time dimension
enables several accounts dimension functions, such as first and last time balances.
Skip Properties
· None
· Missing
· Zeros
· Missing and zeros
Skip properties, expense reporting, Time Balance properties are applicable only to
the accounts dimension.
Only accounts, dynamic calc, dynamic calc and store members can be given two pass
calculation.
Edit>properties
Data dictionary button
Labels: Essbase Overview, Hyperion Essbase BSO, Hyperion Essbase Interview questions, Important Points in
Hyperion Essbase
1 comments:
Anonymous said...
Essbase objects like reports are not .RUL. Please correct that question. FYI,
the rule file is .RUL.
Databases that contain a significant number of Dynamic Calc and Store members
Databases that use an isolation level of uncommitted access with commit block set to zero
Any quotient above the high end of the range indicates that reducing fragmentation may help
performance.
For databases less than 50 MB using the Direct I/O access mode, the fragmentation quotient tends to
be high. A high fragmentation quotient does not necessarily indicate a need to reduce
fragmentation, because the free space is created in 8 MB chunks and all of it might not get used
right away.
Database Size                 Fragmentation Quotient Threshold
Small (up to 200 MB)          60% or higher
Medium (up to 2 GB)           40% or higher
Large (greater than 2 GB)     30% or higher
To remove fragmentation, perform an export of the database, delete all data in the database with
CLEARDATA, and reload the export file.
Correct Order of Restarting Planning Services
If you are a Planning admin and have to restart the Planning services,
here is the order you have to follow.
Stop Order
Planning Service
RMI registry
AAS
Start Order
AAS
RMI registry
Planning Service
Screenshots of the services are shown below:
Posted by Dornakal's-Hyperion-Blog at 10:19 AM
Labels: Order of restarting planning services
Smart View Retrieve Error
One of my business users received an error when she was running a Smart View query
which had a lot of attributes in her retrieve sheet.
Even though I increased NetDelay and NetRetryCount in essbase.cfg, that did not
resolve the problem. She was using IE7 on XP SP3.
By design, Internet Explorer imposes a time-out limit for the server to return data.
The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for
versions 5.x and 6. Also, the time-out limit is 30 seconds for Windows Internet
Explorer 7. As a result, Internet Explorer does not wait endlessly for the server to
come back with data when the server has a problem.
Apparently the query which the user was running takes around 45 seconds. The IE7
time out setting was causing it to error out.
We need to add three new registry keys to resolve the issue.
If the above settings do not make any difference and the version of Smart View is
9.3.1.2, you may need to upgrade to Smart View 9.3.1.4 or a higher version.
Here are the screenshots for changing the registry settings (rename the new value to
KeepAliveTimeout).
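For reference, the values typically added are DWORDs (in milliseconds) under the current user's Internet Settings key; treat the names and the 480000 (8 minute) value below as an example rather than a prescription:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
    ReceiveTimeout    = 480000
    KeepAliveTimeout  = 480000
    ServerInfoTimeout = 480000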
Hyperion Essbase Questions
How do you optimize outline?
Usually the outline is optimized using the hourglass design for dimension ordering, i.e.,
· Dimension with Accounts tag
· Dimension with Time tag
· Largest Dense dimension
· Smallest dense dimension
· Smallest Sparse dimension
· Largest Sparse dimension
You can configure a database to optimize calculation performance. The best configuration for the
site depends on the nature and size of the database.
· Databases that contain a significant number of Dynamic Calc and Store members
· Databases that use an isolation level of uncommitted access with commit block set to zero
You can measure fragmentation using the average clustering ratio or average fragmentation
Quotient.
Any quotient above the high end of the range indicates that reducing fragmentation may help
performance
The average clustering ratio database statistic indicates the fragmentation level of the data (.pag)
files. The maximum value, 1, indicates no fragmentation.
· To prevent fragmentation, optimize data loads by sorting load records based upon sparse
dimension members. For a comprehensive discussion of optimizing data load by grouping sparse
members.
· To remove fragmentation, perform an export of the database, delete all data in the database with
CLEARDATA, and reload the export file.
As your business changes, you change the Essbase database outline to capture new product lines,
provide information on new scenarios, reflect new time periods, etc. Some changes to a database
outline affect the data storage arrangement, forcing Essbase to restructure the database.
• Implicit Restructures
• Dense restructure
• Sparse restructure
• Outline-only restructure
• Explicit Restructures
Intelligent Calculation, name changes, and formula changes affect database restructuring:
· If you use Intelligent Calculation in the database, all restructured blocks are marked as dirty
whenever data blocks are restructured. Marking the blocks as dirty forces the next default
Intelligent Calculation to be a full calculation.
· If you change a name or a formula, Essbase does not mark the affected blocks as dirty.
Therefore, you must use a method other than full calculation to recalculate the member or the
database.
When Essbase restructures both the data blocks and the index, it uses the files described
dbname.esm Essbase kernel file that contains control information used for db recovery
dbname.ind Free fragment file for data and index free fragments
dbname.otl Outline file in which is defined all metadata for a database and how data is stored
There are a number of things you can do to improve performance related to database
restructuring:
· If you change a dimension frequently, make it sparse. · Use incremental restructuring to control
when Essbase performs a required database restructuring. · Select options when you save a
modified outline that reduce the amount of restructuring required.
· Outline only (no index or data files)· Sparse (only index files) · Dense (index files and data
files) as a result of adding, deleting, or moving members and other operations · Dense (index
and data files) as a result of changing a dense dimension to sparse or changing a sparse
dimension to dense
Essbase initiates an implicit restructure of the database files after an outline is changed using
Outline Editor or Dimension Build. The type of restructure that is performed depends on the type
of changes made to the outline
If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in
the data files and creates new data files. When Essbase restructures the data blocks, it regenerates
the index automatically so that index entries point to the new data blocks.
Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense
restructure you need to recalculate the database.
If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index
and creates new index files.
Restructuring the index is relatively fast; the amount of time required depends on the size of the
index.
If a change affects only the database outline, Essbase does not restructure the index or data files.
Member name changes, creation of aliases, and dynamic calculation formula changes are
examples of changes that affect only the database outline.
1. Creates temporary files that are copies of the .ind, .pag, .otl, .esm, and .tct files. Each
temporary file substitutes either N or U for the last character of the file extension, so the
temporary file names are essxxxxx.inn, essxxxxx.pan, dbname.otn, dbname.esn, and dbname.tcu.
2. Reads the blocks from the database files copied in step 1, restructures the blocks in memory,
and then stores them in the new temporary files. This step takes the most time.
3. Removes the database files copied in step 1, including .ind, .pag, .otl, .esm, and .tct files.
4. Renames the temporary files to the correct file names: .ind, .pag, .otl, .esm, and .tct.
When Essbase does a sparse restructure (restructures just the index), it uses the following files:
· essxxxxx.ind
· dbname.otl
· dbname.esm
Generally, data compression optimizes storage use. You can check compression efficiency by
checking the compression ratio statistic.
1. Bitmap compression, the default. Essbase stores only non-missing values and
uses a bitmapping scheme. A bitmap uses one bit for each cell in the data block, whether the cell
value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to
store every non-missing cell. In most cases, bitmap compression conserves disk space more
efficiently. However, much depends on the configuration of the data.
2. Run-length encoding (RLE). Essbase compresses repetitive, consecutive values --any value
that repeats three or more times consecutively, including zeros and #MISSING values. Each data
value that is repeated three or more times uses 8 bytes plus a 16 byte repetition factor.
3. zlib compression. Essbase builds a data dictionary based on the actual data being compressed.
This method is used in packages like PNG, Zip, and gzip. Generally, the more dense or
heterogeneous the data is, the better zlib will compress it in comparison to bitmap or RLE
compression.
4. Index Value Pair compression. Essbase applies this compression if the block density is less
than 3%. Index Value Pair addresses compression on databases with larger block sizes, where the
blocks are highly sparse. zlib does not use this.
5. No compression. Essbase does not compress data blocks when they are written to disk
You may want to disable data compression if blocks have very high density (90% or greater) and
have few consecutive, repeating data values. Under these conditions, enabling compression
consumes resources unnecessarily. Don't use compression if disk space and memory are not an
issue for your application, as compression can become a drain on the processor.
Essbase issues write (exclusive) locks for blocks that are created, updated, or deleted, and issues
read (shared) locks for blocks that should be accessed but not modified. By issuing the
appropriate locks, Essbase ensures that data changed by one operation cannot be corrupted by a
concurrent update.
What is a transaction?
When a database is in read/write mode, Essbase considers every update request to the server
(such as a data load, a calculation, or a statement in a calculation script) as a transaction.
The transaction control file contains an entry for each transaction and tracks the current state of
each transaction (Active, Committed, or Aborted).
What is isolation level and what are the types of isolation levels?
Isolation levels determine how Essbase commits data to disk. Essbase offers two isolation levels
for transactions --committed access and uncommitted access (the default).
Committed:
Wait times: indefinite, or immediate access (no wait).
Uncommitted:
Setting the isolation level to committed access may increase memory and time requirements for
database restructure.
The Spreadsheet Add-in Lock and Send operations and the Grid API always use committed
access mode.
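For example, the isolation level can be switched with MaxL (Sample.Basic is a placeholder):

alter database Sample.Basic enable committed_mode;
alter database Sample.Basic disable committed_mode;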
The index cache is a buffer in memory that holds index pages. How many index pages are in
memory at one time depends upon the amount of memory allocated to the cache.
The data file cache is a buffer in memory that holds compressed data files (.pag files). Essbase
allocates memory to the data file cache during data load, calculation, and retrieval operations, as
needed. The data file cache is used only when direct I/O is in effect.
The calculator cache is a buffer in memory that Essbase uses to create and track data blocks
during calculation operations.
The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks
needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a
query).
What are the memory caches used by Essbase to coordinate memory usage?
Index Cache: Min: 1024 KB (1048576 bytes). Default: buffered I/O: 1024 KB (1048576 bytes);
direct I/O: 10240 KB (10485760 bytes). Opt: combined size of all essn.ind files, if possible;
otherwise as large as possible. Do not set this cache size higher than the total index size,
as no performance improvement results.
Data File Cache: Min: direct I/O: 10240 KB (10485760 bytes). Default: direct I/O: 32768 KB
(33554432 bytes). Opt: combined size of all essn.pag files, if possible; otherwise as large as
possible. This cache setting is not used if Essbase is set to use buffered I/O.
Data Cache: Min: 3072 KB (3145728 bytes). Default: 3072 KB (3145728 bytes). Opt: 0.125 * the
value of the data file cache size.
Calculator Cache: Min: 4 bytes. Max: 200,000,000 bytes. Default: 200,000 bytes. Opt: the best
size for the calculator cache depends on the number and density of the sparse dimensions in
your outline and on the amount of memory the system has available.
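As a sketch, the caches can be sized with MaxL (the database name and sizes are placeholders; data_file_cache_size matters only under direct I/O):

alter database Sample.Basic set index_cache_size 100mb;
alter database Sample.Basic set data_cache_size 300mb;
alter database Sample.Basic set data_file_cache_size 300mb;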
In a business application requiring currency conversion, the main database is divided into at
least two slices. One slice handles input of the local data, and another slice holds a copy of the
input data converted to a common currency.
Essbase holds the exchange rates required for currency conversion in a separate currency
database. The currency database outline, which is automatically generated by Essbase from the
main database after you assign the necessary tags, typically maps a given conversion ratio onto a
section of the main database. After the currency database is generated, it can be edited just like
any other Essbase database.
What are the three dimensions that should be present in the main database of a currency
application?
The main database outline can contain from 3 to n dimensions. At a minimum, the main database
must contain the following dimensions: a dimension tagged as time, a dimension tagged as
accounts, and a market-related dimension tagged as country.
What are the dimensions that should be present in currency database of currency
application?
A currency database always consists of the following three dimensions, with an optional fourth
dimension:
· A dimension tagged as time, which is typically the same as the dimension tagged as time in
the main database.
· A dimension tagged as country, which contains the names of currencies relevant to the
markets (or countries) defined in the main database.
· A dimension tagged as accounts, which enables the application of various rates to members
of the dimension tagged as accounts in the main database.
· An optional currency type dimension, which enables different scenarios for currency
conversion.
What are the conversion methods supported by Essbase for currency applications?
Different currency applications have different conversion requirements. Essbase supports two
conversion methods:
What is the process to build a currency conversion application and perform conversions?
To build a currency conversion application and perform conversions, use the following process:
1. Create or open the main database outline.
2. Prepare the main database outline for currency conversion.
3. Generate the currency database outline.
4. Link the main and currency databases.
5. Convert currency values.
6. Track currency conversions.
7. If necessary, troubleshoot currency conversion.
What is CCONV? After you create a currency conversion application, you convert data values
from a local currency to a common, converted currency by using the CCONV command in
calculation scripts. Ex: CCONV USD; CALC ALL;
Can we convert the converted currency back into its local currency? You can convert the
data values back to the original, local currencies by using the CCONV TOLOCALRATE
command.
When you convert currencies using the CCONV command, are the resulting data blocks
marked as dirty or clean? When you convert currencies using the CCONV command, the
resulting data blocks are marked as dirty for the purposes of Intelligent Calculation. Thus,
Essbase recalculates all converted blocks when you recalculate the database.
What is CCTRACK? You can use the CCTRACK setting in the essbase.cfg file to control
whether Essbase tracks the currency partitions that have been converted and the exchange rates
that have been used for the conversions. By default CCTRACK is turned on.
What are the reasons to turn off CCTRACK? For increased efficiency when converting
currency data between currency partitions, you may want to turn off CCTRACK. For example,
you load data for the current month into the local partition, use the DATACOPY command to
copy the entire currency partition that contains the updated data, and then run the conversion on
the currency partition.
How can you turn off CCTRACK? You can turn off CCTRACK in three ways: · Set CCTRACK to
FALSE in the essbase.cfg file. · Use the SET CCTRACKCALC ON|OFF command in a calculation
script to turn off CCTRACK temporarily. · Use the CLEARCCTRACK calculation command to clear
the internal exchange rate tables created by CCTRACK.
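A calc script sketch of the last two options (the Actual and USD member names are placeholders):

SET CCTRACKCALC OFF;   /* temporarily stop tracking conversions in this script */
FIX ("Actual")
   CCONV "USD";
ENDFIX
CLEARCCTRACK;          /* clear the internal exchange rate tables created by CCTRACK */
CALC ALL;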
What is LRO (Linked reporting objects)? An LRO is an artifact associated with a specific
data cell in an Essbase database. LROs can enhance data analysis capabilities by providing
additional information on a cell.
An LRO can be any of the following: a cell note, an external file, or a URL.
Users create linked objects through Essbase Spreadsheet Add-in for Excel by selecting a data cell
and choosing a menu item. There is no limit to the number of objects you can link to a cell. The
objects are stored on the Essbase Server where they are available to any user with the appropriate
access permissions. Users retrieve and edit the objects through the Essbase Spreadsheet Add-in
for Excel Linked Objects Browser feature, enabling them to view objects linked to the selected
cell.
Does adding or removing links to a cell affect the cell contents?
No. LROs are linked to data cells, not to the data contained in the cells. The link is based on a
specific member combination in the database.
Ex1: A sales manager may attach cell notes to recently updated budget items. Ex2: A finance
manager might link a spreadsheet containing supporting data for this quarter's results. Ex3: A
product manager might link bitmap images of new products. Ex4: A sales manager may link the
URL of a company's Web site to quickly access the info on the Web
Essbase uses the database index to locate and retrieve linked objects. If you clear all data values
from a database, the index is deleted and so are the links to linked objects. If you restructure a
database, the index is preserved and so are the links to linked objects.
Shared members share data values but do not share LROs. This is because LROs are linked to
specific member combinations and shared members do not have identical member combinations.
To link a given object to shared members, link it to each shared member individually.
Can you change the member combination associated with any linked object?
You cannot change the member combination associated with any linked object. To move an
object to another member combination, first delete it, then use the Essbase Spreadsheet Add-in
for Excel to re-link the object to the desired member combination.
Why do we need to limit LRO file sizes for storage conservation?
Because Essbase stores linked files in a repository on the server and, by default, the size is
unlimited. Limiting the file size prevents users from taking up too much of the server resources
by storing extremely large objects. You can set the maximum linked file size for each
application. If a user attempts to link a file that is larger than the limit, an error message displays.
The maximum file size setting applies only to linked files and does not affect cell notes or URLs.
The lengths of the cell note, URL string, and LRO descriptions are fixed.
What is partitioning?
A partition is the piece of a database that is shared with another database. An Essbase partitioned
application can span multiple servers, processors, or computers.
Essbase Partitioning is a collection of features that makes it easy to design and administer
databases that span Essbase applications or servers. Partitioning is licensed separately from
Essbase.
1. Transparent partition:
A form of shared partition that provides the ability to access and manipulate remote data
transparently as though it is part of your local database. The remote data is retrieved from the
data source each time you request it. Any updates made to the data are written back to the data
source and become immediately accessible to both local data target users and transparent data
source users
2. Replicated Partition:
A portion of a database, defined through Partition Manager, used to propagate an update to data
mastered at one site to a copy of data stored at another site. Users can access the data as though it
were part of their local database.
3. Linked Partition:
A shared partition that enables you to use a data cell to link two databases. When a user clicks a
linked cell in a worksheet, Essbase opens a new sheet displaying the dimensions in the linked
database. The user can then drill down those dimensions.
· Type of partition: A flag indicating whether the partition is replicated, transparent, or linked.
· Data source information: The server, application, and database name of the data source.
· Data target information: The server, application, and database name of the
data target.
· Login and password: The login and password information for the data source and the data
target.
· Shared areas: A definition of one or more areas, or sub cubes, shared between the data source
and the data target.
· Member mapping information: A description of how the members in the data source map to
members in the data target.
· State of the partition: Information about whether the partition is up-to-date and when the
partition was last updated.
Can you define different types of partitions between the same two databases?
No
Can a single database serve as the data source or data target for multiple partitions?
Yes
An overlapping partition occurs when similar data from two or more databases serve as the data
source for a single data target in a partition.
Using substitution variables in partition definitions enables you to base the partition definition on
different members at different times.
Can we partition an attribute dimension?
No. You can use attribute functions for partitioning on attribute values, but you cannot partition
an attribute dimension.
Both ends of a transparent, replicated, or linked partition must be on the same release level of
Essbase Server. For example, if the source of a linked partition is on a Release 7.1.2 server, the
target must also be on a Release 7.1.2 server.
In addition, for transparent and replicated (but not linked) partitions, the application
mode of both ends of the partitions must be the same--either Unicode mode or non-
Unicode mode.
If we have more dimensions (generally more than 10) and the data simply rolls up, we go for ASO.
If we have fewer dimensions, we go for BSO.
We cannot write back in ASO; we can write back in BSO.
Most of the dimensions are sparse in ASO; most of the dimensions are dense in BSO.
· ASO databases are created specifically to deal with the requirements of very large sparse data
sets with a high number of dimensions and potentially millions of members.
· ASO does not have indexes or data blocks.
· ASO does not use calculation scripts, because calculations are not complex.
· ASO uses a new kind of storage mechanism that allows improved calculation times, from 10 to
100 times faster than BSO.
· ASO can store up to 252 dimensional combinations.
· The front-end tools usually do not care if the database is ASO or BSO. Even MaxL sees minor
differences.
· We can have attribute dimensions in ASO.
· In ASO there is no concept of dense and sparse dimensions.
· We do not have two-pass logic and built-in time balance functionality (time balance
functionality is present from version 9.3 onwards).
· Member formulas are not supported in stored hierarchies.
· Only the non-consolidation (~) and addition (+) operators are supported in shared hierarchies.
· We cannot create more than one database in an ASO application.
· ASO does not utilize procedural calculation scripts.
· ASO formulas are written in MDX syntax.
· ASO has an Accounts dimension, but it is completely different from the Accounts dimension
of BSO.
· ASO is read-only. You cannot write to ASO databases, but there is a workaround using
transparent partitions and pointing to an attached BSO database for those duties.
· You can load data to level zero members only.
· The database must restructure after any members in the standard dimensions are added,
deleted, or moved. In fact, most actions on an ASO outline will either cause a loss of data or a
restructure.
How do you differentiate ASO applications?
You can easily differentiate the ASO database in the Administrative Services Console by the red
star beside the application name.
How do you create an ASO application? ASO has two types of hierarchies: stored and
dynamic. The dimension can contain both types of hierarchies (if you enable multiple
hierarchies). Other properties that need to be set for dimensions and members include:
· Dimension type
· Data storage (store, never share, label only)
· Member solve order
· Alias
You can add dimensions using the visual editor or the rules files.
Unlike block storage, ASO does not allow you to preview outline changes. If you are
unsure of the build file, make a backup of your outline before running the new build rule. For
ASO databases, after the data values are loaded into the level 0 cells of an outline, the database
requires no separate calculation step. For retrieving from an ASO database, retrieve and analyze
just as with a BSO database.
How do you create an ASO database using ASO Outline Conversion Wizard ? You can also
create an ASO database using ASO Outline Conversion Wizard. This wizard uses the existing
BSO database to convert to an ASO database. This is advantageous because we do not need to
create an ASO database from scratch. However, we need to perform reengineering of the
dimensions and hierarchies.
How do you create ASO in the Automated Way? The final way of creating an ASO
application is by using "Create Application" , "Create Database" ,"Create Outline " commands
using MaxL. Typically this method is used when you are running the MaxL command as a part
of the batch job.
Unicode applications use UTF-8 encoding form to interpret and store character text, providing
support for multiple character sets.
2. Set the Essbase server to Unicode Mode via Administrative Services or MaxL.
3. Check the Unicode box when creating a new Unicode-mode application.
4. You can also migrate from non-Unicode applications to Unicode applications (but not the
other way round).
Report Scripts are outdated but still can be helpful when extracting subsets of data from Essbase
for online backups or feeding into other systems.
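A minimal report script sketch is shown below; the member names follow the Sample.Basic demo outline and are assumptions, and the exclamation mark terminates the report.

// Level-0 Sales extract, tab-delimited, missing rows suppressed
{TABDELIMIT}
{SUPMISSINGROWS}
<PAGE ("Measures")
"Sales"
<COLUMN ("Year")
<CHILDREN "Year"
<ROW ("Market", "Product")
<DIMBOTTOM "Market"
<DIMBOTTOM "Product"
!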
The Wizards tab of the Administrative Services Console menu has the following
components:
1. Migration
4. User Setup