Prerequisites to learn SAP HANA
Do you want to learn SAP HANA and build your career in it, but don't know where to start?
Do you have questions like:
I don't know SAP BW, SAP BO, or SAP BI. Can I learn SAP HANA and make a career in it?
Can people from varied backgrounds (Java, PHP, .NET, JavaScript, HTML, etc.) with no prior SAP knowledge succeed in SAP HANA?
How do I get access to a cost-effective SAP HANA server, SAP HANA Studio, and other client tools?
What are the SAP HANA certifications, and how do they help in boosting my career?
Then this article is for you. Continue reading and all your questions will be answered.
Migration to SAP HANA is already starting to happen. HANA is also extremely sellable to clients looking to invest in IT, as it is both the present and the future.
To know more, check the article Top 10 Reasons Customers Choose SAP HANA
Let's take a look at SAP HANA from a beginner's point of view and get some answers.
You can find plenty of material on the web about database concepts and SQL. Or let me Google that for you. :)
However, if you are already an experienced ABAP programmer and are wondering whether the ABAPer's time has come to an end, you should not worry either.
There is a new ABAP - one that is SMARTER, LIGHTER and FASTER - and at the bottom sits SAP HANA, powering ABAP silently without any disruption. This is what we call "ABAP for SAP HANA".
Learning SAP HANA together with ABAP will give a new boost to your career.
I don't know SAP BW, SAP BO, SAP BI. Can I learn and make a career in SAP HANA?
Let's check them one by one.
SAP Business Information Warehouse (SAP BW):
Knowing BW helps you in understanding modeling concepts, and when you want to use DXC to transfer data from an SAP Business Suite system to HANA.
But even if you don't have the knowledge of BW, you can easily learn HANA Modeling concepts.
BW knowledge is a must if you are going to work on BW on HANA.
SAP BusinessObjects (BO):
BO or Business Objects is the Front end Reporting tool set from SAP.
If you have knowledge of BO, then reporting on HANA will be a piece of cake for you. But even if you don't, you will be able to pick up the BO concepts easily once you start learning HANA reporting.
You might want to gain an understanding of the different reporting tools in BO (Explorer, WebI, etc.).
There are many step by step guides that can help you to learn these tools.
SAP BI:
BI or SAP BI is the data warehousing implementation package from SAP. Understanding how data warehousing concepts are realized in SAP BI will help you understand the implementation aspects from a BW on HANA perspective. Again, unless you are planning to work on BW on HANA, you don't necessarily have to learn SAP BI.
Java, PHP, Python, and .NET work well with SAP HANA.
You can also find lots of SAP HANA learning materials at this site. Just go to the Learning material
section of SAP HANA Tutorial. The contents are categorized in a nice and simple way.
Once you have gained knowledge of SAP HANA, you should also test your understanding by checking the SAP HANA Interview Questions and Answers.
Note: We will come up with more topics on SAP HANA. If you want a particular topic to be included
please leave a comment.
How to get access to SAP HANA Server, SAP HANA Studio and
other client tools?
To get free access to SAP HANA Server, check the article:
SAP HANA Server Access
What are the SAP HANA certifications and how does it help in
boosting my career?
SAP offers two main certification paths.
The first path is for administration and operations and is considered more technical.
The second path is for implementation and modeling.
To know more about them, check the article SAP HANA Certification
If you want a career in HANA, there are several areas you can specialize in, such as:
SAP HANA Modeling
In this role you will need SAP HANA modeling skills. SAP BW on HANA skills will also come in handy. The SAP HANA Modeler learning roadmap also includes an associate and a professional certification.
This topic is not finished yet; we will come up with a more specific, timeline-based roadmap. Currently it is a little difficult to outline the SAP HANA career roadmap, as market trends are evolving and several new areas will emerge in the future.
If any of your questions are still unanswered, feel free to contact us or leave a comment. We will try our best to guide you on your SAP HANA journey.
SAP HANA is best suited for performing real-time analytics, and for developing and deploying real-time applications.
An in-memory database means all the data is stored in memory (RAM). No time is wasted loading data from hard disk to RAM, or keeping some data in RAM and some temporarily on disk during processing. Everything is in memory all the time, which gives the CPUs quick access to data for processing.
The speed advantages offered by this RAM storage are further accelerated by the use of multi-core CPUs, multiple CPUs per board, and multiple boards per server appliance.
Complex calculations on data are not carried out in the application layer, but are moved to the
database.
SAP HANA is equipped with a multi-engine query processing environment that supports relational as well as graph and text data within the same system. It provides significant processing speed, handles huge data volumes, and offers text mining capabilities.
Want to know more about SAP HANA Hardware? Check the article - SAP HANA hardware
With the help of technologies like SLT replication, data can be moved to HANA in real time. It is also possible to copy data from SAP BW or other databases into SAP HANA. In HANA, we can use the modeling tool HANA Studio to build the logic and structures, and use tools such as SAP BusinessObjects and SAP Visual Intelligence to visualize or analyze the data.
Make decisions in real time: access to real-time analysis, fast and easy creation of ad-hoc business statistics, and lower total cost of ownership.
SAP HANA touches areas such as:
Business transactions
Advanced analytics
Social media
Mobile experience
Collaborative business
Design connections
You may be thinking, "So what?" or "How does this help my business?" or "How can SAP HANA help my company make more money?"
In this article, we look at what we consider to be the top 10 reasons why customers should choose
SAP HANA.
1. Speed:
The speed SAP HANA enables is sudden and significant, and has the potential to transform
entire business models.
A live analysis by a consumer products company reveals how SAP HANA analyzes current point-of-sale data in real time, empowering the organization to review segmentation, merchandising, inventory management, and forecasting information at the speed of thought.
2. Real Time:
SAP HANA delivers the real real-time enterprise through the most advanced in-memory
technology
Pull up-to-the-minute data from multiple sources. Evaluate options to balance financial, operational, and strategic goals based on today's business.
3. Any Data:
SAP HANA helps you to gain insights from structured and unstructured data.
SAP HANA integrates structured and unstructured data from internal and external sources, and can
work on detailed data without aggregations.
4. Any Source:
SAP HANA provides multiple ways to load your data from existing data sources into SAP
HANA.
SAP HANA can be integrated into a wide range of enterprise environments, allowing it to handle
data from Oracle databases, Microsoft SQL Server, and IBM DB2.
Quickly and easily create ad-hoc views without needing to know the data or query type, allowing you to formulate your actions based on deep insights.
Receive quick reactions to newly articulated queries so you can innovate new processes and
business models to outpace the competition.
Enable state-of-the-art, interactive analyses such as simulations and pattern recognition to create
measurable, targeted actions.
Energy Management
Utility companies use SAP HANA to process and analyze vast amounts of data generated by smart meter technology, improving customers' energy efficiency and driving sustainability initiatives.
Real-time Transit Routing
SAP HANA is helping research firms calculate optimal driving routes using real-time GPS data
transmitted from thousands of taxis.
Software Piracy Detection and Prevention
Tech companies use SAP HANA to analyze large volumes of complex data to gain business insights
into software piracy, develop preventive strategies, and recover revenue.
Reduce or eliminate the data aggregation, indexing, mapping, and extract-transform-load (ETL) processing needed in complex data warehouses and marts.
Incorporate prepackaged business logic, in-memory calculations, and optimization for multi-core 64-bit processors.
8. Cloud:
Fast:
A highly robust cloud service allows quick deployment of current and next generation applications,
scaled to your business needs.
Secure:
We secure your data through the entire cloud solution with independently audited standards of data
security and governance.
9. Cost:
SAP HANA reduces your total IT cost so you can increase spending on innovation.
10. Choice:
SAP HANA provides you choice at every layer to work with your preferred partners.
SAP HANA is a combination of hardware and software made to process massive real time data
using In-Memory computing. To leverage the full power of the SAP HANA platform, you need the
right hardware infrastructure.
SAP HANA can only be installed and configured by certified hardware partners.
You can find all SAP HANA components and the respective SAP HANA hardware and software requirements in the Product Availability Matrix (PAM).
IBM:
http://www-03.ibm.com/systems/power/solutions/bigdata-analytics/sap-hana/
Fujitsu:
http://www.fujitsu.com/fts/solutions/high-tech/solutions/datacenter/sap/hana/
Cisco:
http://www.cisco.com/en/US/netsol/ns1160/index.html
Hitachi:
http://www.hds.com/solutions/applications/sap-application/
NEC:
http://www.nec.com/en/global/prod/express/related/sap_certified.html
DELL:
http://www.dell.com/Learn/us/en/555/shared-content~data-sheets~en/Documents~sap-hana-techsheet.pdf
The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise Server. The SAP HANA database consists of multiple servers, the most important of which is the Index Server.
The SAP HANA database consists of the Index Server, Name Server, Statistics Server, Preprocessor Server, and XS Engine.
Index Server:
It contains the actual data stores and the engines for processing the data.
The index server processes incoming SQL or MDX statements in the context of
authenticated sessions and transactions.
Persistence Layer:
The database persistence layer is responsible for durability and atomicity of transactions. It ensures
that the database can be restored to the most recent committed state after a restart and that
transactions are either completely executed or completely undone.
Preprocessor Server:
The index server uses the preprocessor server for analyzing text data and extracting the information
on which the text search capabilities are based.
Name Server:
The name server owns the information about the topology of SAP HANA system. In a distributed
system, the name server knows where the components are running and which data is located on
which server.
Statistics Server:
The statistics server collects information about status, performance, and resource consumption from the other servers in the system. The statistics server also provides a history of measurement data for further analysis.
Session and Transaction Manager:
The Transaction manager coordinates database transactions, and keeps track of running and
closed transactions. When a transaction is committed or rolled back, the transaction manager
informs the involved storage engines about this event so they can execute necessary actions.
XS Engine:
XS Engine is an optional component. Using XS Engine clients can connect to SAP HANA database
to fetch data via HTTP.
The SAP HANA Index Server contains the majority of the magic behind SAP HANA.
Connection and Session Management:
This component is responsible for creating and managing sessions and connections for the database clients.
Once a session is established, clients can communicate with the SAP HANA database using
SQL statements.
For each session, a set of parameters is maintained, such as auto-commit and the current transaction isolation level.
Users are authenticated either by the SAP HANA database itself (login with user and password), or authentication can be delegated to an external authentication provider such as an LDAP directory.
The Authorization Manager
This component is invoked by other SAP HANA database components to check whether the
user has the required privileges to execute the requested operations.
SAP HANA allows granting of privileges to users or roles. A privilege grants the right to
perform a specified operation (such as create, update, select, execute, and so on) on a specified
object (for example a table, view, SQLScript function, and so on).
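As an illustration, privileges are granted with standard SQL statements (a minimal sketch; the role, user, and table names below are hypothetical):
-- Create a role, grant it an object privilege, then grant the role to a user
CREATE ROLE REPORTING_USER;
GRANT SELECT ON SCHEMA1.SALES_ORDERS TO REPORTING_USER;
GRANT REPORTING_USER TO USER1;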
The SAP HANA database supports Analytic Privileges that represent filters or hierarchy
drilldown limitations for analytic queries. Analytic privileges grant access to values with a certain
combination of dimension attributes. This is used to restrict access to a cube with some values of
the dimensional attributes.
Request Processing and Execution Control:
The client requests are analyzed and executed by the set of components summarized as
Request Processing and Execution Control. The Request Parser analyses the client request and
dispatches it to the responsible component. The Execution Layer acts as the controller that invokes
the different engines and routes intermediate results to the next execution step.
SQL Processor:
Incoming SQL requests are received by the SQL Processor. Data manipulation
statements are executed by the SQL Processor itself.
The SAP HANA database has its own scripting language named SQLScript that is designed
to enable optimizations and parallelization. SQLScript is a collection of extensions to SQL.
SQLScript is based on side effect free functions that operate on tables using SQL queries for
set processing. The motivation for SQLScript is to offload data-intensive application logic into the
database.
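A minimal SQLScript sketch of this idea (the procedure and table names are hypothetical): the aggregation runs inside the database as a set operation instead of being looped over in the application.
CREATE PROCEDURE GET_SALES_SUMMARY (OUT RESULT TABLE (REGION NVARCHAR(20), TOTAL DECIMAL(15,2)))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  -- Side-effect free, set-based logic executed close to the data
  RESULT = SELECT REGION, SUM(AMOUNT) AS TOTAL
           FROM SCHEMA1.SALES_ORDERS
           GROUP BY REGION;
END;
The procedure can then be called with CALL GET_SALES_SUMMARY(?) and the result table is returned to the client.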
Multidimensional Expressions (MDX):
MDX is a language for querying and manipulating the multidimensional data stored in OLAP
cubes.
Incoming MDX requests are processed by the MDX engine and also forwarded to the Calc
Engine.
Planning Engine:
Planning Engine allows financial planning applications to execute basic planning operations
in the database layer. One such basic operation is to create a new version of a data set as a copy of
an existing one while applying filters and transformations. For example: planning data for a new
year is created as a copy of the data from the previous year.
Another example for a planning operation is the disaggregation operation that distributes
target values from higher to lower aggregation levels based on a distribution function.
Calc engine:
The SAP HANA database features such as SQLScript and Planning operations are
implemented using a common infrastructure called the Calc engine.
SQLScript, MDX, planning models, and domain-specific models are converted into calculation models. The Calc Engine creates a logical execution plan for calculation models. The calculation engine will break up a model, for example some SQLScript, into operations that can be processed in parallel.
Transaction Manager:
In HANA database, each SQL statement is processed in the context of a transaction. New sessions
are implicitly assigned to a new transaction. The Transaction Manager coordinates database
transactions, controls transactional isolation and keeps track of running and closed transactions.
When a transaction is committed or rolled back, the transaction manager informs the involved
engines about this event so they can execute necessary actions.
The transaction manager also cooperates with the persistence layer to achieve atomic and durable
transactions.
Metadata Manager:
Metadata can be accessed via the Metadata Manager component. In the SAP HANA
database, metadata comprises a variety of objects, such as definitions of relational tables, columns,
views, indexes and procedures.
Metadata of all these types is stored in one common database catalog for all stores. The
database catalog is stored in tables in the Row Store. The features of the SAP HANA database
such as transaction support and multi-version concurrency control, are also used for metadata
management.
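For example, the catalog can be queried through system views such as SYS.TABLES (the schema name below is hypothetical):
SELECT TABLE_NAME, IS_COLUMN_TABLE
FROM SYS.TABLES
WHERE SCHEMA_NAME = 'SCHEMA1';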
In the center of the figure, you see the different data stores of the SAP HANA database. A store is a sub-system of the SAP HANA database that includes in-memory storage, as well as the components that manage that storage.
Persistence Layer:
The Persistence Layer is responsible for durability and atomicity of transactions. This layer ensures
that the database is restored to the most recent committed state after a restart and that transactions
are either completely executed or completely undone. To achieve this goal in an efficient way, the
Persistence Layer uses a combination of write-ahead logs, shadow paging and savepoints.
The Persistence Layer offers interfaces for writing and reading persisted data. It also contains the
Logger component that manages the transaction log. Transaction log entries are written explicitly by
using a log interface or implicitly when using the virtual file abstraction.
Because computer memory is structured linearly, there are two options for storing a table's cell values in contiguous memory locations: row storage and column storage.
Traditional databases store data simply in rows. The HANA in-memory database stores data in both rows and columns. It is this combination of both storage approaches that produces the speed, flexibility, and performance of the HANA database.
Better Compression:
Columnar data storage allows highly efficient compression because most columns contain only a few distinct values (compared to the number of rows).
Row-based storage is preferable when:
The application needs to process only a single record at a time (many selects and/or updates of single records).
The table has a small number of rows (e.g. configuration tables, system tables).
Column-based storage is preferable for analytic applications, where aggregations are used and fast search and processing are required. In row-based tables, all data in a row has to be read even though the requirement may be to access data from only a few columns; hence such queries on huge amounts of data take a lot of time.
In columnar tables, this information is stored physically next to each other, significantly increasing the speed of certain data queries.
The following example shows the different usage of column and row storage, and positions them
relative to row and column queries. Column storage is most useful for OLAP queries (queries using
any SQL aggregate functions) because these queries get just a few attributes from every data entry.
But for traditional OLTP queries (queries not using any SQL aggregate functions), it is more advantageous to store all attributes side-by-side in row tables. HANA combines the benefits of both approaches.
To enable fast on-the-fly aggregations and ad-hoc reporting, and to benefit from compression mechanisms, it is recommended that transaction data be stored in a column-based table.
The SAP HANA database allows joining row-based tables with column-based tables. However, it is more efficient to join tables that are located in the same store. For example, master data that is frequently joined with transaction data should also be stored in column-based tables. A sketch of both table types is shown below.
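A minimal sketch of how the storage type is chosen at table-creation time (hypothetical table and column names):
-- Column store: suited for analytics, aggregations and compression
CREATE COLUMN TABLE SCHEMA1.SALES_ORDERS (
  ORDER_ID INTEGER PRIMARY KEY,
  REGION NVARCHAR(20),
  AMOUNT DECIMAL(15,2)
);
-- Row store: suited for small tables with frequent single-record access
CREATE ROW TABLE SCHEMA1.APP_CONFIG (
  PARAM_NAME NVARCHAR(50) PRIMARY KEY,
  PARAM_VALUE NVARCHAR(100)
);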
Introduction:
SAP HANA is a leading in-memory database and data management platform, specifically developed
to take full advantage of the capabilities provided by modern hardware to increase application
performance. By keeping all relevant data in main memory (RAM), data processing operations are
significantly accelerated.
"SAP HANA has become the fastest growing product in SAP's history."
A fundamental SAP HANA resource is memory. Understanding how the SAP HANA system
requests, uses and manages this resource is crucial to the understanding of SAP HANA. SAP
HANA provides a variety of memory usage indicators, to allow monitoring, tracking and alerting.
This article explores the key concepts of SAP HANA memory utilization, and shows how to
understand the various memory indicators.
Memory Concepts:
As an in-memory database, it is critical for SAP HANA to handle and track its memory consumption
carefully and efficiently. For this purpose, the SAP HANA database pre-allocates and manages its
own memory pool and provides a variety of memory usage indicators to allow monitoring.
SAP HANA tracks memory from the perspective of the host. The most important concepts are as
follows:
Physical memory:
The amount of (system) physical memory available on the host.
You can use the M_HOST_RESOURCE_UTILIZATION view to explore the amount of Physical
Memory as follows:
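For example (a sketch based on the columns of this view, which are also used further below):
select HOST,
round((USED_PHYSICAL_MEMORY + FREE_PHYSICAL_MEMORY)/1024/1024/1024, 2) as "Physical Memory GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION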
This pool of allocated memory is pre-allocated from the operating system over time, up to a
predefined global allocation limit, and is then efficiently used as needed by the SAP HANA database
code. More memory is allocated to the pool as used memory grows. If used memory nears the
global allocation limit, the SAP HANA database may run out of memory if it cannot free memory.
The default allocation limit is 90% of available physical memory, but this value is configurable.
To find the global allocation limit of the database, run below SQL query:
select HOST, round(ALLOCATION_LIMIT/1024/1024/1024, 2) as "Allocation Limit GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION
Example:
A single-host system has 100 GB physical memory. Both the global allocation limit and the individual
process allocation limits are 90% (default values). This means the following:
Collectively, all processes of the HANA database can use a maximum of 90 GB.
Used memory:
The heap and shared memory are the most important parts of used memory. They hold table data and serve as working space for computations and database management.
You can use the M_SERVICE_MEMORY view to explore the amount of SAP HANA Used Memory
as follows:
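A sketch of such a query, assuming the TOTAL_MEMORY_USED_SIZE column of M_SERVICE_MEMORY:
select round(sum(TOTAL_MEMORY_USED_SIZE)/1024/1024/1024, 2) as "Used Memory GB"
from PUBLIC.M_SERVICE_MEMORY
The following query, in contrast, reports only the memory consumed by row-store tables: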
SELECT round(sum(USED_FIXED_PART_SIZE +
USED_VARIABLE_PART_SIZE)/1024/1024) AS "Row Tables MB"
FROM M_RS_TABLES;
When the SAP HANA database runs out of allocated memory, it may also unload rarely used
columns to free up some memory. Therefore, if it is important to precisely measure the total, or
"worst case", amount of memory used for a particular table, it is best to ensure that the table is fully
loaded first by executing the following SQL statement:
LOAD table_name ALL.
To examine the memory consumption of columnar tables, you can use the M_CS_TABLES and
M_CS_COLUMNS views.
The following examples show how you can use these views to examine the amount of memory
consumed by a specific table. You can also see which of its columns are loaded and the
compression ratio that was accomplished.
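For example, a sketch of such a query against M_CS_TABLES (replace the placeholder with your table name):
SELECT TABLE_NAME, LOADED,
round(MEMORY_SIZE_IN_TOTAL/1024/1024, 2) AS "Memory MB",
RECORD_COUNT
FROM M_CS_TABLES
WHERE TABLE_NAME = '<table_name>'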
Note: The M_CS_TABLES and M_CS_COLUMNS views contain a lot of additional information
(such as cardinality, main-storage versus delta storage and more). For example, use the following
query to obtain more information:
SELECT * FROM M_CS_COLUMNS WHERE TABLE_NAME = '<table_name>' AND COLUMN_NAME = '<column_name>'
For instance, you can execute the following SQL query, which lists all row tables of schema "SYS"
by descending size:
SELECT SCHEMA_NAME, TABLE_NAME, round((USED_FIXED_PART_SIZE +
USED_VARIABLE_PART_SIZE)/1024/1024, 2) AS "MB Used"
FROM M_RS_TABLES
WHERE schema_name = 'SYS' ORDER BY "MB Used" DESC, TABLE_NAME
Example 1:
You have a server with 512GB, but purchased an SAP HANA license for only 384 GB. Set the
global_allocation_limit to 393216 (384 * 1024 MB).
Example 2:
You have a distributed HANA system on four hosts with 512GB each, but purchased an SAP HANA
license for only 768 GB. Set the global_allocation_limit to 196608 (192 * 1024 MB on each host).
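The limit is set through the global_allocation_limit parameter in the memorymanager section of global.ini. A sketch for Example 1 (the value is in MB):
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('memorymanager', 'global_allocation_limit') = '393216'
WITH RECONFIGURE;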
Resident memory:
Resident memory is the physical memory actually in operational use by a process.
Over time, the operating system may "swap out" some of a process' resident memory, according to
a least-recently-used algorithm, to make room for other code or data. Thus, a process' resident
memory size may fluctuate independently of its virtual memory size. In a properly sized SAP HANA
appliance there is enough physical memory, and thus swapping is disabled and should not be
observed.
To display the size of the Physical Memory and Resident part, you can use the following SQL
command:
select HOST, round((USED_PHYSICAL_MEMORY +
FREE_PHYSICAL_MEMORY)/1024/1024/1024, 2) as "Physical Memory GB",
round(USED_PHYSICAL_MEMORY/1024/1024/1024, 2) as "Resident GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION
Memory Sizing:
Memory sizing is the process of estimating, in advance, the amount of memory that will be required
to run a certain workload on SAP HANA. To understand memory sizing, you will need to answer the
following questions:
1. What is the size of the data tables that will be stored in SAP HANA?
You may be able to estimate this based on the size of your existing data, but unless you precisely
know the compression ratio of the existing data and the anticipated growth factor, this estimate may
only be partially meaningful.
2. What is the expected compression ratio that SAP HANA will apply to these tables?
The SAP HANA column store automatically uses a combination of various advanced compression algorithms (dictionary, RLE, sparse, and more) to best compress each table column separately. The achieved compression ratio depends on many factors, such as the nature of the data, its organization and data types, the presence of repeated values, the number of indexes (SAP HANA generally requires fewer indexes), and so on.
3. How much extra working memory will be required for DB operations and temporary
computations?
The amount of extra memory will somewhat depend on the size of the tables (larger tables will
create larger intermediate result-tables in operations like joins), but even more on the expected
work load in terms of the number of users and the concurrency and complexity of the analytical
queries (each query needs its own workspace).
SAP Notes 1514966, 1637145 and 1736976 provide additional tools and information to help you
size the required amount of memory, but the most accurate method is ultimately to import several
representative tables into a SAP HANA system, measure the memory requirements, and extrapolate
from the results.
For even more details, check out the new Memory Overview feature of the SAP HANA studio. To
access it, right click on a system in the Systems View, and select "Open Memory Overview" in the
context menu, as follows:
Note: To view the Memory Overview, you need monitoring privileges. For example, use the following SQL statement (replace 'youruser' with the actual user name):
call _SYS_REPO.GRANT_ACTIVATED_ROLE('sap.hana.admin.roles::Monitoring', 'youruser')
Summary:
SAP HANA maintains many system views and memory indicators, to provide a precise way to
monitor and understand the SAP HANA memory utilization. The most important of these indicators
is Used Memory and the corresponding historic snapshots. In turn, it is possible to drill down into
very detailed reports of memory utilization using additional system views, or by using the convenient
Memory Overview from the SAP HANA studio.
Since SAP HANA contains its own memory manager and memory pool, external indicators, like the
host-level Resident Memory size, or the process-level virtual and resident memory sizes, can be
misleading when estimating the real memory requirements of a SAP HANA deployment.
Types of Schemas
There are three types of schemas: user-defined schemas, SLT-derived schemas, and system-defined schemas. The most important system-defined schemas are described below.
_SYS_BIC:
This schema contains all the column views of activated objects. When the user activates an Attribute View/Analytic View/Calculation View/Analytic Privilege/Procedure, the respective run-time objects are created under _SYS_BIC/Column Views.
_SYS_REPO:
Whatever objects exist in the system are available in the repository. This schema contains the list of activated objects, inactive objects, package details, runtime object information, etc.
The _SYS_REPO user must also have the SELECT privilege with grant option on the data schema.
Read more about "GRANT SELECT PRIVILEGE ON _SYS_REPO"
_SYS_BI:
This schema stores all the metadata of created column Views. It contains the tables for created
Variables, Time Data (Fiscal, Gregorian), Schema Mapping and Content Mapping tables.
_SYS_STATISTICS:
This schema contains the data collected by the statistics server about system status, performance, and resource consumption.
_SYS_XS:
This schema is used for SAP HANA Extended Application Services.
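You can list all schemas in the system, including these system schemas, with a simple catalog query:
SELECT SCHEMA_NAME, SCHEMA_OWNER
FROM SYS.SCHEMAS
ORDER BY SCHEMA_NAME;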
In-memory computing is safe: The SAP HANA database holds the bulk of its data in
memory for maximum performance, but still uses persistent storage (disk memory) to
provide a fallback in case of failure.
While the first three ACID requirements (atomicity, consistency, and isolation) are not affected by the in-memory concept, durability is a requirement that cannot be met by storing data in main memory alone.
Main memory is volatile storage. That is, it loses its content when it is without electrical power. To make data persistent, it must reside on non-volatile storage such as hard drives, SSDs, or flash devices.
The main memory (RAM) in SAP HANA is divided into pages. When a transaction changes
data, the corresponding pages are marked and written to disk storage in regular intervals.
In addition, a database log captures all changes made by transactions. Each committed
transaction generates a log entry that is written to disk storage. This ensures that all
transactions are permanent.
Figure below illustrates this. SAP HANA stores changed pages in savepoints, which are
asynchronously written to disk storage in regular intervals (by default every 5 minutes).
The log is written synchronously. That is, a transaction does not return before the
corresponding log entry has been written to persistent storage, in order to meet the
durability requirement, as described above.
After a power failure, the database can be restarted like a disk-based database.
The database pages are restored from the savepoints, and then the database logs are
applied (rolled forward) to restore the changes that were not captured in the savepoints.
This ensures that the database can be restored in memory to exactly the same state as
before the power failure.
Savepoint:
A savepoint is the point at which data is written to disk. This is the point from which the database engine can start applying logged changes during recovery after an unexpected shutdown or crash.
The database administrator determines the frequency of savepoints.
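The savepoint interval is controlled by the savepoint_interval_s parameter in the persistence section of global.ini; a sketch of setting it to the 5-minute default mentioned above:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('persistence', 'savepoint_interval_s') = '300'
WITH RECONFIGURE;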
Data backups
Contain the current payload of the data volumes (data and undo information).
Log backups
Contain the content of closed log segments; the backup catalog is also written as a log backup.
The SAP HANA studio enables technical users to manage the SAP HANA database, to create and manage user authorizations, to create new or modify existing models of data, and so on.
It is a client tool which can be used to access local or remote HANA systems.
Supported Platforms:
The SAP HANA studio runs on the Eclipse platform 3.6. We can use the SAP HANA studio on the
following platforms:
Microsoft Windows x32 and x64 versions of: Windows XP, Windows Vista, Windows
7
System Requirements:
Java JRE 1.6 or 1.7 must be installed to run the SAP HANA studio. The Java runtime must be
specified in the PATH variable. Make sure to choose the correct Java variant (32-bit or 64-bit) for the installation of SAP HANA studio.
HANA Client:
HANA Client is the piece of software that enables you to connect any other entity, including non-native applications, to a HANA server. This "other" entity can be, say, an NW Application Server, an IIS server, etc.
The HANA Client installation also provides JDBC and ODBC drivers. This enables applications written in .NET, Java, etc. to connect to a HANA server and use the server as a remote database. So, consider the client as the primary connection enabler for the HANA server.
HANA Client is installed separately from the HANA studio.
Installation Paths:
If we do not specify an Installation Path during installation, the following default values apply:
Go to start menu
Start > All Programs > SAP HANA > SAP HANA Studio
The SAP HANA studio starts.
In Linux:
Open a shell, navigate to the SAP HANA studio installation directory, and run the hdbstudio executable.
Modeler perspective:
Provides views and menu options that enable you to define your analytic model, for
example, attribute, analytic, and calculation views of SAP HANA data.
Catalog
The Catalog represents SAP HANA's data dictionary, i.e. all data structures, tables, and data that can be used.
All the physical tables and views can be found under the Catalog node.
Content
The Content node represents the design-time repository, which holds all information about data models created with the Modeler.
Physically, these models are stored in database tables that are also visible under Catalog.
The models are organized in packages; the Content node just provides a different view on the same physical data.
Enter the HANA system details, i.e. the hostname and instance number, and click Next.
Enter the database username and password to connect to the SAP HANA database. Click Next and then Finish.
Next Article :
Download SAP HANA Studio
That's why we need separate reporting tools that can connect to SAP HANA, take the data from modeling views, and show it in a nice, easy-to-understand format.
Currently there are a number of tools/applications that can be used for reporting on SAP HANA. SAP suggests using the frontends of the SAP BusinessObjects BI Suite. Recommended frontend tools include SAP BusinessObjects Crystal Reports, Analysis Office, and Explorer.
Reporting on SAP HANA can be done in most of SAP's BusinessObjects BI Suite of applications, or in tools that can create and consume MDX queries and data.
Microsoft Excel can also be used for reporting on SAP HANA.
You can use Excel to connect to SAP HANA, access the modeling views and slice and dice the data
to create meaningful reports.
Check Build Reports in Excel using SAP HANA Data
Note: You can think of the SAP BusinessObjects BI Suite as a collection of different front-end tools provided by SAP. The most important of them include:
SAP Lumira
Below image shows the different connectivity options supported for frontend tools and SAP HANA.
Prerequisite:
In order to make MDX connections to SAP HANA, the SAP HANA Client software is needed. This is separate from the Studio and must be installed on the client system.
Like the Studio itself, it can be found on the SAP Service Marketplace. Additionally, SAP provides a developer download of the client software on SDN, at the following link:
HANA Developer Edition-SAP HANA Client
Note: Download and install the appropriate SAP HANA Client for your operating system version and Microsoft Office installation.
If you are using a 64-bit operating system in combination with a 32-bit Office installation, then you'll need the 32-bit version of the SAP HANA Client software.
Once the software is installed, no shortcut is created on your desktop and no entry appears in your Start menu, so don't be surprised not to see anything to run.
1. Open Excel.
2. Go to the Data tab, and click on From Other Sources, then From Data Connection Wizard, as shown:
3. Select Other/Advanced, then SAP HANA MDX Provider, and then click Next.
4. The SAP HANA Logon dialog will appear; enter your host, instance, and login information (the same information you use to connect to SAP HANA with the Studio).
5. Click on Test Connection to validate the connection. If the test succeeds, click on OK to choose the modeling views to which you want to connect. Select the package which contains the modeling views.
6. Click on the name of the analytic view or calculation view. Click Finish.
7. On this screen there is a checkbox Save password in file. This avoids having to type in the SAP HANA password every time the Excel file is opened, but the password is stored in the Excel file, which is a little less secure.
8. Click on the Finish button to create the connection to SAP HANA and your view.
9. Now that you have established your connection to the SAP HANA database and specified the data that you want to use, you can start exploring it in Microsoft Excel, using a pivot table.
Congratulations! You now have your reporting application available in Microsoft Excel.
Mining social media data for customer feedback is one of the greatest untapped
opportunities for customer analysis in many organizations today.
As many are aware, twenty-first century corporations are facing a crisis. Many corporations have
been accurately and comprehensively storing data for years. The data comes in a variety of forms: social media posts, email, blogs, news, feedback, tweets, business documents, etc.
It is very important to extract meaningful information without having to read every single sentence.
Now, what is meaningful information? The extraction process should identify the "who", "what", "where", "when", and "how much" (among other things) from this data.
For example, you can use social media data to find out what people are saying about your brand or products.
Before understanding Text Analysis, you will have to first understand Structured Data and
Unstructured Data.
Structured Data:
Structured data has the advantage of being easily entered, stored, queried, and analyzed.
Unstructured Data:
The phrase "unstructured data" usually refers to information that doesn't reside in a traditional
row-column database.
Unstructured data files often include text and multimedia content. Examples include e-mail messages, word processing documents, videos, photos, audio files, presentations, webpages, and many other kinds of business documents.
Digging through unstructured data can be cumbersome and costly. Email is a good example of unstructured data: it is indexed by date, time, sender, recipient, and subject, but the body of an email remains unstructured.
Other examples of unstructured data include books, documents, medical records, and social media
posts.
Text Analysis is the process of analyzing unstructured text, extracting relevant information
and then transforming that information into structured information that can be leveraged in
different ways.
Text Analysis refers to the ability to do Natural Language Processing: to linguistically understand the text and extract structured information, such as entities and sentiments, from it.
Fuzzy Search
Full text search is designed to perform linguistic (language-based) searches against text and
documents stored in your database.
In a full-text search, the search engine examines all of the words in every stored document as it tries
to match search criteria (text specified by a user).
However, when the number of documents to search is potentially large, the problem of full-text
search is often divided into two tasks: indexing and searching.
The indexing stage will scan the text of all the documents and build a list of search terms (often
called an index). In the search stage, when performing a specific query, only the index is referenced,
rather than the text of the original documents.
The indexer will make an entry in the index for each term or word found in a document, and possibly
note its relative position within the document.
Conceptually, full-text indexes support searching on columns in the same way that indexes support
searching through books.
Fuzzy Search:
Also known as approximate string matching.
Fuzzy search is the technique of finding strings that match a pattern approximately (rather than
exactly).
It is a type of search that will find matches even when users misspell words or enter only partial words for the search.
All that tech talk is fine, but how can Text Analysis help companies make more money?
How many people are showing interest in buying this product?
Are there any negative comments or rumors going around about this product?
Market research and social media monitoring, i.e. what people are saying about my brand or
products
Voice of the Customer/ Customer Experience Management
Which of the hotels in India get great reviews for their room service?
Competitive Intelligence
SAP HANA Text Analysis has market-leading, out-of-the-box predefined entity types that are
packaged as part of the platform. Looking at a clause, sentence, paragraph, or document, the
technology can identify the "who", "what", "where", "when" and "how much" and classify it
accordingly.
For example, in the sentence "India celebrates Independence Day on 15th August", the analysis can identify the country, holiday, and month using HANA's predefined core extraction.
If you have read till the end, you should have a clear understanding of Text Analysis.
If you have any doubts or questions, please leave a comment.
In SAP HANA Text Analysis - One of the Coolest Features of SAP HANA, we explained what Text Analysis is and why it is so important for business nowadays.
In this article we will show you how you can easily implement Text Analysis in SAP HANA.
Use-case:
Suppose I am planning to buy a new iPhone 5 and I want to know what reviews it is getting on the internet. I want to get a pulse of the iPhone 5 before I buy it, not just from critics but from actual users like me.
I also want to search blogs, news, and social media to find out whether people's reviews are positive, negative, or neutral.
Let's see how we can do this with the help of SAP HANA Text Analysis.
Prerequisites:
Download unstructured data (iPhone-News.pdf)
To save time, I have created a PDF file which contains news and blog articles on the iPhone 5. Download it from here.
Create a table in SAP HANA which will contain this unstructured data. Replace <SCHEMA_NAME>
with your schema.
CREATE COLUMN TABLE <SCHEMA_NAME>."IPHONE_NEWS" (
"File_Name" NVARCHAR(20),
"File_Content" BLOB ,
PRIMARY KEY ("File_Name"));
Note: Check below article to configure Python before running the Python code.
Power of Python Integrated with SAP HANA
import dbapi
# assume HANA host id is abcd1234 and instance no is 00
# and SAP HANA user id is USER1 and password is Password1
conn = dbapi.connect('abcd1234', 30015, 'USER1', 'Password1')
#Check if database connection was successful or not
print conn.isconnected()
#Open a cursor
cur = conn.cursor()
#Open file in read-only and binary
file = open('iPhone-News.pdf', 'rb')
#Save the content of the file in a variable
content = file.read()
#Save the content to the table - Replace SCHEMA1 with your schema
cur.execute("INSERT INTO SCHEMA1.IPHONE_NEWS VALUES(?,?)", ('iPhone-News.pdf', content))
print 'pdf file uploaded to HANA'
#Close the file
file.close()
#Close the cursor
cur.close()
#Close the connection
conn.close()
After executing the above Python script the pdf data will be uploaded in HANA table.
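The full-text index referred to below is created with a statement along these lines (a sketch; the configuration name EXTRACTION_CORE_VOICEOFCUSTOMER is an assumption consistent with the sentiment types queried later):
CREATE FULLTEXT INDEX "PDF_FTI" ON <SCHEMA_NAME>."IPHONE_NEWS"("File_Content")
CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
TEXT ANALYSIS ON;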
This will create a full text index called "PDF_FTI" (you can use any name) on the BLOB column
"File_Content" of the table "IPHONE_NEWS".
With the execution of this statement, a new column table called $TA_PDF_FTI ($TA_<Index_Name>) is created, which contains the results of our text analysis process.
Note: If you do not see this table under your schema, try refreshing the schema.
That's it - Text Analysis is implemented. Everything else is done by SAP HANA.
Further Analysis:
Two columns of the table $TA_PDF_FTI are very important for us.
TA_TOKEN
This column contains the extracted entity or element (for example, an identifiable person, place,
topic, organization, or sentiment).
TA_TYPE
This is the category the entity falls under. For example PERSON, PLACE, PRODUCT etc.
To learn people's reviews and sentiments about the iPhone, we can query the table $TA_PDF_FTI like this:
SELECT "TA_TYPE", ROUND("SENTIMENT_VALUE"/ "TOTAL_SENTIMENT_VALAUE" * 100,2)
AS "SENTIMENT_VALAUE_PERCENTAGE"
FROM
(
SELECT "TA_TYPE", SUM("TA_COUNTER") AS "SENTIMENT_VALUE"
FROM <SCHEMA_NAME>."$TA_PDF_FTI"
where TA_TYPE in('WeakPositiveSentiment','StrongPositiveSentiment','NeutralSentiment',
'WeakNegativeSentiment','StrongNegativeSentiment','MajorProblem','MinorProblem')
GROUP BY "TA_TYPE"
) AS TABLE1,
(
SELECT SUM("TA_COUNTER") AS "TOTAL_SENTIMENT_VALAUE"
FROM <SCHEMA_NAME>."$TA_PDF_FTI"
where TA_TYPE in('WeakPositiveSentiment','StrongPositiveSentiment','NeutralSentiment',
'WeakNegativeSentiment','StrongNegativeSentiment','MajorProblem','MinorProblem')
) AS TABLE2
The result shows that a higher percentage of people are giving positive reviews of this product.
Good, now I can go ahead and buy my new iPhone 5.
What's Next:
We can use this full-text index table to get a lot of information beyond just sentiments.
Let's take a look at the structure of this table.
Column Name (key columns marked) - Data Type - Description
File_Name (key) - NVARCHAR(20) - Same as in the source table (in this case, NVARCHAR(20))
RULE (key) - NVARCHAR(200)
COUNTER (key) - BIGINT
TOKEN - NVARCHAR(250)
LANGUAGE - NVARCHAR(2)
TYPE - NVARCHAR(100)
NORMALIZED - NVARCHAR(250)
STEM - NVARCHAR(300)
PARAGRAPH - INTEGER
SENTENCE - INTEGER
CREATED_AT - TIMESTAMP - Creation timestamp
Hope you liked this article. If you have any questions, please leave a comment.
2. Save the tweets into the SAP HANA system using a JDBC connection.
3. Run text analysis on the tweets stored in SAP HANA.
Prerequisites:
Register an Application at Twitter Developers:
As we are going to use the Twitter API to extract data from Twitter, we need to create an application at Twitter Developers; we will need the application's authentication information to invoke the APIs later.
If you haven't used Twitter before, you first need to create a Twitter account.
You can register an application and create your OAuth tokens at Twitter Developers by following the steps below.
1. Log on with your Twitter account, click your profile picture, and click on "My applications".
2. Click on the option to create a new application.
3. Provide the information. You can give any name and description of your choice.
4. Follow the instructions and finally click on "Create your Twitter application".
5. Scroll down the screen and you will see the button "Create my access token"; click it to generate the token.
6. After that, you will be able to see the OAuth settings like below; save the values of Consumer key, Consumer secret, Access token, and Access token secret.
Now we are ready!! Let's fetch data from Twitter and save it in HANA.
1. Download the JAVA project "TwitterAnalysis.zip" from here and save it to your local computer.
2. Open JAVA Eclipse (or HANA Studio).
3. Choose to import an existing project from an archive file.
4. Click on browse and select the "TwitterAnalysis.zip" file you downloaded in step 1. Click on finish.
5. Now you will be able to see the project with a structure like this:
HDBConnection.java - Builds the JDBC connection to HANA.
Configurations.java - The public interface for the network and Twitter authentication configurations; override it with your own account or settings.
Tweet.java - The Java bean class for the tweet objects.
TweetDAO.java - The data access object.
ngdbc.jar - The SAP HANA JDBC library.
twitter4j-core-3.0.3.jar - The Twitter4J library for Twitter services in Java.
Open the file Configurations.java in your project. Basically, there are four categories of settings you can override:
Network Proxy Settings:
The proxy host and port; set HAS_PROXY to false if you do not need to use a proxy. To find the proxy host, open a command prompt and type "ping proxy"; this will show you the proxy host.
Search Term:
We will search Twitter based on the search term "HANA Training", because we want to know what people are saying about HANA Training on Twitter. You can always replace it with your own term if you are interested in other topics.
You will see the message "Connection to Twitter Successfully!" followed by your Twitter user ID in the console, as the screenshot below shows.
After that, you can run the data preview in HANA studio and see the contents of the table TWEETS
in your schema like this:
To run the text analysis, the only thing we need to do is create a full-text index on the column of the table we want to analyze; HANA will then perform the linguistic analysis, entity extraction, and stemming for us and save the results in a generated table $TA_YOUR_INDEX_NAME in the same schema.
After that, you can build views on top of the table and leverage all the existing analysis tools around HANA for visualization and even predictive analysis.
Reference: This example was taken from SAP Startup Focus Program.
If you are from a startup interested in developing on top of the in-memory database and application platform SAP HANA, then you may check the SAP Startup Focus program for help.
In SAP HANA, you can call the fuzzy search by using the CONTAINS predicate with the FUZZY
option in the WHERE clause of a SELECT statement.
Syntax:
SELECT * FROM <tablename>
WHERE CONTAINS (<column_name>, <search_string>, FUZZY (0.8))
A search with FUZZY(x) returns all values that have a fuzzy score greater than or equal to x.
You can request the score in the SELECT statement by using the SCORE() function.
You can sort the results of a query by score in descending order to get the best records first (the
best record is the record that is most similar to the user input). When a fuzzy search of multiple
columns is used in a SELECT statement, the score is returned as an average of the scores of all
columns used.
So not only does it find a "fault tolerant" match, it also puts a score behind it.
Example:
When searching with 'SAP', a record like 'SAP AG' gets a high score, because the term 'SAP' exists
in the texts. A record like "BSAP Corp" gets a lower score, because 'SAP' is only a part of the longer
term 'BSAP Corp'.
The output of the fuzzy search contains 5 entries. Based on the fuzzy search factor (which is 0.7 in this case), it also includes similar words, in this case "SAP AG", "BSAP Corp", etc.
Use Case
A call center agent who receives an order by phone needs to know the customer number or, in the
case of a new entry, the system has to inform him about a potentially duplicate entry.
There are chances that name can be misspelled or there can be different person with same name
but different spellings. For example "Jimi Hendricks" can be misspelled as "Jimy Hendricks" or "Jimi
Hendrix". Or the address can also be spelled differently. For example "Berliner Platz 43" or "Berliner
Plats 43" or "Berliner Platz"
Without fuzzy search, the system can only find exact matches, i.e. only entries that are 100% identical. But with fuzzy search, the system can find the misspelled words too.
The output of an exact search will contain only one entry, which contains the exact match "Jimi".
The output of the fuzzy search contains 4 entries. Based on the fuzzy search factor (which is 0.7 in this case), it also includes similar words, in this case "Jimy".
We can also do a fuzzy search on two columns, for example First Name and Last Name.
SQL Query:
SELECT SCORE() AS score, * FROM <Schema_Name>."CUSTOMERS"
WHERE
CONTAINS(FIRST_NAME, 'Jimi', FUZZY(0.7))
and CONTAINS(LAST_NAME, 'Hendricks', FUZZY(0.7))
ORDER BY score DESC;
The output contains 3 entries. Based on the fuzzy search factor (which is 0.7 in this case), it also includes similar names, in this case "Jimy Hendricks" and "Jimi Hendrix".
SAP BW on HANA
SAP BW on HANA is the next wave of SAP's in-memory technology vision that enables SAP
NetWeaver BW to use SAP HANA as a fully functioning in-memory database.
Running SAP BW on HANA results in dramatically improved performance, simplified administration, and a streamlined IT landscape, resulting in lower total cost of ownership.
SAP BW on HANA is nothing but SAP's existing NetWeaver BW data warehouse, running on SAP
HANA.
SAP now supports SAP HANA as the underlying database for the NetWeaver BW Data
Warehouse.
Because SAP HANA is much faster than regular relational databases like Oracle or Microsoft SQL
Server, the data warehouse performs much faster.
The purpose of SAP BW on HANA is to combine the power of both.
Flexibility:
Classic SAP BW lacks granular-level detail. In SAP BW, data is aggregated and materialized, so users can't get data the way they need it or at the granularity they require.
Performance:
Every report has to be tuned for acceptable performance.
Performance boost for Data Load processes for decreased data latency
Flexible - combine EDW with HANA-native data for real-time insights and decision
making
Data persistency layers are cut out, reducing administration effort.
SAP HANA helps your SAP NetWeaver Business Warehouse run better than ever. HANA enables
you to analyze large amounts of data, from virtually any source, in near real time, making it possible
to access reports with up-to-the-minute information. As an example, having the most current order
and logistics information makes it possible to manage your inventory more efficiently, and to predict
Available to Promise (ATP) more accurately.
Whether HANA is also cheaper in terms of Total Cost of Acquisition (TCA) depends on where you
are in your procurement lifecycle. Do you have to replace hardware anyhow? Can you free up
resources on expensive UNIX equipment? Can you reuse the Oracle or DB2 licenses elsewhere or
save on maintenance revenue? Do you have to complete a SAP system upgrade as well? Do you
save substantially on storage costs? Many times, the answer is yes!
With HANA there is no need to retrain end users familiar with BW. There is still the same BW
application process but, with BW on SAP HANA, it is now possible to run queries, updates and
reports much faster than before. Expert users do not need to get retrained because they can
continue to use their current BI or other frontend tools.
In addition, HANA supports the BW Analysis Authorization Concept, and can be integrated with
NetWeaver Identity Management to ensure security remains intact.
BW includes two model types optimized for best performance on the HANA platform. Existing BW
models can be used, in most cases, with only a parameter setting change.
Existing BW client tools, like SAP Business Explorer, are supported by BW on SAP HANA. The
HANA database also supports direct clients like Microsoft Excel and SAP's Business Objects BI
(Business Intelligence) tools.
Remove SAP BW aggregates (they are only overhead in SAP HANA, and this is done automatically for you).
Convert your SAP BW cubes and DSOs to SAP HANA-optimized cubes and DSOs (a simple process that improves performance and reduces space).
Note that everything is optional.
In our previous article, SAP BW on HANA, we explored the different aspects of SAP BW on HANA.
In this article we will talk about customer success stories. Let us look at some numbers and facts for reference. These figures were reported by customers that already have BW on HANA operating in their IT landscape.
Red Bull
Red Bull started to run its SAP NetWeaver 7.3 Business Warehouse on HANA in 2011.
The migration:
Database size before migration: 1.5 TB; reduced by 80% after migration
The result:
For DSO activation of 5.2 million records, performance improved by a factor of 32x.
Before: 21 hours 40 minutes - After: 40 minutes.
Data load into a write-optimized DSO and InfoCube upload with 50K invoice headers + 350K items: load acceleration by a factor of 2.7x. Before: 1 hour 30 minutes - After: 30 minutes.
Query Performance
Query with aggregated result set: performance dramatically improved by a factor of 471x for queries on aggregated data. Before: 471 seconds - After: 1 second.
Performance acceleration by a factor of 5x on granular data sets. Before: 308 seconds - After: 64 seconds.
Massive Data Volume Reduction
This article will give a basic idea of how to use python on top of SAP HANA.
Note: Even if you are not familiar with Python, no worries. We will cover everything in detail for you.
Python API:
There are several APIs available to connect Python to SAP HANA. In this article we will use one of the simplest ones, the dbapi module, which implements the Python DB API 2.0 specification.
1. Navigate to the path where the HANA client is installed and copy these 3 files: __init__.py, dbapi.py, resultrow.py.
2. Go to the Python folder under the hdbclient folder and paste all 3 files into the Lib folder. By default the location will be C:/Program Files/sap/hdbclient/Python/Lib.
3. Make sure you change the schema name to your own schema.
4. Open a command prompt, navigate to the Python path, and run the command:
python filename.py
What is a BLOB?
BLOB (Binary Large Object) is a field type for storing binary data.
A BLOB can store a large chunk of data, documents, and even media files like audio or video files. In HANA, a BLOB can be up to 2 GB in size.
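As a sketch, the images in this example can be stored in a table with a BLOB column (the column names are assumptions; only the table name IMAGE_STORE is referenced later):
CREATE COLUMN TABLE <SCHEMA_NAME>."IMAGE_STORE" (
"Image_Name" NVARCHAR(50),
"Image_Content" BLOB,
PRIMARY KEY ("Image_Name"));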
1. Download the JAVA project ImageUploader.zip and save it to your local computer.
2. Open JAVA Eclipse (or HANA Studio) and create a Java project called "ImageUploader".
3. Click on browse and select the "ImageUploader.zip" file you downloaded in step 1. Click on finish.
4. Run the Java program. You should see a similar message in the console.
The output shows that the image files are uploaded in BLOB format.
1. Download the zip file and extract it somewhere on your local system.
2. Open the GetImage.xsjs file and change YOUR_SCHEMA_NAME in the first line to your schema name. Make sure it is the same schema where you created the IMAGE_STORE table.
Done!! Now let us test this.
Open the ImageBrowser.html file under the ui folder. Right-click and select Run As --> HTML.
Search for an image that you uploaded; if the image is found, it will be displayed. You can search by the names of the images you uploaded.
If you have any question or doubt, please leave a comment or contact us.
SAP HANA Live (previously known as SHAF, the SAP HANA Analytics Foundation) is a solution for real-time reporting on HANA.
It is a separate package that comes with predefined SAP HANA content across the SAP Business Suite.
The content is represented as a VDM (virtual data model), which is based on the transactional and master data tables of the SAP Business Suite.
Currently, more than 2000 views are delivered in the HANA Live package.
HANA Live calculation views are designed on top of SAP Business Suite tables. These views are
optimized for best performance and analytic purposes. These views form a Virtual Data Model (VDM) that customers and partners can reuse.
Data provided by the virtual data model can be presented through multi-purpose analytical UIs,
such as SAP BusinessObjects BI Suite UIs, and domain-specific web applications.
Query Views: The views which are exposed for consumption by end user for reporting
needs.
Reusable views: Customers can build upon reusable views to create their own custom
query views.
Private Views: Views that are built on top of the tables and not intended to be changed.
SAP HANA Live for SAP Business Suite provides the following advantages compared to regular
reporting solutions:
Open
Any access to the reporting framework is based on standard mechanisms such as SQL or MDX. No
BW modeling or ABAP programming will be required.
Uniform
One approach is chosen for all SAP Business Suite applications, enabling a common reporting
across application boundaries.
Intuitive
The virtual data model hides the complexity and Customizing dependencies of our SAP Business
Suite data model to make data available without requiring a deep understanding of SAP models.
Fast
SAP HANA Live for SAP Business Suite features SAP HANA as the underlying computing engine, to
enable fast analytics on high data volumes.
Real-time
Since all reporting happens on primary data (or a real-time replication of it), there is no need to wait
for data warehousing loading jobs to finish. The cycle time from recording to reporting is
dramatically reduced.
If a customer wants to create a new custom report or modify/enhance existing reports on native ECC, it takes a lot of time to find the right ABAP resources, code in ABAP, test, promote through the various stages, and finally release it.
With SAP HANA Live, all customers have to do is edit the existing virtual models/views provided by SAP, or create new HANA models or views, to support new development in less time. Everything happens in the virtual layer, so development is more efficient and faster, and there is no need to know ABAP.
This reduces development time, and thereby the cost of development and support, and increases SAP usability through faster development. We can easily create cross-functional reporting across various SAP modules.
Side-by-side scenario
In the side-by-side scenario, the database tables that are used by the SAP HANA Live products
need to be replicated from the corresponding SAP Business Suite back-end system into the SAP
HANA database. This is done using SAP Landscape Transformation Replication Server. If you want
to execute SAP HANA Live views, the data from the corresponding tables must be available.
SAP recommends creating all required tables as specified in the SAP Notes corresponding to the SAP HANA Live products, and replicating the data only for those tables that are used in executed analytical scenarios. This ensures that no unnecessary data is replicated, no unnecessary SAP Landscape Transformation Replication Server resources are consumed, and no unnecessary database memory is consumed.
Integrated scenario
In the integrated scenario, you do not need to create and replicate the database tables, as they are
already available in the SAP HANA database. They are maintained through the data dictionary of
the corresponding ABAP Application Server. Therefore, all steps regarding table creation and data
replication are not relevant in this scenario.
Since the ABAP server creates all tables in one specific database catalog schema (typically
<SAPSID>), this needs to be mapped to the authoring schema of the imported content packages.
See Schema Mapping.
SAP HANA Live reports can be accessed through HTML5, native Excel, SAP BusinessObjects business analytics applications, or SAP Lumira.
The HANA Live back end can be accessed using the SAP HANA Studio. We can also use the HTML-based SAP HANA Live View Browser to access the structures and elements of the virtual data model.