Hana Interview Questions


HANA log file system full?

(The log volume is full and the database does not accept any new requests. I have tried to take a backup but it keeps failing.)

In live systems you need to make sure of a few things:


Make sure automatic log backup is enabled.
Install HANA with the log volume on a drive that has sufficient space, taking future database growth into account; refer to the HANA sizing guide for this.

To resolve the issue, follow the steps below:


1. Open HANA Studio.
2. Open SQL editor for your HANA system.
3. Execute:
alter system reclaim log;
4. Take the backup.
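Steps 3 and 4 can be run together from the SQL editor. A minimal sketch, assuming a file-based backup; the destination prefix below is only an example and must match your backup configuration:

```sql
-- Free log segments that are no longer needed by the database
ALTER SYSTEM RECLAIM LOG;

-- Trigger a full data backup (example destination prefix)
BACKUP DATA USING FILE ('/hana/backup/FULL_BACKUP');
```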

If the database cannot start, move some log segments to another file system and create a symbolic link in the original directory. This will allow HANA to restart.
Then take a backup from Studio.

After that, run "ALTER SYSTEM RECLAIM LOG" to free space in the log area.

If log_mode is set to legacy, the log volume keeps all log segments since the last full backup. If no backups are performed, the log volume can run out of space.

1. Stop the database:


HDB stop

2. Change into the folder mnt00001 of the logvolume (Default: /usr/sap/<SID>/global/hdb/log):


cd /usr/sap/<SID>/global/hdb/log/mnt00001

3. Temporarily move one of the log volumes to another volume where enough space is
available. Free at least 2 GB to ensure that the database has enough
space to start. To find out the space consumption of each volume, execute:
du -sh *

4. Move a volume which consumes at least 2 GB of space (e.g. hdb00003) to a volume with
enough free space, e.g. to the data volume (Default: /usr/sap/<SID>/global/hdb/data):
mv hdb00003 /usr/sap/<SID>/global/hdb/data

5. Create a symbolic link to the new folder in the old location:


ln -s /usr/sap/<SID>/global/hdb/data/hdb00003 /usr/sap/<SID>/global/hdb/log/mnt00001/hdb00003

6. Start the database (HDB start) and set the log_mode to 'overwrite' using the following SQL statement:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'overwrite' WITH RECONFIGURE;

7. Restart the database (HDB restart) and use the following SQL-Statement to clean up the
logvolume:
ALTER SYSTEM RECLAIM LOG;

8. Set log_mode back to 'legacy' (you should consider switching to log_mode = normal; see
note 1645183 for more information):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'legacy' WITH RECONFIGURE;

9. Stop the database again and remove the symbolic link:


rm -f /usr/sap/<SID>/global/hdb/log/mnt00001/hdb00003

10. Move the log volume back to its original location:


mv /usr/sap/<SID>/global/hdb/data/hdb00003 /usr/sap/<SID>/global/hdb/log/mnt00001

11. Start the database and perform a backup.
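The move-and-symlink technique from steps 4-5 can be rehearsed on any filesystem. The sketch below uses throwaway scratch directories as stand-ins for the real HANA log and data volumes, to show that a symlink at the old location keeps the original path usable:

```shell
# Rehearsal of the move-and-symlink technique from steps 4-5, using
# scratch directories as stand-ins for the real HANA log and data volumes.
work=$(mktemp -d)
mkdir -p "$work/log/mnt00001/hdb00003" "$work/data"
echo "segment" > "$work/log/mnt00001/hdb00003/logsegment_000.dat"

# Step 4: move the volume to a filesystem with enough free space
mv "$work/log/mnt00001/hdb00003" "$work/data/"

# Step 5: create a symbolic link back at the old location
ln -s "$work/data/hdb00003" "$work/log/mnt00001/hdb00003"

# The original path still resolves through the link
content=$(cat "$work/log/mnt00001/hdb00003/logsegment_000.dat")
echo "$content"
```

Because the old path still resolves, HANA can open its log segments at the configured location without any change to global.ini.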

Troubleshooting SAP HANA High CPU Utilisation

Whilst using SAP HANA (running reports, executing queries, etc.) you get an alert in HANA Studio that the system
has consumed its CPU resources and has reached full utilisation, or it hangs.

Before performing any tracing, check whether Transparent HugePages (THP) are enabled on your system. THP
should be disabled across your landscape until SAP recommends activating them again. Please see the relevant
notes on Transparent HugePages.

TRANSPARENT HUGEPAGES

SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation

SAP Note 1824819 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP2

SAP Note 2131662 - Transparent Huge Pages (THP) on SAP HANA Servers
SAP Note 1954788 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3

THP activity can also be checked in the runtime dumps by searching for AnonHugePages. While checking THP,
it is also recommended to check:

SwapTotal = ??

SwapFree = ??

This will tell you whether there is a reasonable amount of free memory in the system.
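On Linux these values come straight from /proc/meminfo, so a quick check outside the runtime dump (Linux-only) is:

```shell
# Show swap totals and THP usage as the runtime dump would report them;
# Linux-specific, values come from /proc/meminfo (in kB).
grep -E '^(SwapTotal|SwapFree|AnonHugePages)' /proc/meminfo
```

A nonzero AnonHugePages value indicates that Transparent HugePages are in use.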

Next, check the global allocation limit (GAL) (search for IPM) and ensure the limit is not lower than
what the process/thread in question is trying to allocate.

Once you have confirmed that Transparent HugePages are disabled, the next step is to identify the action that
caused the high CPU usage.

Usually the cause of the high CPU usage is evident. In many cases it is the execution of large queries or running
reports from HANA Studio on models.

In order to analyse the activities, the second step is to run a Kernel Profiler Trace along with 3-4 runtime dumps whilst the
issue is occurring.

The kernel profiler is a sampling profiler built into the SAP HANA database. It can be used to analyze performance issues
with systems on which third-party software cannot be installed, or parts of the database that are not accessible by the
performance trace. It is inactive by default.

Caution

To be able to use the kernel profiler, you must have the SAP_INTERNAL_HANA_SUPPORT role. This role is intended only
for SAP HANA development support.

The kernel profile collects, for example, information about frequent and/or expensive execution paths during query
processing.

It is recommended that you start kernel profiler tracing immediately before you execute the statements you want to analyze
and stop it immediately after they have finished. This avoids the unnecessary recording of irrelevant statements. It is also
advisable as this kind of tracing can negatively impact performance.

When you stop tracing, the results are saved to trace files that you can access on the Diagnosis Files tab of the
Administration editor.

You cannot analyze these files meaningfully in the SAP HANA studio; instead, use a tool capable of reading the
configured output format, such as KCacheGrind or a DOT viewer (DOT is the default format).

(http://www.graphviz.org/Download_windows.php)

You activate and configure the kernel profiler in the Administration editor on the Trace Configuration tab.
Please be aware that you will also need the runtime dumps: the Kernel Profiler Trace results are read in
conjunction with the runtime dumps to pick out the relevant stacks and thread numbers.

Connect to your HANA database server as user sidadm (for example via putty) and start HDBCONS by typing command
"hdbcons".
To do a Kernel Profiler Trace of your query, please follow these steps:

1. "profiler clear" - Resets all information to a clear state

2. "profiler start" - Starts collecting information.

3. Execute the affected query.

4. "profiler stop" - Stops collecting information.

5. "profiler print -o /path/on/disk/cpu.dot;/path/on/disk/wait.dot" - writes the collected information into two dot files which can
be sent to SAP.

Once you have this information you will see two dot files:

1: cpu.dot

2: wait.dot

To read these .dot files you will need to download GVEdit. You can download this at the following:

http://www.graphviz.org/Download_windows.php
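A profiler .dot file is just plain text in the Graphviz DOT format, so any Graphviz-compatible tool can render it. The snippet below writes a tiny stand-in file (real cpu.dot/wait.dot files are far larger, and the node and edge names here are invented for illustration):

```shell
# Write a minimal stand-in for a profiler output file to illustrate
# the DOT format; real cpu.dot/wait.dot files are much larger.
cat > cpu_demo.dot <<'EOF'
digraph cpu {
  "query_exec" -> "memcmp" [label="E=16.4%"];
}
EOF
# With Graphviz installed, render it to an image:
#   dot -Tpng cpu_demo.dot -o cpu_demo.png
head -1 cpu_demo.dot
```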

From there on it will depend on what the issue is that you are processing.

Normally you will be looking for the process/step with the highest value for "E=", where "E" means Exclusive. There is also "I=", where "I" means Inclusive.

The Exclusive value is of more interest because it is the value for that particular process or step alone, and so indicates whether
more memory/CPU is used in that step or not. In this example case we can see __memcmp_sse4_1: I=16.399%, E=16.399%. By tracing
the RED colouring we can see where most of the utilisation is happening and trace the activity, which leads to the stack in the
runtime dump, which in turn contains the thread number we are looking for.

By viewing cpu.dot you have now traced the RED trail to the largest exclusive consumer. Now open the
RTE (runtime dump). Working from the bottom up, you can get an idea of what the stack will look like in the RTE
(runtime dump).
Symptom

You experience high memory consumption or performance issues during query execution. You are asked to provide a (Kernel) Profiler Trace.

Other Terms

HANA, support, provide data, hdbcons, profiler, kernel profiler, trace

Reason and Prerequisites

SAP HANA Development Support requires information about a suspected memory issue.

Solution

Connect to your HANA database server as user sidadm (for example via putty) and
start hdbcons by typing command "hdbcons".
To do a Kernel Profiler Trace of your query, please follow these steps:

1. "profiler clear" - Resets all information to a clear state


2. "profiler start" - Starts collecting information.
3. Execute the affected query.
4. "profiler stop" - Stops collecting information.
5. "profiler print -o /path/on/disk/cpu.dot;/path/on/disk/wait.dot" -
writes the collected information into two dot files which can be sent to
SAP.
Attention: Specifying both a database user and an application user with SAP HANA
SPS <= 10 can result in a crash. This problem is fixed as of SAP HANA SPS 11.
Symptom

SAP Support asks you to generate runtime dump(s) during high CPU/memory consumption.

Environment

SAP HANA Database 1.0

Resolution

There are several ways of recording runtime dumps:

1) From the OS Level

1) Log into the linux HANA host presenting the issue as sidadm user;

2) Run command 'hdbcons';

3) On the hdbcons console run command below:

> runtimedump dump

This will create a runtime dump for the host you are logged into. The generated file will be
in the trace directory, with a name like 'indexserver....rtedump.trc'.

4) Attach generated trace file to the OSS Message
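The newest rtedump trace can be picked out by modification time. The real trace directory is typically something like /usr/sap/<SID>/HDB<nn>/<hostname>/trace (verify for your installation); the sketch below uses a scratch directory as a stand-in:

```shell
# Pick the newest rtedump trace file by modification time. A scratch
# directory stands in for the real HANA trace directory here, and the
# file names are invented examples of the 'indexserver...rtedump...trc' pattern.
tracedir=$(mktemp -d)
touch "$tracedir/indexserver_host.30003.000.rtedump.20240101-000000.trc"
sleep 1
touch "$tracedir/indexserver_host.30003.000.rtedump.20240101-000100.trc"
newest=$(ls -t "$tracedir"/*rtedump*.trc | head -1)
echo "$newest"
```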

2) From HANA Studio - SQL Editor

1) Log into HANA with a user with sufficient privileges to run procedure
SYS.MANAGEMENT_CONSOLE_PROC;

2) Execute procedure SYS.MANAGEMENT_CONSOLE_PROC as a prepared statement;

3) Fill in parameter 1 with 'runtimedump dump' and parameter 2 with
'<hostname>:<indexserver port>' (without quotes) and execute the statement;

4) Download the generated rtedump file and attach it to the message.
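As a sketch, the prepared statement from steps 2-3 looks like this (the hostname and indexserver port 30003 are example values; the third parameter receives the output):

```sql
CALL SYS.MANAGEMENT_CONSOLE_PROC('runtimedump dump', 'myhost:30003', ?);
```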

The screenshots below depict an example of how to record that dump in Studio:

3) From HANA Studio - Trace Collection

SAP HANA Studio SP8 adds the possibility of recording one or multiple samples of
runtime dumps at a certain time frequency.

For instance, we can record a set of 5 runtime dumps, 1 every minute, for every host
or for a particular host only. Here are the steps:

1) Go to Administration -> Diagnosis Files;

2) Select "Diagnosis Information -> Collect...";

3) Set the option 'Create and collect one or multiple sets of runtime environment
(RTE) dump file(s)';

4) Set the hosts, the number of samples (sets) and the frequency at which the runtime
dumps will be recorded;

5) Finish.

4) OS Level - Collect rtedump for all services on one node at once

In order to collect the rtedumps of all database services on a particular node, the
script command_rtedump.txt is attached to this document. It can be executed by just
copying the contained commands into the current shell that was opened as
<sid>adm.

The rtedump files are then created in the current work directory of the user
executing the command.

5) Known issues related to rtedump creation

2040144 - SAP HANA DB: Collecting Diagnosis Information from the SAP HANA
Studio can cause TLB shootdowns
Keywords

runtimedump; CPU; hdbcons; SYS.MANAGEMENT_CONSOLE_PROC; runtime


environment, RTE

HANA Security: Creating End Users with Minimal Rights

In this document, I am going to explain how we can create an End User for reporting purposes with minimal rights.
This document is based on HANA Rev 82.

Let's say we have created an Analytic View AN_TEST as shown below, and we want the end user to access this View
using reporting tools like Analysis Office for Excel/Tableau, or using the Web.

First, we want the User to access the View using Web.


Below is a screenshot of how the View is displayed in Web from a Developer's ID.
Now we will create a new user in HANA and assign him a role so that he can access the View via web.

The first thing we need to do is Create a Role for our end user. We can create Catalog Roles and Repository Roles. The
difference between the two is listed below:

Catalog Role | Repository Role

Runtime objects | Design-time objects

Not transportable and not versioned | Transportable and versioned

Created at runtime (directly in the Production system) | Created in the Development system, transported and activated in the Production system

Grantor must have the privileges included in the role | Grantor does not need the privileges included in the role

Creator can always grant/revoke the role | Any user with EXECUTE privilege on the built-in GRANT/REVOKE procedures can grant/revoke the role

Easy to create using SQL or the graphical interface | Requires creating a .hdbrole file and coding a few lines to create the role

As per the SAP HANA Security Guide, it is recommended to create roles as repository roles because
1. They can be transported between systems
2. They are not directly associated with a database user(All design-time objects are owned by user _SYS_REPO)

Let's start by creating a role for our end-user.


For this purpose, we will create a Project in HANA Development Perspective.
Now let's say we also want to apply restriction on our End User - user should be able to see data only related to Sales
Region PUNJAB.
For this purpose, we will create an Analytic Privilege. For this, R-Click on Project -> Select New -> Other -> Select
Analytic Privilege and name it AP_AN_TEST and then add restriction as shown below:

Now we have our Analytic View and Analytic Privilege ready. So now we can create a role for our end user.
To create a Role, R-Click on Project -> Select New -> Other -> Select Role and name it AN_TEST_ROLE and
provide it access to Column View of Analytic View present in _SYS_BIC schema and also add analytic Privilege that
we created in the step above.
On activating this design-time role, it becomes a runtime role that can be granted to any user.

Now our role is ready.

The second thing I need to do is Create a User. In SPS08, we can create a Normal Database User and a Restricted User.
The difference between the two is listed below:

Normal Database User | Restricted Database User

PUBLIC role is granted implicitly | PUBLIC role is not granted

Can create objects in its own schema and can connect to the HANA database using JDBC and ODBC connections or the HTTP/HTTPS interface | Cannot create objects in its own schema; can connect to the HANA database only through the HTTP/HTTPS interface and cannot view any data in the database, as PUBLIC is not granted

As we don't want our end user to create anything in the Production environment, we will create a Restricted User.
To create a restricted user, expand the Security folder -> R-Click on Users and select New User -> enter the user name
as TEST and check the Restricted User checkbox.

By default no roles are granted to the end user. To grant the AN_TEST_ROLE that we created above, we will run the
below SQL and call GRANT_ACTIVATED_ROLE procedure.
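A minimal sketch of both steps in SQL (the password is a placeholder, and Security.roles is the package path used in this example project):

```sql
-- Create the restricted end user (placeholder password)
CREATE RESTRICTED USER TEST PASSWORD "Initial1234";

-- Grant the activated repository role via _SYS_REPO's procedure
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('Security.roles::AN_TEST_ROLE', 'TEST');
```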

Now our user is created and he is able to access the View via Web as shown below

We have applied restriction that our user should be able to view only PUNJAB region data so the user is able to see
only PUNJAB sales Region data.

Now, let's say the user wants to access this View using Analysis Office for Excel. When the user tries to log into
HANA using Analysis Office for Excel,
he will get the below error:
The reason being, our user is a restricted user and is not allowed to log in using ODBC access.
To resolve this error, we need to grant RESTRICTED_USER_ODBC_ACCESS role to our User and we also need to
provide SELECT privilege on _SYS_BI schema(or we can provide SELECT access on few objects like BIMC* objects
in _SYS_BI schema) so that the user can access metadata otherwise the user will get the below error:
As RESTRICTED_USER_ODBC_ACCESS is a Catalog Role, any user which has the ROLE ADMIN privilege and
has the RESTRICTED_USER_ODBC_ACCESS role with GRANT option can grant this role to our user TEST.

But as per the document How to Define Standard Roles for SAP HANA Systems, we should not grant ROLE ADMIN
privilege to any user.

But as we know that the _SYS_REPO user has the ROLE ADMIN privilege, we can
grant the RESTRICTED_USER_ODBC_ACCESS role to the _SYS_REPO user with GRANT option.
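In SQL this grant would look like the following (for roles, WITH ADMIN OPTION is the equivalent of the GRANT option used for privileges):

```sql
GRANT RESTRICTED_USER_ODBC_ACCESS TO _SYS_REPO WITH ADMIN OPTION;
```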

After that we will create a new role where we will extend RESTRICTED_USER_ODBC_ACCESS role and
AN_TEST_ROLE role that we created earlier and will also provide SELECT access to _SYS_BI schema.

Once the above role is activated, we can see that it contains both the RESTRICTED_USER_ODBC_ACCESS role
and the Security.roles::AN_TEST_ROLE role.
Now we will revoke AN_TEST_ROLE and grant newly created AN_TEST_OD_ROLE role to the TEST user.

Now when we log into Analysis Office for Excel using user TEST, we can select the Analytic View for reporting
Then we can drag and drop fields to check the report as shown below:

Keep the following things in mind when creating an End User.

1. If you are running HANA SPS08 or higher, I would recommend creating End Users as Restricted Users if you don't want
them to create content in the Production system.
2. Create repository roles as they can be versioned and transported
3. Never Grant SELECT access on the Tables to end users
4. Never Grant SELECT access on _SYS_BIC schema to end users
5. The design-time version of a role in the repository and its activated runtime version should always contain the same privileges
6. It is not possible to drop the runtime version of a role created in the repository using the SQL statement DROP ROLE or in the
SAP HANA studio. To drop a repository role, you must delete it in the repository and activate the change. The activation process
deletes the runtime version of the role

These are some of the basic things that I have mentioned here and of course there are lot of other things related to
User Administration that need to be checked.

Please refer the below documents to know more about HANA Security:
SAP HANA Security Guide - SAP Library
How to Define Standard Roles for SAP HANA Systems
SAP HANA Administration by Richard Bremer and Lars Breddemann - SAP PRESS

Note: From HANA SPS09, we will be able to create Repository roles via GUI available through HANA Web IDE.
