How ABAP affects BW performance: Tips from Joerg Boeke & Raymond Busher (Q&A transcript)

I recently moderated a web forum with BI experts Joerg Boeke and Raymond Busher of BIAnalyst on tuning ABAP to improve BW performance. Joerg and Raymond took questions on transformations, maximizing process chain performance, speeding up tRFC processing, managing Shared Object Memory, and other topics.

For the full Q&A, you can view the questions and Joerg and Raymond’s responses in the BI/BW Forum, or read excerpts from the transcript of the Q&A below.


Bridget Kotelly, BI 2013 (moderator): Welcome to today’s BI/BW Forum!

We’re looking forward to a great discussion today on both ABAP and BW. Thanks to consultants and BI experts Joerg Boeke & Raymond Busher of BIAnalyst for joining us today.

Both Joerg and Raymond will be here for the hour, posting their answers to your questions about optimizing your ABAP code to enhance BW performance.

Joerg has been a speaker at SAPinsider's annual BI conferences with sessions on BW implementation and performance tuning. Today, his colleague at BIAnalyst, Raymond Busher, joins us as well. Welcome to you both!

Joerg and Raymond, a question I’d like to ask to start off:
What do you think is the top source — or most overlooked source — of performance degradation that can be traced back to ABAP code?

Joerg Boeke, BI Analyst: Hi Bridget,
The most common ABAP-related performance problem in data loads is found in transformations.

A lot of customers do loops and DB lookups in the individual routines on key figures and characteristics.
You should never do DB lookups in the individual routines, because they will be executed once for each record.
It is better to do that lookup once in the start routine and just use read access to global internal tables in the individual routines. That speeds things up a lot, because the DB lookup then happens once per package (the reduction, for example, at 50K records per package, is enormous).
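
To illustrate the pattern Joerg describes, here is a minimal ABAP sketch; the lookup table ZMAT_ATTR and its fields are hypothetical, while SOURCE_PACKAGE, SOURCE_FIELDS, and RESULT are the standard transformation routine parameters:

  " Global part of the transformation: one table shared by all routines
  TYPES: BEGIN OF ty_mat,
           matnr TYPE matnr,
           matkl TYPE matkl,
         END OF ty_mat.
  DATA gt_mat TYPE STANDARD TABLE OF ty_mat.

  " Start routine: a single DB access per package
  IF source_package IS NOT INITIAL.  " FOR ALL ENTRIES on an empty table reads everything
    SELECT matnr matkl
      FROM zmat_attr
      INTO TABLE gt_mat
      FOR ALL ENTRIES IN source_package
      WHERE matnr = source_package-matnr.
    SORT gt_mat BY matnr.
  ENDIF.

  " Characteristic routine: pure memory read per record, no DB access
  DATA ls_mat TYPE ty_mat.
  READ TABLE gt_mat INTO ls_mat
       WITH KEY matnr = source_fields-matnr
       BINARY SEARCH.
  IF sy-subrc = 0.
    result = ls_mat-matkl.
  ENDIF.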

Also:

  • If you have a huge number of packages, you can use Shared Object Memory (SOM) tables. We use them often in customer projects to optimize the loads.
  • Another ABAP problem you can overcome is looping into a work area: use field symbols and the ASSIGNING addition instead of LOOP ... INTO a structure. That cuts down internal processing time as well (see the sketch after this list).
  • For ABAP table accesses (lookups), make sure you have a database index supporting that access.
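
As a minimal sketch of the field-symbol point (the field AMOUNT and the calculation are hypothetical; SOURCE_PACKAGE is the standard start routine parameter):

  " Work-area loop: copies every row and needs a MODIFY to write changes back
  DATA ls_row LIKE LINE OF source_package.
  LOOP AT source_package INTO ls_row.
    ls_row-amount = ls_row-amount * 100.
    MODIFY source_package FROM ls_row.
  ENDLOOP.

  " Field-symbol loop: works directly on the table row, no copy, no MODIFY
  FIELD-SYMBOLS <row> LIKE LINE OF source_package.
  LOOP AT source_package ASSIGNING <row>.
    <row>-amount = <row>-amount * 100.
  ENDLOOP.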

 

Ole Paludan Larsen: Hi Joerg and Raymond,

Some basic questions regarding transformations:

Should we still try to avoid using formulas in 7.x? Better to use routines?

Should we also still move rule details from every single InfoObject to start/end routines?

But when using currency conversion, do we then have to stay in the rule details/routine on each InfoObject, not in start/end routines? Or how?

Raymond Busher, BI Analyst: Hi Ole,

I prefer to use ABAP routines, but that is because I am used to writing in ABAP. There is only a minimal difference, and only at generation time. Because generating the transformation creates a class library, it is not a runtime issue.

If you move all rules to end routines, I feel you lose readability. It is not usually a performance issue unless you have selects in the rules.

Currency conversion is best done at the rule level, for readability reasons only; it is not a performance issue, because at the end of the day the number of currency conversions is the same.


Laszlo Torok: Hi Joerg and Raymond,

Could you please talk about the key performance areas at:

  • Virtual key figures
  • Virtual cubes (and expert inverse routines)

Also: How can we maximize the performance of nightly process chains (to utilize the absolute maximum resources of the system), e.g., when multiple chains (each with parallel execution) are running?

[What is the optimal allocation of parallel work processes to each process type (to stay within the available number of work processes)? I hope it's not trial and error…]

Thanks and best regards,

Laszlo

Joerg Boeke: Hello Laszlo,

Quite a few questions, let me try to answer all:

– Virtual key figures and characteristics have a real impact on query execution speed. Try to avoid them whenever possible; if you can, move that type of calculation into the regular loads. Virtual key figures normally run on more detailed data, and regular BEx users will not see where the performance is lost.

– Virtual cubes, except remote cubes, are great for dynamic lookups (e.g., stock data), but they suffer on performance because network latency can impact query execution and can be neither handled nor optimized within BW.

– In terms of process chains, I would check the number of processes (dialog and batch) and the day/night settings in transaction RZ04.

If there are plenty of processes, you can use table RSBATCHPARALLEL to fine-tune your loads, activations, and the like.

Hope that answers your questions.

Laszlo Torok: Thank you,

I know RSBATCHPARALLEL. My question was whether you have any experience with the ratios assigned to different process types. Any rules of thumb, e.g., DSO activation x%, rollup y%?

Or are there any tools to monitor correctly whether at any point in time (during the night) there was a process shortage, which degrades performance?

Regards,

Laszlo

Joerg Boeke: To fine-tune this, you need to use transaction OS07.

That transaction will show you (in the detail display) the CPU usage for the current situation or aggregated over 24 hours.

Average idle time should not be less than 25% (idle means the CPU is not doing anything). If idle is below 25%, you have a hardware problem, and you may need to add better hardware or an additional application server.

If the CPU idle time is not exhausted, the next step is to check transaction SM50.

Turn on the CPU time display via the menu.

For each process type (dialog, batch, …) you should have at least one process showing 0:00 time usage. That indicates that not all processes have been used (no queuing happened).

If so, you can turn on more parallel steps in RSBATCHPARALLEL.

To find out where the problem is located you may use transaction ST13.

Use BW Tools ==> Process Chain ==> check where the most time has been consumed (load, activation, routines…).

Then tune the process: enhance the ABAP code if routines are the reason for bad performance, or, in the case of activation, use more records per package or more processes.


shahidkhan: Do you recommend any single resource with excellent documentation for tuning a BW system, one which includes real-life examples and the steps taken to resolve them? What I have seen so far is that the documentation is either mixed or not version-specific.

Raymond Busher: No, I am afraid I haven’t found a single resource that covers everything. But I think that is the nature of performance issues.

There are many components working together, from extraction to reporting, and because of these many components it is difficult to cover them all in one resource.

It is experience, but most importantly it is ongoing monitoring that will maintain performance.


SubashiniBhoopathiraj: Hi Joerg and Raymond,

Do you have any suggestion for how I can achieve this in BW?

Add a flag attribute to material master data to distinguish old vs. new material, with the condition: if (material create date – quarter's first day) > 5, then old material; if < 5, then new material.

We should keep the history for 5 years.

Can I do this by adding an attribute to master data 0MATERIAL?

Thanks,

Suba

Raymond Busher: Hello Suba,

Ad hoc, I think this looks like something for an APD process, especially as you are talking about updating attributes. I know I am being a bit vague, but I would need more details on the requirements to give a more concrete answer.

SubashiniBhoopathiraj: Hi Raymond,

The requirement is to track whether a material is old vs. new by the condition (create date – quarter's first date).

If (create date – quarter's first date) > 5, then flag the material as old; else if < 5, flag it as new.

We should keep this flag data for 5 years (20 quarters), meaning if I run the current quarter the flag may be NEW, and if I run the material for the next quarter it may be OLD. I want to see both current quarter and next quarter data.

Do you have any suggestion how I can do this in the 0MATERIAL master data InfoObject?

Thanks,

Suba

Raymond Busher: If I understand correctly, you will need two attributes:

1 – for “New in Current Quarter”

2 – for “New in Previous Quarter”

These can be filled with an APD or a transformation from 0MATERIAL to itself: move “1” to “2”, and determine “1” from the difference between the creation date and the beginning of the quarter (see the sketch below).

Run this at the beginning of each quarter when all 0MATERIALs are available.
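
To make that recipe concrete, here is a rough end-routine sketch for the 0MATERIAL self-transformation; the attribute fields FLAG_CURR and FLAG_PREV and the creation-date field CREATEDON are hypothetical, and the simple "created in the current quarter" test stands in for whatever exact day-difference rule the requirement settles on. RESULT_PACKAGE is the standard end routine parameter:

  " Determine the first day of the current quarter
  DATA: lv_q_start TYPE sydatum,
        lv_month   TYPE n LENGTH 2.
  lv_month = ( ( sy-datum+4(2) - 1 ) DIV 3 ) * 3 + 1.  " 01, 04, 07 or 10
  CONCATENATE sy-datum(4) lv_month '01' INTO lv_q_start.

  " Shift last quarter's flag and recompute the current one
  FIELD-SYMBOLS <mat> LIKE LINE OF result_package.
  LOOP AT result_package ASSIGNING <mat>.
    <mat>-flag_prev = <mat>-flag_curr.   " move attribute '1' to attribute '2'
    IF <mat>-createdon >= lv_q_start.
      <mat>-flag_curr = 'X'.             " new in current quarter
    ELSE.
      CLEAR <mat>-flag_curr.
    ENDIF.
  ENDLOOP.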

 

Bridget Kotelly: Upgrading is always a hot topic with our conference attendees. For those who have upgraded to BW 7.3, are there best practices for spotting outdated, non-performing routines?

Joerg Boeke: This is similar to Laszlo’s question.

The best approach is transaction ST13 (note that the free SAP add-on ST-A/PI needs to be installed in your system to use that transaction).

– In BW Tools, you will find the process chain (PC) analysis that you need to execute.

It gives an overview of all PCs for the selected time frame.

You can then drill down into each individual PC, see something like a BEx report of aggregated time consumption, and dig into the details.

– First spot the really bad areas. Say your PC runs 2 hours and you think that's too slow: check which part eats up most of the time.

– Then check the individual process. For example, you see that a load processes 50K records per package and takes 20 minutes. You can check in the DTP monitor, directly from that transaction, where the time is being consumed, e.g., load 1 minute, routines 5 minutes, SID generation 14 minutes.

In that case, SID generation takes all the time, and you can use buffering on SIDs and dimensions (I did a session at BI 2012 about that buffering). Transaction ST13 can help here as well.

– Use the InfoProvider analysis from BW Tools. Dimensions (mostly unbalanced ones) can be detected very easily.

 

SraonE: We are facing an issue where we notice that the tRFCs are getting delayed in the QOUT scheduler during peak times. Is there some way we can speed up the processing of the tRFCs in the QOUT scheduler?

Raymond Busher: I had a similar problem during a BW migration when transferring the data from our old BW to our new 7.x system. We had lots of queued-up tRFCs.

We wrote a small program which was effectively a front end to SM58 that kick-started the tRFC queue and triggered the LUWs. This prevented a backlog from building up in the queue, and we could process more than the parallelization parameter really allowed.
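
Raymond's program itself isn't shown, but a similar kick-start can be approximated with the standard retry report RSARFCEX, which re-executes tRFC LUWs still waiting in SM58. A minimal sketch, assuming a hypothetical variant ZTRFC_KICK that restricts the selection to the affected destination:

  " Re-trigger unprocessed tRFC LUWs a few times during the peak window
  DO 10 TIMES.
    SUBMIT rsarfcex USING SELECTION-SET 'ZTRFC_KICK' AND RETURN.
    WAIT UP TO 30 SECONDS.  " let the triggered LUWs drain before the next pass
  ENDDO.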

 

Laszlo Torok: Hi,

Sometimes we have to load data into a DSO from a DataSource that is only cube-ready. Now suppose there can be direct data deletion in the source system, without storno (reversal) records. We need the DSO to produce the delta for the cube.

For this case we developed a complex ABAP program in the start routine. I would like to ask whether you have any solution for this. Maybe I can improve our method?

Thank you

Laszlo

Raymond Busher: Laszlo,

I am not sure what you mean by “cube ready” DataSources. But I did have a discussion this week with a colleague of mine who had created a generic extractor on a table from which records could be directly deleted. This, however, did not create a deletion/storno in the DSO.

He also had the problem that the source was a relatively large table, so FULL updates were not viable.

We decided to create an ON DELETE database trigger so that when deletions occur, the key fields are moved to a temporary table (written with a timestamp), from which they can then be sent to the DSO as deletions.
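
The trigger DDL is database-specific and would usually be created by a DBA, but as a rough Oracle-style illustration of the idea (the tables ZSRC_TAB and ZDEL_KEYS and the field MATNR are hypothetical), it could be issued from ABAP via ADBC:

  DATA: lo_sql TYPE REF TO cl_sql_statement,
        lv_ddl TYPE string.

  " Log the key of every deleted row, with a timestamp, into a helper table
  CONCATENATE
    'CREATE OR REPLACE TRIGGER zsrc_del_trg'
    'AFTER DELETE ON zsrc_tab FOR EACH ROW'
    'BEGIN'
    'INSERT INTO zdel_keys (matnr, del_ts)'
    'VALUES (:OLD.matnr, SYSTIMESTAMP);'
    'END;'
    INTO lv_ddl SEPARATED BY space.

  TRY.
      CREATE OBJECT lo_sql.
      lo_sql->execute_ddl( lv_ddl ).
    CATCH cx_sql_exception.
      " handle or log the DDL error
  ENDTRY.

A periodic job can then read the helper table and post its entries to the DSO as deletion records.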

 

Dave Hannon: Joerg, Raymond:

Thanks so much for taking our questions. I’m wondering if you have any advice for dealing with heap memory leaks? Any suggestions for locating the source of those problems in the code?

Thanks,

Dave

Joerg Boeke: Hi Dave,

Heap memory is a bit tricky.

The only thing that really works well is a reboot now and then.

What also might help is to free some memory by running SAP report SAP_DROP_TMPTABLES.

Normally, queries (based on MultiProviders) will join the individual lookups in memory, as will all types of SAP internal routines that live in generated programs (GP****).

Normally such programs will release the memory again, although not every time. Running this report tidies all of it up. SAP recommends running it once a week.

MAKE SURE NOT TO RUN IT DURING LOADS!

Even active loads and their memory usage will be cleared. 🙂

Run the report and check all the flags. Depending on how often you use it, it runs from seconds to a few minutes and lists all the cleaned-up areas. Especially before upgrades, running it is a MUST.


Ken Murphy: What about situations where several master data attributes are being read in routines and slowing down performance? Any suggestions to speed up this process?

Joerg Boeke: How do you do the lookups?

Are you using the SAP attribute lookup, or did you write your own ABAP code to read the P tables (or whichever MD tables) yourself?

There are multiple ways. Since I do not know how you do the lookup, a general answer is that you can put a secondary index on the master data tables, matching the attribute(s) you are looking up.

Especially on huge MD tables, that helps a lot.

Raymond Busher: SAP has done some good work itself on standard master data lookups. The tables are buffered and the number of database accesses is reduced. In 7.3 this is standard; in 7.0 the option came as an RSADMIN switch.

I prefer to write the stuff myself: create an internal table in the start routine with a SELECT using FOR ALL ENTRIES on SOURCE_PACKAGE, sort the table, and read the records with a binary search.

Upping the number of records per package reduces the number of database reads.

If all this doesn't help, a relatively new feature is the Shared Object Memory (SOM). With it you can pump the master data into memory once only and let the transformations read from that memory.
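
Shared objects are read through an area class generated in transaction SHMA. A minimal read sketch, where the area class ZCL_MD_AREA and the GET_MATKL method on its root object are hypothetical:

  DATA lo_area TYPE REF TO zcl_md_area.

  " Attach a read lock to the current area version
  lo_area = zcl_md_area=>attach_for_read( ).

  " Look up master data that was loaded into shared memory once
  DATA lv_matkl TYPE matkl.
  lv_matkl = lo_area->root->get_matkl( source_fields-matnr ).

  " Release the read lock as soon as possible
  lo_area->detach( ).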

 

krishnaperiyala: Hello Raymond/Joerg – Regarding SOM, is there a limitation on the memory size when handling a large volume of data? And how does the locking mechanism work in SOM?

Thanks.

Raymond Busher: Memory size is of course limited by the size of your machine's memory, but there are also excellent monitoring tools, so you can keep an eye on the amount being used.

You also have options to define how long things stay in the SOM. It is good practice to free the space when you are finished with the objects and take them out.

The locking mechanism prevents a process from replacing the SOM object while someone else is reading it; with versioning, as soon as the read lock is removed, the new version becomes the current one.


NarasaM: Hi,

We get escalations from users saying “the system is slow today” or “the system is slow now.”

Please suggest: what are the quick checks whenever I receive such escalations?

Thanks.

Joerg Boeke: A quick approach is always to use transaction SM50 to see what stress is on the system and which processes are causing it.

Transaction OS07 may help as well, to see whether sufficient CPU power is currently available.

The best approach is to try to find out whether you have specific peak times of system-based or user-based stress (queries, etc.).

You can check table OSMON, which will track 30 days of system behavior.

I really recommend using the technical checks, because you can use them to find out what causes the bad behavior.

If you see sequential reads in SM50/SM51, you may want to check your indices.

Also, transactions DB13 and DB14 may help you spot whether some DB jobs have stalled (index/statistics updates) and are causing the problem.

 

Bridget Kotelly (moderator): Thanks to all who posted questions and followed the discussion!

A full summary of all the questions will be available here in Insider Learning Network’s BI/BW Group. If you have registered for this Q&A, you will also receive an email alerting you when the transcript is posted.

For more on BI, I invite you to join me at the next BI 2013 conference, coming to Las Vegas March 19-22 and then to Amsterdam June 11-13. For more information on both events, you'll find full details on the BI 2013 conference website.

And thank you again to Joerg Boeke and Raymond Busher of BIAnalyst for joining us today!

