
ABAP - Fine Tuning ABAP Developments

Fine Tuning ABAP Developments

Factors influencing Performance Tuning
§ Database Operation
§ Application Code
§ Memory Handling
§ Tools for Tuning

- SQL Trace
- Runtime Analysis

Database Operation


§ Database Access
§ Data volume and Network Load
§ Bulk retrieval or update
§ Frequency of communication between the application program and the database system (prefer array operations over single-row operations)
§ Usage of Index
§ Table buffering

Design Stage Considerations


§ Restrict data selection
§ Make more fields mandatory (see the selection-screen sketch after this list).
§ If the initial data selection itself is huge:
- warn the user about performance before proceeding further
- terminate further processing, with a suggestion to run with more restrictions
- suggest background execution
§ Designing the selection screen:
- Correctly group the fields that are used for filtering the data during selection.
- Group the fields that are used only for restricting the output.
The user then knows which fields can improve response time and which cannot.
§ If the report is used online only, a drill-down report is a better alternative.
§ Get more information only when you need it!
§ Check current data size and the expected growth rate.
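
As a rough sketch of these design-stage ideas, here is a hypothetical selection screen over BKPF (accounting document headers) with one mandatory field and a warning for unrestricted runs; the field and message choices are illustrative, not from the original text:

  REPORT z_perf_demo.

  TABLES bkpf.

  " Mandatory company code narrows the selection from the start.
  PARAMETERS p_bukrs TYPE bkpf-bukrs OBLIGATORY.

  " Posting-date range to limit the volume further.
  SELECT-OPTIONS s_budat FOR bkpf-budat.

  AT SELECTION-SCREEN.
    " Warn before an unrestricted run; type 'W' lets the user proceed.
    IF s_budat[] IS INITIAL.
      MESSAGE 'No date restriction - run may be slow; consider background mode' TYPE 'W'.
    ENDIF.
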
Effective Select Statement

§ Specify key fields in the selection
§ Avoid SELECTs within loops and SELECT ... ENDSELECT
§ Inner joins and views are better than nested or multiple SELECTs
§ Take care with FOR ALL ENTRIES IN <itab> (see the sketch after this list):
- check whether the driver table is initial
- watch out for large numbers of records
§ Keep the SELECT clause simple
§ INTO CORRESPONDING FIELDS vs INTO TABLE
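
A minimal sketch of the FOR ALL ENTRIES pitfall, using the SD tables VBAK/VBAP as an assumed example: if the driver table is empty, the WHERE clause is ignored and the whole database table is read, so guard the statement.

  DATA: lt_vbak TYPE STANDARD TABLE OF vbak,
        lt_vbap TYPE STANDARD TABLE OF vbap.

  " ... lt_vbak filled by a prior selection ...

  " Guard: with an empty FOR ALL ENTRIES table the WHERE clause is
  " dropped and the entire VBAP table would be read.
  IF lt_vbak IS NOT INITIAL.
    SELECT * FROM vbap
      INTO TABLE lt_vbap
      FOR ALL ENTRIES IN lt_vbak
      WHERE vbeln = lt_vbak-vbeln.
  ENDIF.
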

Data Selection from database

§ SELECT ... INTO TABLE provides faster access to the database than appending rows inside a SELECT ... ENDSELECT loop
§ INTO TABLE fetches the records in blocks, which reduces network traffic, whereas the APPEND approach retrieves individual records from the database
§ For data that is processed only once, a SELECT ... ENDSELECT loop can still be appropriate, since INTO TABLE collects all the data of a table in memory at once
§ Use the PACKAGE SIZE option; data is then put into the table in packets of n rows (see the sketch after this list)
§ Use a WHERE clause supported by an index rather than selecting everything and filtering afterwards
Aggregate Clauses

§ Use ORDER BY only if an index supports the sort order
§ Do not use ORDER BY if a large share of the data is selected (more than about 20% of the total)
§ Aggregate functions such as COUNT, SUM, MIN, MAX, and AVG are executed at the database level
§ Using these functions avoids transferring large amounts of data to the application just to calculate an aggregate
§ Aggregate functions also combine well with the GROUP BY clause (see the sketch after this list)
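
A small sketch combining an aggregate function with GROUP BY, again on the SFLIGHT demo table; the database returns one summarized row per carrier instead of every flight row:

  DATA: lv_carrid TYPE sflight-carrid,
        lv_occ    TYPE i.

  " The database computes one row per carrier; only the aggregates
  " travel over the network.
  SELECT carrid SUM( seatsocc )
    FROM sflight
    INTO (lv_carrid, lv_occ)
    GROUP BY carrid.
    WRITE: / lv_carrid, lv_occ.
  ENDSELECT.
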

INDEXES

§ An index can be considered a sorted copy of certain fields of a database table.
§ The sorting provides faster access to the data records of the table (binary search).
§ Indexes are of two types: primary and secondary.
§ The primary index contains the key fields of the table and a pointer to the non-key fields.
§ The primary index is created automatically when the table is created in the database.
§ All other indexes are called secondary indexes.
§ Create a secondary index when a table is frequently accessed via fields other than those of the primary index.


Programming using Indexes

§ Which data is being selected
§ The volume of data
§ The order of fields in the index (see the sketch after this list)
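
A sketch of matching the WHERE clause to index fields, assuming a hypothetical secondary index on VBAK over (KUNNR, ERDAT); the index, fields, and values are illustrative:

  DATA lt_vbak TYPE STANDARD TABLE OF vbak.

  " The WHERE conditions cover the assumed index fields from left to
  " right, so the optimizer can use the index directly.
  SELECT * FROM vbak
    INTO TABLE lt_vbak
    WHERE kunnr = '0000001000'
      AND erdat >= '20240101'.
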

What to Keep in Mind for Secondary Indexes

§ When table contents change, indexes are readjusted
§ Slows down database insertion
§ Frequently accessed tables should not have too many indexes
§ Create indexes only when that index is likely to be used frequently
§ Indexes that are created may not be used the way one intends
§ The index actually used is determined by an algorithm of the database optimizer
§ Indexes should have few fields in common; too many common fields make it harder for the optimizer to choose the right index
§ Use SQL Trace for determining indexes


Type of Index Scans


§ Unique index scan: the entry specified by the index is unique
§ Range scan: the most frequent type; a range of values is scanned
§ Full table scan: no index is used; the full table is scanned for the data


Buffering - Important Points to consider


§ The SAP database interface provides buffers on each application server that allow local storage of database tables
§ Data in buffered tables can be read on the application server instead of accessing the database directly (see the sketch after this list)
§ Table buffers enhance system performance by reducing the number of times the database is accessed over the network
§ The performance improvement due to table buffers is considerably larger on a system with several application servers than on a central system with only one application server
§ But even on a central system with one application server there is a noticeable effect on performance, because fewer processes are needed when a buffer is accessed instead of the database
§ Buffer values remain unchanged on read access. If an application changes the data, the change is made on the database first and then in the buffer of the corresponding application server
§ When data is changed on the database, the change is logged in the DDLOG table on the database
§ Buffers on the other application servers are updated at intervals of one to two minutes
§ This is done with the help of the log maintained in the DDLOG table: when the synchronization mechanism runs, the log invalidates the buffers on all other application servers
§ Those application servers then access the database directly the next time the data is needed, and update their buffers
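
A small sketch of how buffering appears in code, assuming the fully buffered countries table T005; BYPASSING BUFFER is the Open SQL addition that forces a direct database read:

  DATA ls_t005 TYPE t005.

  " Normally served from the local table buffer if T005 is buffered
  " and the buffer is valid.
  SELECT SINGLE * FROM t005 INTO ls_t005 WHERE land1 = 'DE'.

  " Skip the buffer explicitly when the latest committed database
  " state is required.
  SELECT SINGLE * FROM t005 BYPASSING BUFFER
    INTO ls_t005
    WHERE land1 = 'DE'.
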

Operations on Internal Tables

§ Copying of internal tables: copy the whole table in one statement rather than line by line
§ Do not delete records of an internal table from inside a loop over that same table
§ Delete records using a WHERE clause instead (see the sketch after this list)
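
A minimal sketch of both points, with an illustrative filter condition:

  DATA: lt_flights TYPE STANDARD TABLE OF sflight,
        lt_copy    TYPE STANDARD TABLE OF sflight.

  " Copy the whole table in one assignment instead of APPEND in a loop.
  lt_copy = lt_flights.

  " One bulk DELETE with a WHERE clause instead of deleting rows from
  " inside a LOOP AT over the same table.
  DELETE lt_copy WHERE seatsocc = 0.
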
Memory Management

§ Avoid unnecessary variables
§ Use local variables in modularization units
§ Pass only the key information needed to the calls
§ FREE internal tables that are no longer in use (see the sketch after this list)
§ Optimize the usage of bulk data, memory, and processing overhead
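
A minimal sketch of releasing an internal table's memory:

  DATA lt_big TYPE STANDARD TABLE OF sflight.

  " ... fill and process lt_big ...

  " CLEAR/REFRESH empties the table but keeps the allocated memory;
  " FREE empties it and releases the memory back to the system.
  FREE lt_big.
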


SQL Trace (ST05)

§ Overview
§ Understanding What is Measured
§ Creating an SQL Trace Data File
§ Calling an SQL Trace Data File
§ Analyzing an SQL Trace Data File


SQL Trace: Overview

§ The SQL statements that the application uses.
§ Shows what the ABAP processor is requesting from the database
§ Lists actual duration of each database request
§ What database accesses or changes occur in the update section of the application
§ How the system translates ABAP OPEN SQL commands (such as SELECT) into standard SQL commands
§ Gives index usage and available indexes
§ Presents types of table scans used on fields
§ Red flags questionable SQL statements
§ Where the application makes unnecessary database accesses or repeated accesses

From the time the trace function is turned on to the time it is turned off again, all database activity occurring either for a specific user or for an entire system is recorded. The SQL Trace tool measures the following database requests:
§ Buffering of Database Requests
- To keep the number of runtime-consuming PREPARE calls small, each of an application server's work processes holds a certain number of already translated SQL statements in a special buffer. By default, a process holds up to 250 statements.
- If the system must execute a specific Open SQL statement, it first checks whether the statement is stored in the "statement cache". If it is, the system executes it immediately using a REOPEN (SELECT) or a REEXEC (INSERT, UPDATE, DELETE).
§ Buffering of Database Requests (cont'd)


- If the statement is not buffered, a PREPARE operation prepares it for the subsequent OPEN/EXEC. The system administers the buffer according to the LRU ("least recently used") algorithm: when space is needed for new statements, the statements that are rarely used are deleted. As a result, the system usually needs to prepare frequently used statements only once.
- An application server buffers the DECLARE, PREPARE, OPEN, and EXEC requests within the cursor cache of one work process. As a result, once the system opens a cursor for a DECLARE operation, it can use this cursor over and over again within the same work process.
Understanding What is Measured
§ Logical Sequence of Database Requests
- Database requests are interconnected and always occur in the same logical sequence. The DECLARE function defines and numbers the cursor; DECLARE precedes the PREPARE function. PREPARE prepares a specific database statement, such as:
- select * from sflight where carrid eq 'LH'.
- and defines the access method before the system can transfer the request to the database. During this preparation, the system is concerned only with the structure of the SQL statement, not with the values it contains.
- The OPEN function takes the prepared SELECT statement and completes it with the concrete values. In the above example, OPEN would assign the field carrid the value 'LH'.

Creating an SQL Trace Data File

§ Go to the program to be SQL traced
§ If the program has a selection screen, execute the program and bring it to the selection screen
§ From the menu, choose System > Create Session, and type /nst05 in the transaction entry field of the new session. Select the Trace on button.
§ Go back to the program session and execute the program (F8). Once the program has finished executing, return to the SQL Trace session and select Trace off. Now select List Trace; a detailed list of all database requests appears.


Retrieving Trace Data File

1) Call up the initial screen of the SQL Trace tool.
2) Choose List trace.
The system asks you to specify a trace file; the last trace that was run is suggested as the default value.
3) Ensure that the information is correct.
If the trace was run with * (asterisk) as the user name, enter * in the "DB trace for user" field to retrieve the trace.
4) Choose OK.

General Tips

§ Visualize table growth and build suitable restrictions in at the design stage, e.g. limiting document selection by date, status, or material.
§ SELECT less, with the right WHERE clause, rather than selecting everything and then filtering.
§ Use effective table joins or the correct views (see the sketch after this list).
§ Reduce database hits by selecting all the needed information in one shot rather than through repeated accesses.
§ Using a secondary index table (e.g. a secondary sales table) may reduce runtime compared with reading the main table, even with custom indexes.
§ Check whether the right index is picked.
§ Apply all performance-related techniques when processing internal tables, especially when they are expected to be big.
§ Perform runtime analysis and an SQL trace to identify bottlenecks.
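
As a closing sketch of the join advice, assuming the standard flight-model demo tables SCARR and SPFLI; one inner join replaces a SELECT ... ENDSELECT over carriers with a nested SELECT on connections:

  TYPES: BEGIN OF ty_conn,
           carrid   TYPE scarr-carrid,
           carrname TYPE scarr-carrname,
           connid   TYPE spfli-connid,
         END OF ty_conn.

  DATA lt_conn TYPE STANDARD TABLE OF ty_conn.

  " One round trip with a join instead of a SELECT on SCARR with a
  " nested SELECT on SPFLI for every carrier row.
  SELECT s~carrid s~carrname p~connid
    FROM scarr AS s
    INNER JOIN spfli AS p ON p~carrid = s~carrid
    INTO TABLE lt_conn.
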
