
Setting BW Safety Belt for Reporting with BOBJ Clients


     When reporting on BW data with BOBJ clients, users might request overly detailed information, pushing the BW system over its limits and causing performance and system stability issues. There is a safety belt functionality which allows you to set a maximum number of cells retrieved from BW. In this blog I will explain how to set the safety belt for different BOBJ clients. If you do not have the authorization or a system to play with, you can create a trial BW / BOBJ landscape in the cloud as explained here.

 

     Setting BW Safety Belt for Analysis OLAP

     It is set in the Central Management Console by updating the properties of the Adaptive Processing Server.

   BW Safety Belt 1.jpg

Here are the settings and their default values:

Setting                                             Default Value
Maximum Client Sessions                             15
Maximum number of cells returned by a query         100,000
Maximum number of members returned when filtering   100,000


To demonstrate how the Safety Belt works, let's change Maximum number of cells returned by a query to something small, for example, 5.

 

BW Safety Belt 2.jpg

and restart the Server

BW Safety Belt 3.jpg

Now if we run Analysis for OLAP without a drill-down, no error occurs.

BW Safety Belt 4.jpg

But if we drill down by Product or Sold-to, the number of cells will exceed the limit.

BW Safety Belt 5.jpg

 

     Setting BW Safety Belt for Web Intelligence and Crystal Reports

     It is set by maintaining the BICS_DA_RESULT_SET_LIMIT_DEF and BICS_DA_RESULT_SET_LIMIT_MAX parameters in the RSADMIN table. To demonstrate how the safety belt works, let's set the limits to some small value, for example, 5, by running the SAP_RSADMIN_MAINTAIN program.

BW Safety Belt 6.jpg

BW Safety Belt 7.jpg
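Once maintained, the values can be verified directly in table RSADMIN (e.g. via SE16). A minimal ABAP check, assuming only the standard RSADMIN table with its OBJECT and VALUE fields, might look like this:

SELECT object, value
  FROM rsadmin
  INTO TABLE @DATA(lt_safety_belt)
  WHERE object IN ( 'BICS_DA_RESULT_SET_LIMIT_DEF',
                    'BICS_DA_RESULT_SET_LIMIT_MAX' ).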

Now if we run a Web Intelligence report without a drill-down, no error occurs.

BW Safety Belt 8.jpg

But if we drill down by Product or Sold-to, the number of cells will exceed the limit.

BW Safety Belt 9.jpg

The Safety Belt for Crystal Reports works the same way as for Web Intelligence.


Extraction in SAP BI


What is Data Extraction?


Data extraction in BW means extracting data from various tables in the R/3 or BW systems. There are standard delta extraction methods available for master data and transaction data, and you can also build your own with the help of transaction codes provided by SAP. The standard delta extraction for master data uses change pointer tables in R/3. For transaction data, delta extraction can use LIS structures or the LO Cockpit, etc.


Types of Extraction:


  1. Application Specific:
    • BW Content Extractors
    • Customer Generated Extractors
  2. Cross Application Extractors
    • Generic Extractors.

 

extractors.gif



BW Content Extractors


SAP provides predefined extractors such as FI, CO and LO Cockpit in the OLTP system (R/3). The only thing you have to do is install the Business Content.

 

Let's take the example of an FI extractor. Below are the steps you need to follow:

  • Go to RSA6 >> select the desired DataSource >> at the top there is an Enhance Extract Structure option >> click on it


Untitled.jpg

  • It will take you to DataSource: Customer Version Display. Double click on the ExtractStruct.

Untitled.png

 

  • Click on Append Structure button as shown:

Untitled.png

  • Add the field Document Header Text (eg: ZZBKTXT) in the Append Structure with ComponentType: BKTXT. Before you exit, make sure that you activate the structure by clicking on the activate button.

Untitled.png

  • Required field has been successfully added in the structure of the data source.

Untitled.png

Populate the Extract Structure with Data

       SAP provides the enhancement RSAP0001 that you use to populate the extract structure. This enhancement has four components, one for each of the four types of R/3 DataSources:


  • Transaction data EXIT_SAPLRSAP_001
  • Master data attributes EXIT_SAPLRSAP_002
  • Master data texts EXIT_SAPLRSAP_003
  • Master data hierarchies EXIT_SAPLRSAP_004

 

With these four components (they're actually four different function modules), any R/3 DataSource can be enhanced. In this case, you are enhancing a transaction data DataSource, so you only need one of the four function modules. Since this step requires ABAP development, it is best handled by someone on your technical team. You might need to provide your ABAP colleague with this information:

  • The name of the DataSource (0FI_GL_4)
  • The name of the extract structure (DTFIGL_4)
  • The name of the field that was added to the structure (ZZBKTXT)
  • The name of the BW InfoSource (0FI_GL_4)
  • The name of the R/3 table and field that contains the data you need (BKPF-BKTXT)

With this information, an experienced ABAP developer should be able to properly code the enhancement so that the extract structure is populated correctly. The ABAP code itself would look similar to the one shown below:

 

Untitled.png
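The screenshot above is from the original post. As a rough sketch only (not the author's exact code), the logic in include ZXRSAU01 of enhancement RSAP0001 (function module EXIT_SAPLRSAP_001) that fills ZZBKTXT from BKPF-BKTXT could look like the following; the key fields BUKRS, BELNR and GJAHR of extract structure DTFIGL_4 are assumed here:

CASE i_datasource.
  WHEN '0FI_GL_4'.
    DATA: l_s_dtfigl_4 TYPE dtfigl_4,
          l_tabix      TYPE sy-tabix.
    LOOP AT c_t_data INTO l_s_dtfigl_4.
      l_tabix = sy-tabix.
*     Read the document header text for the FI document of this record
      SELECT SINGLE bktxt FROM bkpf
        INTO l_s_dtfigl_4-zzbktxt
        WHERE bukrs = l_s_dtfigl_4-bukrs
          AND belnr = l_s_dtfigl_4-belnr
          AND gjahr = l_s_dtfigl_4-gjahr.
      IF sy-subrc = 0.
        MODIFY c_t_data FROM l_s_dtfigl_4 INDEX l_tabix.
      ENDIF.
    ENDLOOP.
ENDCASE.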


  • Now check the data via tcode RSA3.

 

(You can open the four function modules listed above in transaction SE37; each contains an INCLUDE statement. Double-click on the include program and you will find the ABAP code, as above, for all standard DataSources, which can be modified.)

 

 

Note: Similarly you can enhance all other SAP delivered extractors. ( For LO Cockpit use tcode LBWE)

 

 

Customer Generated Extractors

 

For some applications that vary from company to company, like LIS, CO-PA and FI-SL, SAP was not able to provide a standard DataSource because of their dependency on the organization structure. Customers therefore have to generate their own DataSources; these are called customer-generated extractors.

 

Lets take an example of CO-PA extraction

  • Go to Tcode KEB0 which you find in the SAP BW Customizing for CO-PA in the OLTP system.

Untitled.jpg

 

 

  • Define the DataSource for the current client of your SAP R/3 System on the basis of one of the operating concerns available there.
  • In the case of costing-based profitability analysis, you can include the following in the DataSource: Characteristics from the segment level, characteristics from the segment table, fields for units of measure, characteristics from the line item, value fields, and calculated key figures from the key figure scheme.
  • In the case of account-based profitability analysis, on the other hand, you can only include the following in the DataSource: Characteristics from the segment level, characteristics from the segment table, one unit of measure, the record currency from the line item, and the key figures.
  • You can then specify which fields are to be applied as the selection for the CO-PA extraction.

Untitled.jpg

 

 

Generic Extractors


When your company's requirement cannot be met by an SAP-delivered Business Content DataSource, you have to create your own DataSource based purely on your company's requirement. This is called a generic extractor.

 

Based on the complexity, you can create the DataSource in three ways:

 

1. Based on Tables/Views ( Simple Applications )

2. Based on Infoset

3. Based on Function Module ( Used in complex extraction)


Steps to create generic extractor:


1. Based on Tables/Views ( Simple Applications )


  • Go to Tcode RSO2 and choose the type of data you want to extract (transaction, Masterdata Attribute or Masterdata Text)

Untitled.png

  • Give the name to the data source to be created and click on create.

Untitled.png














  • On the Create data source screen, enter the parameters as required:

Untitled.jpg

Application Component: Component name where you wish to place the data source in the App. Component hierarchy.

Text: Descriptions (Short, Medium and Long) for the data source.

View/Table: Name of the Table/View on which you wish to create the Generic data source. In our case it is ZMMPUR_INFOREC.

 

  • The generic DataSource is now displayed, allowing you to mark fields for Selection as well as Hide fields. The fields marked as ‘hidden’ will not be available for extraction. Fields marked for ‘Selection’ will be available as selections in the InfoPackage during data extraction from the source system to the PSA.

Untitled.jpg


  • Select the relevant fields and Save the data source.

Untitled.png

  • Now save the DataSource.

Untitled.jpg

 

 

2. Based on Infoset


https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiCruWWusPLAhWBPZoKHa7KArgQFg…


3. Based on Function Module


https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwjOk46_usPLAhVqIJoKHej3A8wQFg…


 

Note: Data for all types of extractors can be viewed via Tcode RSA3, where you have to give the DataSource name, Data Records/call, No. of Extr calls and the selections:

 

Untitled.png

 


Detailed information on the LO Cockpit, update modules, generic extractors using FM and InfoSets, delta pointers and safety intervals will be shared in upcoming blogs.

 

Thanks,

Suraj Yadav

How to Change hierarchy node name using ABAP Program.



Hi All,

 

 

 

Requirement – There are a lot of BEx queries that use a hardcoded hierarchy node. Currently the hierarchy is maintained in BI, and in the future you want to automate it by maintaining a set hierarchy on the ECC side. Your query is filtered on a node, for example “REG IND”, but the issue is that on the ECC side you cannot name a node with a space, so if you maintain it as “REG_IND” the query will not show the proper result.

 

If you do not want to change the BEx queries because modifying them would be a huge effort, you can go for the following workaround.

 

Note – It is always better to make the correct changes on the ECC side, or to modify the BEx queries with the correct node name coming from ECC.

 

This blog gives you an idea of how to change the node name using an ABAP program.

 

If you search for “how to change hierarchy node”, you will find a lot of threads saying that you cannot change the node name in BI, only its description.

 

h1.PNG

 

 

That is correct; we cannot change it manually, but using an ABAP program we can.

 

Step 1 – Go to transaction SE38.

Provide a program name and copy the following code.

Here our hierarchy InfoObject is ZGKB.

 

 

REPORT zhirachy_nodechange.

DATA: wa_node_change TYPE /bic/hzgkb.

UPDATE /bic/hzgkb SET nodename = 'REG IND'
  WHERE iobjnm   = '0HIER_NODE'
    AND objvers  = 'A'
    AND nodeid   = 1
    AND nodename = 'REG_IND'.

 

  

 

Execute this program as a step in the process chain after the ECC hierarchy load completes. In this way you can change a hierarchy node using an ABAP program.

 

Thanks for reading. Hope it is useful information..

   

 

Regards,

Ganesh Bothe

 

 

 

 

Column headers in Bex for Key + text


Hi,

 

anyone who has ever tried to create a pivot table on top of the Bex Analyzer output will have experienced this issue.

When displaying key and text for an info object, the column header for the key is filled, but the text column remains empty without a header.

This makes it impossible to create a pivot table on top of it.

 

Using the Callback macro in Bex 7.x it is possible to scan the column headers in the result area and put in a custom text.

In this blog I describe how to do this.

 

First of all, run the query in Bex analyzer.

 

After running the query, go to View --> Macros --> View Macros,
select CallBack and press Edit.
select CallBack and press edit.

macro screen.jpg

 

Scroll below to the following piece of code

callback macro before.JPG

After the End With and before End If, insert the following lines:

 

    'set column headers for key + text
    Dim nrCol As Long
    Dim i As Long
    Dim resultArea As Range

    Set resultArea = varname(1)
    nrCol = resultArea.Columns.Count

    For i = 1 To nrCol - 1
        'if a column header is empty, derive it from the preceding header
        If resultArea.Cells(1, i + 1) = "" Then
            resultArea.Cells(1, i + 1) = resultArea.Cells(1, i) & "_text"
        End If
    Next i

 

This code will put suffix _text in the column header, based on the preceding column header.

 

The end result in the macro then looks like this:

callback macro after.JPG

After refreshing the query, you will now see the column headers being added based on the previous column header, with _text behind it.

 

Hope this will help a lot of people.

 

Best regards,

Arno

How to do Remodeling on DSO


Overview of Remodeling

 

If we want to modify a DSO into which data has already been loaded, we can use remodeling to change the structure of the object without losing data.

If we want to change a DSO that no data has been loaded into yet, we can simply change it in DSO maintenance.

 

We may want to change an InfoProvider that has already been filled with data for the following reasons:

 

We want to replace an InfoObject in an InfoProvider with another, similar InfoObject, for example when we have created an InfoObject ourselves but want to replace it with a BI Content InfoObject.

 

Prerequisites

 

As a precaution, make a backup of your data before you start remodeling. In addition, ensure that:

we have stopped any process chains that run periodically and affect the corresponding InfoProvider. Do not restart these process chains until remodeling is finished.

There is enough tablespace available in the database.

After remodeling, we have to check which BI objects that are connected to the InfoProvider (for example, transformation rules, MultiProviders) have been deactivated. we have to reactivate these objects manually. The remodeling makes existing queries that are based on the InfoProvider invalid. we have to manually adjust these queries according to the remodeled InfoProvider. If, for example, we have deleted an InfoObject, we also have to delete it from the query.

 

Features

A remodeling rule is a collection of changes to your DSO that are executed simultaneously.

For a DSO, you have the following remodeling options:

For characteristics:
  • Insert or replace characteristics with:
    • Constants
    • An attribute of an InfoObject within the same dimension
    • A value of another InfoObject within the same dimension
    • A customer exit (for user-specific code)
  • Delete

For key figures:
  • Insert:
    • A constant
    • A customer exit (for user-specific code)
  • Replace with:
    • A customer exit (for user-specific code)
  • Delete

You cannot replace or delete units. This avoids having key figures in the DSO without the corresponding unit.


Implementation of Remodeling Procedure

To carry out the remodeling procedure, right-click on your DSO and, in the context menu, navigate to Additional Functions -----> Remodeling.


Capture.PNG

We will get the following window after clicking on Remodeling. Enter a remodeling rule name and press Create to create a new rule.

Capture1.PNG

After clicking on Create, we will get the following pop-up window where we have to enter a description for the rule we wish to create (as shown below).

Capture2.PNG

After entering the description, press the Create button. We will see the following screen.

Capture4.PNG

As we can see, the left pane shows the structure of the DSO in consideration.

To add a new remodeling rule, click on the green plus sign in the top-left corner of your screen (also circled in red below). It is called the Add Operation to List button.

 

Capture4.PNG

You will get the following pop-up where you can add the remodeling rules.

Capture5.PNG

Business Requirement

The requirement is as follows:

To delete the Time Characteristic 0CALDAY from the data fields.

To add 0COMP_CODE to the key fields with constant value 1100.

To delete the key figure Revenue(ZREVU8) as it is no longer relevant for reporting in this DSO.

We will implement these requirements one by one.

 

In the pop-up that opened in the last step, select the Delete Characteristic radio button and enter the technical name of the characteristic you wish to delete (0CALDAY in this case).

Capture 6.PNG

Confirm by pressing the Create button.

capture 9.PNG


Adding characteristic 0COMP_CODE with constant value 1100 to the key fields of the DSO:

Capture 7.PNG           

 

 

We need to check the As Key Field checkbox, and we can specify the position if we want the field at a particular position.

Then click on the Create button.


capture 10.PNG

To delete the key figure, we need to follow these steps:

Capture 8.PNG

Then click on the Create button.

                                                                                                 

capture 11.PNG



After that, click on Activate and Simulate, then go for the Schedule option.

Capture12.PNG

Once the simulation is done, click on Continue; the schedule screen will then appear.

Capture 13.PNG

Select the Immediate option; the screen below will appear, where we need to select the Save option.

capture 14.PNG

Now we will get a message like this.

Capture 15.PNG

If we want to see the job, click on Jobs and check it. After that the DSO will be inactive and we need to activate it.

Capture 16.PNG

The remodeling has now been successfully done on the DSO.

 




Roles And Authorization on Hierarchy in SAP BW 7.4


Introduction to Roles and Authorizations in BW 7.4

Roles and authorizations maintained in BW 7.4 restrict access to reports at the InfoCube level, characteristic level, characteristic value level, key figure level and hierarchy node level. These restrictions are maintained using the approach below:

 

Authorizations are maintained in authorization objects.

Roles contain the Authorizations.

Users are assigned to roles

 

Capture 21.PNG

 

Transactions Used

Infoobject Maintenance - RSD1.

Role Maintenance - PFCG

Roles and Authorization maintenance - RSECADMIN.

User creation SU01.

 

Note: A characteristic InfoObject must be marked as Authorization Relevant to make it available for restrictions. To make a characteristic Authorization Relevant, go to the “Business Explorer” tab in the InfoObject details. Without checking Authorization Relevant, we cannot use the object or include it in an authorization object.

 

Enter T code RSD1

Capture.PNG

enter the info object and click on Maintain.

Capture 1.PNG

Click on the Business Explorer tab, then select the Authorization Relevant checkbox. Now we can use this object in roles and authorizations.

 

SCENARIO:

In my scenario we want to create an authorization on the InfoObject 0FUNCT_LOC with a hierarchy. Suppose the hierarchy has three levels and I have three users: User1, User2 and User3. User1 needs to access hierarchy level 1 data, User2 needs to access hierarchy level 2, and User3 needs to access hierarchy level 3. To achieve this, we need to follow the steps below.

 

Creating Roles and Authorization objects

Creating Authorization objects

Enter T code RSECADMIN

Capture 2.PNG

then click on Ind.Maint.

 

 

cap2.png

Enter the Authorization name and click on create.

cap1.png

 

 

 

Maintain the short, medium and long descriptions, click on Insert Row and enter the objects.

0TCAACTVT (Activity in Analysis Authorizations): grants authorization for different activities such as Change and Display; the default value is 03 (Display).

0TCAIPROV (Authorizations for InfoProvider): grants authorization for particular InfoProviders; the default value is *.

0TCAVALID (Validity of an Authorization): defines when authorizations are valid or not valid; the default value is *.

Then click on Insert Special Characteristics.

 

cap3.png

cap4.png

 

 

cap5.png

 

 

 

Now enter the InfoObject 0FUNCT_LOC, double-click on it, then go to the Hierarchy Authorizations tab and click on the Create option.

cap6.pngcap7.png

 

  

 

Select the hierarchy by clicking on Browse.

cap8.png

Select the node details and click on Browse.

 

 

Select the particular node on the left side and move whatever is required for the particular user to the right side.

Then select the required Type of Authorization:

Capture 12.PNG

then click on continue.

Now click on User Tab.

Capture 13.PNG

 

 

 

Click on Indvl Assignment and the screen below will appear.

cap10.png

 

Enter the User and click on Role Maintenance.

cap11.png

 

click on create single role.

cap12.png

 

Enter the description and click on the Change Authorization Data icon.

 

 

cap13.png

 

Add the objects marked above and click on the Generate icon.

Now go to the User tab and enter the required users.

 

cap14.png

 

Click on User Comparison and we get the screen below.

cap15.png

If we want to give access to a particular transaction code, go to the Menu tab, add that transaction code, and the screen will appear like this.

 

Capture 20.PNG

Enter the transaction code, click on Assign Transactions and save it.

Now log in to the Analyzer or SAP BW with User1 and verify the restricted access.

For User2 and User3 we also need to follow the same steps.

Enhance Service Class to Handle Complex Virtual Cube Selections

    A Virtual Cube function module can be implemented very easily using the CL_RSDRV_REMOTE_IPROV_SRV service class (there is an example in the class documentation). I like its simplicity, but unfortunately it cannot handle complex selections. In this blog, I will explain how to keep the Virtual Cube function module implementation simple and at the same time handle complex selections by enhancing the service class.
      Below is the function module that implements a Virtual Cube reading from the SFLIGHT table.
*---------------------------------------------------------------------*
*       CLASS lcl_application DEFINITION
*---------------------------------------------------------------------*
CLASS lcl_application DEFINITION.

  PUBLIC SECTION.

    CLASS-METHODS:
      get_t_iobj_2_fld
        RETURNING VALUE(rt_iobj_2_fld)
                    TYPE cl_rsdrv_remote_iprov_srv=>tn_th_iobj_fld_mapping.

ENDCLASS.

*---------------------------------------------------------------------*
*       CLASS lcl_application IMPLEMENTATION
*---------------------------------------------------------------------*
CLASS lcl_application IMPLEMENTATION.
*---------------------------------------------------------------------*
* get_t_iobj_2_fld
*---------------------------------------------------------------------*
  METHOD get_t_iobj_2_fld.

    rt_iobj_2_fld = VALUE #( ( iobjnm = 'CARRID'    fldnm = 'CARRID' )
                             ( iobjnm = 'CONNID'    fldnm = 'CONNID' )
                             ( iobjnm = 'FLDATE'    fldnm = 'FLDATE' )
                             ( iobjnm = 'PLANETYPE' fldnm = 'PLANETYPE' )
                             ( iobjnm = 'SEATSOCC'  fldnm = 'SEATSOCC' )
                             ( iobjnm = 'SEATSOCCB' fldnm = 'SEATSOCC_B' )
                             ( iobjnm = 'SEATSOCCF' fldnm = 'SEATSOCC_F' ) ).

  ENDMETHOD.
ENDCLASS.

FUNCTION z_sflight_read_remote_data.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(INFOCUBE) LIKE  BAPI6200-INFOCUBE
*"     VALUE(KEYDATE) LIKE  BAPI6200-KEYDATE OPTIONAL
*"  EXPORTING
*"     VALUE(RETURN) LIKE  BAPIRET2 STRUCTURE  BAPIRET2
*"  TABLES
*"      SELECTION STRUCTURE  BAPI6200SL
*"      CHARACTERISTICS STRUCTURE  BAPI6200FD
*"      KEYFIGURES STRUCTURE  BAPI6200FD
*"      DATA STRUCTURE  BAPI6100DA
*"----------------------------------------------------------------------

  zcl_aab=>break_point( 'Z_SFLIGHT_READ_REMOTE_DATA' ).

  DATA(iprov_srv) = NEW cl_rsdrv_remote_iprov_srv(
    i_th_iobj_fld_mapping = lcl_application=>get_t_iobj_2_fld( )
    i_tablnm              = 'SFLIGHT' ).

  iprov_srv->open_cursor(
    i_t_characteristics = characteristics[]
    i_t_keyfigures      = keyfigures[]
    i_t_selection       = selection[] ).

  iprov_srv->fetch_pack_data( IMPORTING e_t_data = data[] ).

  return-type = 'S'.

ENDFUNCTION.
This is how the BW Query is defined which sends a complex selection to the Virtual Cube function module.
Service Class 2.jpg
Service Class 3.jpg
As you can see, the query reads the number of seats occupied on Airbus airplane types (global restriction) for All Carriers, Lufthansa and American Airlines, in each of the years 2015 and 2016. The following selection is sent to the Virtual Cube function module:
Service Class 4.jpg
Expression 0 corresponds to the global restriction and expressions 1 through 6 correspond to the restricted key figures (All Carriers 2015, All Carriers 2016, Lufthansa 2015, Lufthansa 2016, American Airlines 2015 and American Airlines 2016).
The service class in our Virtual Cube function module is used in such a way that it generates a wrong SQL WHERE clause expression. It is not a problem with the service class as such, but with the way it is used.
Service Class 6.jpg
The BW Query results are wrong (the All Carriers data is the sum of Lufthansa and American Airlines only, i.e. the other carriers' data is missing).
Service Class 7.jpg
The problem is that the generated SQL WHERE clause expression does not follow the rule below:
E0  AND (  E1 OR E2 OR E3 ... OR EN ),
where E0 corresponds to the global restrictions and E1, E2, E3 ... EN to other restrictions.
The problem can easily be fixed by enhancing the CL_RSDRV_REMOTE_IPROV_SRV service class. What it takes is:

 

Service Class 8.jpg

Creation of BUILD_WHERE_CONDITIONS_COMPLEX method
Service Class 9.jpg
METHOD build_where_conditions_complex.
  DATA: wt_bw_selection TYPE tn_t_selection.
  DATA: wt_where        TYPE rsdr0_t_abapsource.

* E0 AND ( E1 OR E2 OR E3 ... OR EN )
  LOOP AT i_t_selection INTO DATA(wa_bw_selection)
       GROUP BY ( expression = wa_bw_selection-expression )
       ASCENDING ASSIGNING FIELD-SYMBOL(<bw_selection>).

    CLEAR: wt_bw_selection,
           wt_where.

    LOOP AT GROUP <bw_selection> ASSIGNING FIELD-SYMBOL(<selection>).
      wt_bw_selection = VALUE #( BASE wt_bw_selection ( <selection> ) ).
    ENDLOOP.

    build_where_conditions( EXPORTING i_t_selection = wt_bw_selection
                            IMPORTING e_t_where     = wt_where ).

    CASE <bw_selection>-expression.

      WHEN '0000'.
        IF line_exists( i_t_selection[ expression = '0001' ] ).
          APPEND VALUE #( line = ' ( ' ) TO e_t_where.
        ENDIF.
        APPEND LINES OF wt_where TO e_t_where.
        IF line_exists( i_t_selection[ expression = '0001' ] ).
          APPEND VALUE #( line = ' ) AND ( ' ) TO e_t_where.
        ENDIF.

      WHEN OTHERS.
        IF <bw_selection>-expression > '0001'.
          APPEND VALUE #( line = ' OR ' ) TO e_t_where.
        ENDIF.
        APPEND VALUE #( line = ' ( ' ) TO e_t_where.
        APPEND LINES OF wt_where TO e_t_where.
        APPEND VALUE #( line = ' ) ' ) TO e_t_where.
        IF ( line_exists( i_t_selection[ expression = '0000' ] ) ) AND
           ( NOT line_exists( i_t_selection[ expression = <bw_selection>-expression + 1 ] ) ).
          APPEND VALUE #( line = ' ) ' ) TO e_t_where.
        ENDIF.

    ENDCASE.
  ENDLOOP.

ENDMETHOD.
The BUILD_WHERE_CONDITIONS_COMPLEX method contains the logic to build the selection according to the rule. It calls the original BUILD_WHERE_CONDITIONS method, using it as a building block. The new LOOP AT ... GROUP BY ABAP syntax is used to split the selection table into individual selections, convert them into SQL WHERE clause expressions and combine them into the final expression as per the rule.

 

 

 

Implementation of the Overwrite-exit for the OPEN_CURSOR method
CLASS lcl_z_iprov_srv DEFINITION DEFERRED.
CLASS cl_rsdrv_remote_iprov_srv DEFINITION LOCAL FRIENDS lcl_z_iprov_srv.

CLASS lcl_z_iprov_srv DEFINITION.
  PUBLIC SECTION.
    CLASS-DATA obj         TYPE REF TO lcl_z_iprov_srv.            "#EC NEEDED
    DATA       core_object TYPE REF TO cl_rsdrv_remote_iprov_srv.  "#EC NEEDED
    INTERFACES iow_z_iprov_srv.

    METHODS:
      constructor
        IMPORTING core_object TYPE REF TO cl_rsdrv_remote_iprov_srv OPTIONAL.
ENDCLASS.

CLASS lcl_z_iprov_srv IMPLEMENTATION.

  METHOD constructor.
    me->core_object = core_object.
  ENDMETHOD.

  METHOD iow_z_iprov_srv~open_cursor.
*"------------------------------------------------------------------------*
*" Declaration of Overwrite-method, do not insert any comments here please!
*"
*"methods OPEN_CURSOR
*"  importing
*"    !I_T_CHARACTERISTICS type CL_RSDRV_REMOTE_IPROV_SRV=>TN_T_IOBJ
*"    !I_T_KEYFIGURES type CL_RSDRV_REMOTE_IPROV_SRV=>TN_T_IOBJ
*"    !I_T_SELECTION type CL_RSDRV_REMOTE_IPROV_SRV=>TN_T_SELECTION .
*"------------------------------------------------------------------------*

    DATA:
      l_t_groupby  TYPE rsdr0_t_abapsource,
      l_t_sel_list TYPE rsdr0_t_abapsource,
      l_t_where    TYPE rsdr0_t_abapsource.

    core_object->build_select_list(
      EXPORTING
        i_t_characteristics = i_t_characteristics
        i_t_keyfigures      = i_t_keyfigures
      IMPORTING
        e_t_sel_list        = l_t_sel_list
        e_t_groupby         = l_t_groupby ).

    core_object->build_where_conditions_complex(
      EXPORTING
        i_t_selection = i_t_selection
      IMPORTING
        e_t_where     = l_t_where ).

* #CP-SUPPRESS: FP secure statement, no user input possible
    OPEN CURSOR WITH HOLD core_object->p_cursor FOR
      SELECT (l_t_sel_list) FROM (core_object->p_tablnm)
        WHERE (l_t_where)
        GROUP BY (l_t_groupby).

  ENDMETHOD.
ENDCLASS.
The OPEN_CURSOR Overwrite-exit method has the same logic as the original method, except that the BUILD_WHERE_CONDITIONS_COMPLEX method is called instead of BUILD_WHERE_CONDITIONS.
Now that the changes are in place, let's run the report again and see what SQL WHERE clause expression is generated.
Service Class 10.jpg
Finally, let's run the report again and see if it shows correct data.
Service Class 11.jpg
Now the data is correct: All Carriers includes all data, not only Lufthansa and American Airlines.


Flat file formatting related issues in APD


List of issues faced during the flat file generation from APD.

 

1> Header names for key figures are displayed with their technical names in the flat file.

2> The negative sign of key figures like amounts and quantities is displayed after the value in the flat file,
      which results in a wrong total amount in the flat file.
      i.e.     Amount
                 $1000 –

3> Leading zeros are added to the key figures in the APD output.

4> Values are getting rounded off; no decimal places are displayed in the flat file.

 

Solution:

 

First create Z InfoObjects matching the length of your header field names, e.g. ZCHAR20 for a field name of length 20.

 

Assign these Z InfoObjects to the target fields of your APD routine as below:

              

Capture.PNG

 

Write the following logic in the routine tab:

 

DATA: ls_source TYPE y_source_fields,
      ls_target TYPE y_target_fields.

DATA: lv_value TYPE p LENGTH 16 DECIMALS 2.  " add decimal places as per your need

* Write a header line first with the field names to be shown in the flat file
ls_target-Char1 = 'ABC'.
ls_target-Char2 = 'XYZ'.
APPEND ls_target TO et_target.

LOOP AT it_source INTO ls_source.

*  MOVE-CORRESPONDING ls_source TO ls_target.
   ls_target-Char1 = ls_source-Char1.
   ls_target-Char2 = ls_source-Char2.

*  ls_target-KYF_0001 = ls_source-KYF_0001.
   CLEAR lv_value.
   IF ls_source-KYF_0001 IS NOT INITIAL.
     lv_value = ls_source-KYF_0001.
     IF lv_value IS NOT INITIAL.
       ls_target-KYF_0001 = lv_value.
       IF lv_value LT 0.
*        Move the trailing '-' sign in front of the value
         SHIFT ls_target-KYF_0001 RIGHT DELETING TRAILING '-'.
         SHIFT ls_target-KYF_0001 LEFT DELETING LEADING ' '.
         CONCATENATE '-' ls_target-KYF_0001 INTO ls_target-KYF_0001.
       ENDIF.
     ENDIF.
   ELSE.
     ls_target-KYF_0001 = '0.00'.
   ENDIF.

   APPEND ls_target TO et_target.

ENDLOOP.

 

 

Note: Here Char1 and Char2 are your InfoObject technical names; ABC and XYZ are the field names you want to display in the header fields of the flat file.

Cleansing BW Data Using Regular Expressions


     Sometimes data in the source system is not checked for quality. For example, input data is not checked for non-printable characters, e.g. tabulation, carriage return, line feed, etc. If users copy and paste data into input fields from an email or a web page, non-printable characters can be entered into the system, causing BW data loading issues (not permitted characters). In the case of master data, the quality issue must be fixed immediately, otherwise the problem will become worse with every transaction where the incorrect master data is used. If only information fields stored in a DSO at document level are affected, the data can be fixed in the transfer rules.

     What it takes is to correct the data in the transfer rule start routine using a regular expression.

REGEX1.jpg
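The screenshot shows the start routine from the original post. A minimal sketch of the core REPLACE call could look like this; the field name HEWORD is taken from this example and will differ in your own structure:

FIELD-SYMBOLS: <ls_source> LIKE LINE OF SOURCE_PACKAGE.

LOOP AT SOURCE_PACKAGE ASSIGNING <ls_source>.
* Remove all non-printable characters (tabulation, carriage return, line feed, ...)
  REPLACE ALL OCCURRENCES OF REGEX '[^[:print:]]'
          IN <ls_source>-heword WITH ''.
ENDLOOP.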

Prior to executing the REPLACE statement, the HEWORD field of SOURCE_PACKAGE contains a hex 09 (tabulation) character.

REGEX2.jpg

Once the REPLACE statement is executed, the non-printable character is gone.

REGEX3.jpg

REGEX4.jpg

How to write code in DTP to select Previous Month Data


Scenario: If I execute the DTP in the current month, it should always pick only "current month – 1" data.

                E.g.: If the current month is Jan-2016, the DTP should fetch Dec-2015 data from the source and update it into the target object.

 

 

Occasionally we need to filter a date characteristic InfoObject to extract only “previous month” data. Here the filter selection is not on an SAP Content InfoObject; the filter selection is on a custom InfoObject.

If it were an SAP Content InfoObject, we might have SAP customer exit variables to use directly in the DTP, but in this example I’m using a custom InfoObject created with data type DATS.

In the DTP, select the InfoObject, choose Create Routine and add the code below to the DTP routine.

 

* Global code used by conversion rules
*$*$ begin of global - insert your declaration only below this line  *-*
* TABLES: ...

DATA: dt_range  TYPE STANDARD TABLE OF rsdatrange,
      btw       LIKE STANDARD TABLE OF rsintrange,
      wdt_range TYPE rsdatrange.

*$*$ end of global - insert your declaration only before this line   *-*

*$*$ begin of routine - insert your code only below this line        *-*
  DATA: l_idx LIKE sy-tabix.
  READ TABLE l_t_range WITH KEY
       fieldname = ' '.
  l_idx = sy-tabix.
*....

* Determine the first and last day of the previous month
  CALL FUNCTION 'RS_VARI_V_LAST_MONTH'
*   EXPORTING
*     SYSTIME    = ' '
    TABLES
      p_datetab  = dt_range
      p_intrange = btw.

  READ TABLE dt_range INTO wdt_range INDEX 1.

  l_t_range-fieldname = '/BIC/<Your_InfoObject_Name>'.
  l_t_range-option    = 'BT'.
  l_t_range-sign      = 'I'.
  l_t_range-low       = wdt_range-low.
  l_t_range-high      = wdt_range-high.

  APPEND l_t_range.

*  IF l_idx <> 0.
*    MODIFY l_t_range INDEX l_idx.
*  ELSE.
*    APPEND l_t_range.
*  ENDIF.

*$*$ end of routine - insert your code only before this line         *-*

Simplify Transformations with End Routine


There are scenarios where a transformation end routine is a good fit. In this blog I will demonstrate how to simplify transformation rules by means of:

 

Reducing Coding

 

In my case I load PO goods receipt (GR) data and look up multiple characteristic values from PO item level. Instead of repetitively coding a similar lookup / mapping for each characteristic in an individual transformation rule, I did it once in the end routine (see the sketch after the screenshots below). This saved not only coding effort, but also increased performance by reducing the number of lookups.

 

End Routine 1.jpg

 

End Routine 2.jpg
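The screenshots above show the original routine. As an illustration only, a single lookup in an end routine that fills several target characteristics at once might be structured like the sketch below; the PO item table /BIC/AZPOITM00 and the field names EBELN, EBELP, PLANT and MATL_GROUP are assumptions, not the names from the original post:

DATA: BEGIN OF ls_po_item,
        ebeln      TYPE c LENGTH 10,
        ebelp      TYPE c LENGTH 5,
        plant      TYPE c LENGTH 4,
        matl_group TYPE c LENGTH 9,
      END OF ls_po_item,
      lt_po_item LIKE SORTED TABLE OF ls_po_item
                 WITH UNIQUE KEY ebeln ebelp.

FIELD-SYMBOLS: <result_fields> LIKE LINE OF RESULT_PACKAGE.

IF RESULT_PACKAGE IS NOT INITIAL.
* One single lookup for the whole package instead of one lookup per characteristic
  SELECT ebeln ebelp plant matl_group
    FROM ('/BIC/AZPOITM00')
    INTO TABLE lt_po_item
    FOR ALL ENTRIES IN RESULT_PACKAGE
    WHERE ebeln = RESULT_PACKAGE-ebeln
      AND ebelp = RESULT_PACKAGE-ebelp.
ENDIF.

LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  READ TABLE lt_po_item INTO ls_po_item
       WITH TABLE KEY ebeln = <result_fields>-ebeln
                      ebelp = <result_fields>-ebelp.
  IF sy-subrc = 0.
*   Map all looked-up characteristics in one place
    <result_fields>-plant      = ls_po_item-plant.
    <result_fields>-matl_group = ls_po_item-matl_group.
  ENDIF.
ENDLOOP.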

 

 

Increasing Reusability

 

During the PO GR data load I calculate the delivery duration based on the due date, the delivery duration based on the GR date, and the over / under variances of the two durations. I did not like the idea of repeating the duration calculation logic in the variance transformation rules. Instead I used the results of the duration calculations in the end routine to calculate the variances.

 

End Routine 3.jpg

End Routine 4.jpg

How to edit multiple records in PSA at once


One of the most common issues with BW data loads is incorrect data from the source system. For occasional failures we edit the PSA records instead of using a routine, since this doesn't need development work and transports. If we need to correct multiple records, it is a pain to correct them one by one. In this blog, I will show how to correct multiple records at once.

Example:

  1. You have loaded the data and it failed with incorrect data. You have checked the PSA records and noticed there are multiple records with the same issue.

1.png

 

   2. You can filter the records which have incorrect data, select all of them and click on the ‘Edit’ button.

2.png

 

    3. A blank record opens up on a pop-up screen.  Enter the correct data and save.

3.png

   4. Now you can check that the data is corrected for all the records you have selected.

4.png

 

Don't forget to notify the owners/analysts to correct the data in the source system.

HANA based BW Transformation


1      HANA based BW Transformation

 

This blog provides information on the push-down feature for transformations in SAP BW powered by SAP HANA. The content here is based on experiences with real customer issues. The material used is partly taken from the upcoming version of the SAP education course PDEBWP - BW Backend und Programming.


This blog is planned as part of a blog series which shares experiences collected while working on customer issues. The listed explanations are primarily based on releases between BW 7.40 SP09 and BW 7.5 SP00.

 

The following additional blogs are planned / available:

  • HANA based Transformation (deep dive)
  • DTP Source - Target Dependencies
  • Analyzing and debugging HANA based BW Transformations
  • SAP HANA Analysis Process
  • General recommendation
  • New features delivered by 7.50 SP04
    • Routines
    • Error Handling

 

A HANA based BW transformation is a “normal” BW transformation. The new feature is that the transformation logic is executed inside the SAP HANA database. From a design time perspective, in the Administrator Workbench, there is no difference between a HANA based BW transformation and a BW transformation that is executed in the ABAP stack. By default the BW runtime tries to push down all transformations to SAP HANA. Be aware that there are some restrictions which prevent a push down. For example a push-down to the database (SAP HANA) is not possible if a BW transformation contains one or more ABAP routines (Start-, End-, Expert- or Field-Routine). For more information see Transformations in SAP HANA Database.

 

Restrictions for HANA Push-Down

Further restrictions are listed in the Help Portal. However, the documentation is not all-inclusive. Some restrictions related to complex and "hidden" features in a BW transformation are not listed in the documentation. In this context “hidden” means that the real reason is not directly visible inside the BW transformation.

The BAdI RSAR_CONNECTOR is a good example for such a “hidden” feature. A transformation using a customer specific formula implementation based on this BAdI cannot be pushed down. In this case the processing mode is switched to ABAP automatically.

The BW workbench offers a check button in the BW transformation UI to check if the BW transformation is “SAP HANA executable” or not. The check will provide a list of the features used in the BW transformation which prevent a push down.

 

SAP is constantly improving the push-down capability by eliminating more and more restrictions. In order to implement complex customer-specific logic inside a BW transformation, it is possible to create SAP HANA Expert Script based BW transformations. This feature is similar to the ABAP based expert routine and allows customers to implement their own transformation logic in SQLScript. A detailed description of this feature is included later on.

 

SAP Note 2057542 - Recommendation: Usage of HANA-based Transformations provides some basic information and recommendations regarding the usage of SQL Script inside BW transformations.

 

1.1      HANA Push-Down

What is a SAP HANA push down in the context of BW transformations? When does a push down occur? What are the prerequisites for forcing a SAP HANA push down?

Before I start to explain how a SAP HANA based BW transformation could be created and what prerequisites are necessary to force a push down I will provide some background information on the differences between an ABAP and SAP HANA executed BW transformation.

A HANA based BW transformation executes the data transformation logic inside the SAP HANA database. Figure 1.1 shows on the left-hand side the processing steps for an ABAP based transformation and on the right-hand side for a SAP HANA based transformation.

 

Figure_1_1.png

Figure 1.1: Execution of SAP BW Transformations


An ABAP based BW transformation loads the data package by package from the source database objects into the memory of the Application Server (ABAP) for further processing. The BW transformation logic is executed inside the Application Server (ABAP) and the transformed data packages are shipped back to the Database Server. The Database Server writes the resulting data packages into the target database object. Therefore, the data is transmitted twice between database and application server.

 

During processing of an ABAP based BW transformation, the source data package is processed row by row (row-based). The ABAP based processing allows to define field-based rules, which are processed as sequential processing steps.

 

For the HANA based BW transformation the entire transformation logic is transformed into a CalculationScenario (CalcScenario). From a technical perspective the Metadata for the CalcScenario are stored as a SAP HANA Transformation in BW (see transaction RSDHATR).

 

This CalcScenario is embedded into a ColumnView. To select data from the source object, the DTP creates a SQL SELECT statement based on this ColumnView (see blog »Analyzing HANA based BW transformation«) and the processing logic of the CalcScenario applies all transformation rules (defined in the BW transformation) to the selected source data. By shifting the transformation logic into the CalcScenario, the data can be transferred directly from the source object to the target object within a single processing step. Technically this is implemented as an INSERT AS SELECT statement that reads from the ColumnView and inserts into the target database object of the BW transformation. This eliminates the data transfer between Database Server and Application Server (ABAP). The complete processing takes place in SAP HANA.


1.2      Create a HANA based BW Transformation

The following steps are necessary to push down a BW transformation:

  • Create a SAP HANA executable BW transformation
  • Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA


1.2.1       Create a standard SAP HANA executable BW transformation

A standard SAP HANA executable BW transformation is a BW transformation without SAP HANA specific implementation, which forces a SAP HANA execution.

The BW Workbench tries to push down new BW transformations by default.

The activation process checks a BW transformation for unsupported push down features such as ABAP routines. For a detailed list of restrictions see SAP Help -Transformations in SAP HANA Database.  If none of these features are used in a BW transformation, the activation process will mark the BW transformation as SAP HANA Execution Possible see (1) in Figure 1.2.

 

Figure_1_2.png

Figure 1.2: First simple SAP HANA based Transformation

 

When a BW transformation can be pushed down, the activation process generates all necessary SAP HANA runtime objects. The required metadata is also assembled in a SAP HANA Transformation (see Transaction RSDHATR). The related SAP HANA Transformation for a BW transformation can be found in menu Extras => Display Generated HANA Transformation, see (2) in Figure 1.2.

 

From a technical perspective a SAP HANA Transformation is a SAP HANA Analysis Process (see Transaction RSDHAAP) with a strict naming convention. The naming convention for a SAP HANA Transformation is TR_<< Program ID for Transformation (Generated)>>, see (3) in Figure 1.2. A SAP HANA Transformation is only a runtime object, which cannot be explicitly created or modified.

 

The tab CalculationScenario is only visible if the Export Mode (Extras => Export Mode On/Off) is switched on. The tab shows the technical definition of the corresponding CalculationScenario which includes the transformation logic and the SQLScript procedure (if the BW transformation is based on a SAP HANA Expert Script).

 

If the transformation is marked as SAP HANA Execution Possible (see (1) in Figure 1.2), the first precondition is met to push down and execute the BW transformation inside the database (SAP HANA). That means that if the flag SAP HANA Execution Possible is set, the BW transformation can be executed in both modes (ABAP and HANA), and the processing mode actually used is set inside the DTP. To be prepared for both processing modes, the BW transformation framework generates the runtime objects for both modes. Therefore the generated program (see Extras => Display Generated Program) for ABAP processing will also be visible.

 

The next step is to create the corresponding DTP, see paragraph 1.2.4 »Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA«.

 

1.2.2       Create a SAP HANA transformation with SAP HANA Expert Script

 

If the business requirement is more complex and it is not possible to implement these requirements with the standard BW transformation feature, it is possible to create a SQLScript procedure (SAP HANA Expert Script). When using a SAP HANA Expert Script to implement the business requirements the BW framework pushes the transformation logic down to the database. Be aware that there is no option to execute a BW transformation with a SAP HANA Expert Script in the processing mode ABAP, only processing mode HANA applies.

 

From the BW modelling perspective a SAP HANA Expert Script is very similar to an ABAP Expert Routine. The SAP HANA Expert Script replaces the entire BW transformation logic. The SAP HANA Expert Script has two parameters, one importing (inTab) and one exporting (outTab) parameter. The importing parameter provides the source data package and the exporting parameter is used to return the result data package.

 

However, there are differences in implementation between ABAP and SQLScript. An ABAP processed transformation loops over the source data and processes it row by row. A SAP HANA Expert Script based transformation tries to process the data in one block (INSERT AS SELECT). To get the best performance benefit from the push down it is recommended to use declarative SQLScript logic to implement your business logic within the SAP HANA Expert Script, see the blog »General recommendations«.

 

The following points should be considered before the business requirements are implemented with SAP HANA Expert Script:

  • ABAP is, from today's perspective, a more powerful language than SQLScript
  • Development support features such as syntax highlighting, forward navigation based on error messages, debugging support, etc. is better in the ABAP development environment.
  • SQL script development experience is currently not as widespread as ABAP development experience
  • A HANA executed transformation is not always faster

 

From the technical perspective the SAP HANA Expert Script is a SAP HANA database procedure. From the BW developer perspective the SAP HANA Expert Script is a SAP HANA database procedure implemented as a method in an AMDP (ABAP Managed Database Procedure) class.

 

The AMDP class is generated by the BW framework and can only be modified within the ABAP Development Tools for SAP NetWeaver (ADT), see https://tools.hana.ondemand.com/#abap. The generated AMDP class cannot be modified in SAP GUI based editors like the Class Builder (SE24) or the ABAP Workbench (SE80). Therefore it is recommended to implement the entire dataflow in the Modeling Tools for SAP BW powered by SAP HANA, see https://tools.hana.ondemand.com/#bw. The BW transformation itself must still be implemented in the Data Warehousing Workbench (RSA1).

 

Next I’ll give a step by step introduction to create a BW transformation with a SAP HANA Expert Script.

 

Step 1: Start a SAP HANA Studio with both installed tools:

  • ABAP Development Tools for SAP NetWeaver (ADT) and
  • Modeling Tools for SAP BW powered by SAP HANA

 

Now we must switch to the BW Modeling perspective. To open it, go to Window => Other... and select the BW Modeling perspective in the dialog that appears, see Figure 1.3.

 

Figure_1_3.png

Figure 1.3: Open the BW Modeling Perspective

 

To open the embedded SAP GUI a BW Project is needed. It is necessary to create the BW Project before calling the SAP GUI. To create a new BW Project open File => New => BW Project. To create a BW Project a SAP Logon Connection is required, choose the SAP Logon connection and use the Next button to enter your user logon data.

 

Recommendations: After entering your logon data it is possible to finalize the wizard and create the BW Project. I recommend to use the Next wizard page to change the project name. The default project name is:

 

     <System ID>_<Client>_<User name>_<Language>

 

I normally add a postfix for the project type at the end, such as _BW for the BW project. For an ABAP project later on I will use the postfix _ABAP. The reason I do this is that both projects use the same symbol in the project viewer, and the postfix makes it easier to identify the right project.

 

Once the BW Project is created we can open the embedded SAP GUI. The BW Modeling perspective toolbar provides a button to open the embedded SAP GUI, see Figure 1.4.

 

Figure_1_4.png

Figure 1.4: Open the embedded SAP GUI in Eclipse

 

Choose the created BW Project in the upcoming dialog. Next start the BW Workbench (RSA1) within the embedded SAP GUI and create the BW transformation or switch into the edit mode for an existing one.

 

To create a SAP HANA Expert Script, open Edit => Routines => SAP HANA Expert Script Create in the menu of the BW transformation. Confirm the request to delete the existing transformation logic. Keep in mind that everything already implemented, like start, end or field routines and formulas, will be deleted if you confirm the creation of a SAP HANA Expert Script.


In the next step the BW framework opens the AMDP class by calling the ABAP Development Tools for SAP NetWeaver (ADT). For this an ABAP project is needed. Select an existing ABAP Project or create a new one in the dialog.

 

A new window with the AMDP class will appear. Sometimes it is necessary to reload the AMDP class by pressing F5. Enter your credentials if prompted.


The newly generated AMDP class, see Figure 1.5, cannot be activated directly.


Figure_1_5.png

Figure 1.5: New generated AMDP Class


Before I explain the elements of the AMDP class and the method I will finalize the transformation with a simple valid SQL statement. The used SQL statement, as shown in Figure 1.6, is a simple 1:1 transformation and is only used as an example to explain the technical behavior.


Figure_1_6.png

Figure 1.6: Simple valid AMDP Method
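Since Figures 1.5 and 1.6 are screenshots, here is a simplified, hand-written sketch of what such a class looks like in source form. The class name, the structure TY_DATA and its fields are invented for illustration; the real class is generated with a technical name and with the source and target structures of your transformation, and depending on the release it carries additional technical parameters and the RECORD field described below:

CLASS zcl_demo_expert_script DEFINITION PUBLIC CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.   " marks the class as an AMDP class

    TYPES: BEGIN OF ty_data,         " stand-in for the generated source/target structure
             carrid   TYPE c LENGTH 3,
             connid   TYPE c LENGTH 4,
             fldate   TYPE c LENGTH 8,
             seatsocc TYPE i,
           END OF ty_data,
           tt_data TYPE STANDARD TABLE OF ty_data WITH DEFAULT KEY.

    METHODS procedure
      IMPORTING VALUE(intab)  TYPE tt_data
      EXPORTING VALUE(outtab) TYPE tt_data.
ENDCLASS.

CLASS zcl_demo_expert_script IMPLEMENTATION.

  METHOD procedure BY DATABASE PROCEDURE FOR HDB
                   LANGUAGE SQLSCRIPT
                   OPTIONS READ-ONLY.
    -- simple 1:1 "transformation": pass the inbound data package through unchanged
    outTab = SELECT carrid, connid, fldate, seatsocc
               FROM :inTab;
  ENDMETHOD.

ENDCLASS.

The essential part is the SELECT from the tabular input parameter :inTab, whose result is assigned to the output parameter outTab.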

 

Now we can activate the AMDP class and go back to the BW transformation by closing the AMDP class window. The BW transformation must then be activated as well. For a BW transformation with a SAP HANA Expert Script the flag SAP HANA Execution possible is set, see Figure 1.7.

 

Figure_1_7.png

Figure 1.7: BW Transformation with SAP HANA Script Processing


As explained before, if you use a SAP HANA Expert Script the BW transformation can only be processed in SAP HANA. It is not possible to execute the transformation on the ABAP stack. Therefore the generated ABAP program (Extras => Display Generated Program) is not available for a BW transformation with the processing type SAP HANA Expert Script.


1.2.2.1       Sorting after call of expert script


Within the BW transformation, the flag Sorting after call of expert script (Edit => Sorting after call of expert script), see Figure 1.8, can be used to ensure that the data is written to the target in the correct order.


Figure_1_8.png

Figure 1.8: Sorting after call of expert script


If the data is extracted by delta processing, the sort order of the data can be important (depending on the type of delta process used).

 

By default, the flag is always set for all new transformations and it’s recommended to leave it unchanged.

 

For older transformations, created with a release before 7.40 SP12, the flag is not set by default, so customers can set the flag themselves if they need the data in a specific sort order.

 

Keep in mind that the flag has an impact at two points:

  • The input/output structure of the SAP HANA Expert Script is enhanced or reduced by the field RECORD
  • If the flag is set, the result data of the SAP HANA Expert Script is sorted by the new field RECORD after the script has been called


The inTab and the outTab structure of a SAP HANA Expert Script will be enhanced by the field RECORD if the flag is set. The added field RECORD is a combination of the fields REQUESTSID, DATAPAKID and RECORD from the source object of the transformation, see Figure 1.9.


Figure_1_9.png

Figure 1.9: Concatenated field RECORD
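
In practice this means the expert script simply has to carry the generated RECORD column through to the outTab so that the framework can restore the original order afterwards. A short sketch of the relevant part of the script body (the business columns are again illustrative):

       -- Pass the framework-generated RECORD column through unchanged;
       -- it is the concatenation of REQUESTSID, DATAPAKID and RECORD from the source
       outTab = SELECT "MATERIAL",
                       "QUANTITY",
                       "RECORD"
                  FROM :inTab;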


The RECORD field from the outTab structure is mapped to the internal field #SOURCE#.1.RECORD. Later on in a rownum node of the CalculationScenario the result data will be sorted by the new internal field #SOURCE#.1.RECORD, see Figure 1.10.


Figure_1_10.png

Figure 1.10: CalculationScenario node rownum


1.2.2.2       The AMDP Class


The BW transformation framework generates an ABAP class with a method called PROCEDURE. The class implements the ABAP Managed Database Procedure (AMDP) marker interface IF_AMDP_MARKER_HDB, which marks the ABAP class as an AMDP class. A method of an AMDP class can be written as a database procedure. Therefore the BW transformation framework creates a HANA-specific database procedure declaration for the method PROCEDURE, see Figure 1.11:


Figure_1_11.png

Figure 1.11: Method PROCEDURE declaration


This declaration binds the method to the HANA database (HDB), sets the language to SQLSCRIPT and defines the database procedure as READ ONLY. The read-only option means that the method / procedure must be side-effect free: only reading data via SQL is allowed, while data modification statements such as DELETE, UPDATE or INSERT on persistent database objects are not. These data modification statements can also not be hidden by encapsulating them in a further procedure.
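
As a hedged illustration of the read-only restriction (identical in/out structures and the log table name are assumptions for this example):

     METHOD procedure BY DATABASE PROCEDURE
                      FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.
       -- Allowed: pure read access on the input table variable
       outTab = SELECT * FROM :inTab;
       -- Not allowed in a READ-ONLY procedure, e.g. writing to a (hypothetical) log table:
       -- INSERT INTO "/BIC/AZMYLOG2" SELECT * FROM :inTab;
     ENDMETHOD.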


You cannot directly read data from a database object managed by ABAP, like a table, view or procedure, inside an AMDP procedure, see (1) in Figure 1.12. A database object managed by ABAP has to be declared before it can be used inside an AMDP procedure, see (2). For more information about the USING option see AMDP - Methods in the ABAP documentation.


Figure_1_12.png

Figure 1.12: Declaration of DDIC objects


The AMDP framework generates wrapper objects for the declared database objects managed by ABAP. The view /BIC/5MDEH7I6TAI98T0GHIE3P69D1=>/BIC/ATK_RAWMAT2#covw in (3) was generated for the declared table /BIC/ATK_RAWMAT2 in (2). The blog Under the HANA hood of an ABAP Managed Database Procedure provides some further background information about AMDP processing and which objects are generated.
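
To make the USING declaration concrete, here is a minimal sketch of how the table /BIC/ATK_RAWMAT2 from the screenshot could be declared and joined inside the procedure; the join and lookup columns are illustrative assumptions, not the generated code.

     METHOD procedure BY DATABASE PROCEDURE
                      FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY
                      USING /bic/atk_rawmat2.
       -- The declared DSO table can now be read (read only) inside the procedure
       outTab = SELECT i."MATERIAL",
                       i."QUANTITY",
                       m."/BIC/TK_PRICE" AS "PRICE"   -- illustrative lookup column
                  FROM :inTab AS i
                  LEFT OUTER JOIN "/BIC/ATK_RAWMAT2" AS m
                    ON m."/BIC/TK_MAT" = i."MATERIAL";
     ENDMETHOD.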


AMDP Class modification

Only the method implementation belongs to the BW transformation metadata, and only this part of the AMDP class is stored, see table RSTRANSCRIPT.


Currently the ABAP Development Tools for SAP NetWeaver (ADT) do not protect the source code that should not be modified, as is done for ABAP routines. That means all modifications in the AMDP class outside the method implementation will not be transported to the next system and will be overwritten by the next activation, because the BW transformation framework regenerates the AMDP class during the activation process.


Later on I’ll provide some general recommendations in a separate blog, based on experience collected in customer implementations and customer incidents. The general recommendations will cover the following topics:

  • Avoid preventing filter push down
  • Keep internal tables small
  • Initial values
  • Column type definition
  • Avoid implicit casting
  • Use of DISTINCT
  • Potential pitfall at UNION / UNION ALL
  • Input Parameter inside underlying HANA objects
  • Internal vs. external format
  • ...


1.2.3       Dataflow with more than one BW transformation


The push-down option is not restricted to data flows with one BW transformation. It is also possible to push down a complete data flow with several BW transformations (called a stacked data flow). To get the best performance benefit from the push-down it is recommended to stack a data flow with a maximum of three BW transformations. More are possible but not recommended.

 

The InfoSources used in a stacked data flow (see SAP Help: InfoSource and Recommendations for Using InfoSources) can be used to aggregate data within the data flow if the processing mode is set to ABAP. If the processing mode is set to SAP HANA, the data will not be aggregated as defined in the InfoSource settings. The transformation itself does not know the processing mode, therefore you will not get a message about the InfoSource aggregation behavior. The processing mode is set in the DTP.

 

That means, the BW transformation framework prepares the BW transformation for both processing modes (ABAP and HANA). During the preparation the framework will not throw a warning regarding the lack of aggregation in the processing mode HANA.


By using the check button for the HANA processing mode within the BW transformation, you will get the corresponding warning regarding the InfoSource aggregation, see Figure 1.13.

 

Figure_1_13.png

Figure 1.13: HANA processing and InfoSources


CalculationScenario in a stacked data flow

The corresponding CalculationScenario for a BW transformation is not available if the source object is an InfoSource. That means the CalculationScenario tab is not available in the expert mode of the SAP HANA transformation, see Extras => Display Generated HANA Transformation. The reason is that an InfoSource cannot be used as a data source object in a CalculationScenario. The related CalculationScenario can only be obtained by using the SAP HANA Transformation of the corresponding DTP. I’ll explain this behavior later on in the blog »HANA based Transformation (deep dive)«.

 

1.2.4       Create a Data Transfer Process (DTP) to execute the BW transformation in SAP HANA


The Data Transfer Process (DTP) to execute a BW transformation provides a flag to control the HANA push-down of the transformation. The DTP flag SAP HANA Execution, see (1) in Figure 1.14, can be checked or unchecked by the user. However, the flag in the DTP can only be checked if the transformation is marked as SAP HANA Execution Possible, see (1) in Figure 1.2. By default the flag SAP HANA Execution will be set for each new DTP if

  • the BW transformation is marked as SAP HANA execution possible and
  • the DTP does not use any options which prevent a push down.

 

Up to BW 7.50 SP04 the following DTP options prevent a push down:

  • Semantic Groups
  • Error Handling - Track Records after Failed Request


The DTP UI provides a check button, like the BW transformation UI, to validate a DTP for HANA push-down. In case a DTP is not able to push down the logic of the data flow (all involved BW transformations), the check button will provide the reason.

 

Figure_1_14.png

Figure 1.14: DTP for the first simple SAP HANA based Transformation

 

In the simple transformation sample above I’m using one BW transformation to connect a persistent source object (DataSource (RSDS)) with a persistent target object (Standard DataStore Object (ODSO)). We also call this type a non-stacked data flow; I’ll provide more information about non-stacked and stacked data flows later. The related SAP HANA Transformation for a DTP can be found in the menu Extras => Display Generated HANA Transformation, see (2) in Figure 1.14. In case of a non-stacked data flow the DTP uses the SAP HANA Transformation of the BW transformation, see (3) in Figure 1.14.

 

Using a filter in the DTP does not prevent the HANA push-down. ABAP routines or BEx variables can be used as well; the filter values are calculated in a pre-step and added to the SQL SELECT statement which reads the data from the source object. We will look into this later in more detail.
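
As an illustration, a DTP filter routine fills a range table in a pre-step before the SELECT is generated. The following is a minimal sketch of that classic routine pattern; the names l_t_range and p_subrc follow the usual generated routine frame, while the field name CALDAY and the one-week window are assumptions for this example.

*    Hypothetical DTP filter routine: restrict the field CALDAY to the last seven days
     DATA: l_idx  LIKE sy-tabix,
           l_from TYPE d.

*    Date arithmetic via a date-typed helper variable
     l_from = sy-datum - 7.

     READ TABLE l_t_range WITH KEY fieldname = 'CALDAY'.
     l_idx = sy-tabix.

     l_t_range-fieldname = 'CALDAY'.
     l_t_range-sign      = 'I'.
     l_t_range-option    = 'BT'.
     l_t_range-low       = l_from.
     l_t_range-high      = sy-datum.

     IF l_idx <> 0.
       MODIFY l_t_range INDEX l_idx.
     ELSE.
       APPEND l_t_range.
     ENDIF.

     p_subrc = 0.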

 

1.2.5       Execute a SAP HANA based transformation

 

From the execution perspective, a HANA based transformation is handled in the same way as an ABAP based transformation: simply press the 'Execute' button or execute the DTP from a process chain.

 

Later on I will provide more information about packaging and parallel processing.

 

1.2.6       Limitations

 

There is no option to execute a transformation with a SAP HANA Script on the ABAP application server. With BW 7.50 SP04 (the next feature pack) it is planned to deliver further options to use SAP HANA Scripts (start, end and field routines are planned) within a BW transformation.



How to delete overlapping requests DSO using ABAP except loads from source DSO


After reading the great blog from J. Jonkergouw on his website about deleting overlapping requests from a DataStore Object using ABAP, we found we had a similar issue; however, we needed to keep one historic load in the DSO. That's why we altered the code a bit to keep the requests loaded from one particular source DSO.

 

Kudos to Joury for the code, it was very nice to implement.

 

     DATA:
       l_t_rsiccont   TYPE STANDARD TABLE OF rsiccont,
       lv_ftimestampc TYPE c LENGTH 14,
       lv_ttimestampc TYPE c LENGTH 14,
       lv_frtimestamp TYPE rstimestmp,
       lv_totimestamp TYPE rstimestmp,
       lv_calweek     TYPE /bi0/oicalweek,
       lv_first_date  TYPE scal-date,
       lv_last_date   TYPE scal-date.

     CONSTANTS:
       lc_begin_time TYPE c LENGTH 6 VALUE '000000',
       lc_end_time   TYPE c LENGTH 6 VALUE '235959',
       lc_dso        TYPE rsinfocube VALUE 'ZJJ_DSO_NAME',
       lc_dso_out    TYPE rsinfocube VALUE 'ZRB_DSO_NAME'.

     FIELD-SYMBOLS:
       <lfs_rsiccont> TYPE rsiccont.

*-  Convert the system date to a calendar week (ZBW_DATE_TO_ANYTHING is a custom function module).
     CALL FUNCTION 'ZBW_DATE_TO_ANYTHING'
       EXPORTING
         i_calday  = sy-datum
       IMPORTING
         e_calweek = lv_calweek.

*-  Get the first day of the week.
     CALL FUNCTION 'WEEK_GET_FIRST_DAY'
       EXPORTING
         week         = lv_calweek
       IMPORTING
         date         = lv_first_date
       EXCEPTIONS
         week_invalid = 1
         OTHERS       = 2.

*-  Determine the last day of the week.
     lv_last_date = lv_first_date + 6.

*-  Concatenate to a string with format YYYYMMDDHHMMSS.
     CONCATENATE lv_first_date lc_begin_time INTO lv_ftimestampc.
     CONCATENATE lv_last_date lc_end_time INTO lv_ttimestampc.

*-  Convert the from and to strings to the timestamp format
*-  needed to select data from RSICCONT.
     lv_frtimestamp = lv_ftimestampc.
     lv_totimestamp = lv_ttimestampc.

*-  Select all requests which are currently in the data monitor.
*-  The adjustment is an inner join to the request table, which stores the
*-  source of the DTP; requests from that source are excluded in the WHERE clause.
     SELECT rnr timestamp FROM rsiccont AS p
       INNER JOIN rsbkrequest AS r ON r~request = p~rnr
       INTO CORRESPONDING FIELDS OF TABLE l_t_rsiccont
       WHERE icube EQ lc_dso
         AND NOT src EQ lc_dso_out
         AND timestamp BETWEEN lv_frtimestamp AND lv_totimestamp.

*-  Sort DESCENDING: if we sorted ASCENDING, the oldest requests would be
*-  deleted first, including the ones up to the current date.
     SORT l_t_rsiccont BY timestamp DESCENDING.

*-  Loop over the requests.
     LOOP AT l_t_rsiccont ASSIGNING <lfs_rsiccont>.

*-    Delete the request from the DSO.
       CALL FUNCTION 'RSSM_DELETE_REQUEST'
         EXPORTING
           request                    = <lfs_rsiccont>-rnr
           infocube                   = lc_dso
           dialog                     = abap_false
         EXCEPTIONS
           request_not_in_cube        = 1
           infocube_not_found         = 2
           request_already_aggregated = 3
           request_already_comdensed  = 4
           no_enqueue_possible        = 5
           cube_in_planning_mode      = 6
           OTHERS                     = 7.

*      Uncomment if you want to enable error handling.
*      IF sy-subrc <> 0.
*        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
*              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
*      ENDIF.

     ENDLOOP.

HANA based BW Transformation - SAP Notes


A1      SAP Notes

This blog provides an overview of the most important SAP Notes regarding BW transformations and the HANA processing mode. It is part of the blog series HANA based BW Transformation.

 

A1.0 General Notes

2057542 - Recommendation: Usage of HANA-based Transformations

2230080 - Consulting: DTP: Out of memory situation during 'SAP HANA Execution' and the 'Request by Request' Extraction

 

A1.1      BW 7.40

2067912 - SAP HANA transformations and analysis processes: SAP Notes for SAP NetWeaver 740 with Support Package 8 or higher

2152643 - SAP HANA Processing: SAP HANA Processing: Sorting of records after call of expert script - Manual Activities

2222084 - DTP: Out of memory situation during 'SAP HANA Execution' and the 'Request by Request' Extraction

2254397 - SAP HANA Processing: BW 7.40 SP8 - SP14: HANA Analysis Processes and HANA Transformations (Part 19)

2299940 - SAP HANA Processing: BW 7.40 SP8 - SP15: HANA Analysis Processes and HANA Transformations (Part 20)

A1.2      BW 7.50

2192329 - SAP HANA Processing: BW 7.50 SP00 HANA Analysis Processes and HANA Transformations

2220753 - SAP HANA Processing: BW 7.50 SP00 - SP01: HANA Analysis Processes and HANA Transformations

2262474 - SAP HANA processing: BW 7.50 with SP00 - SP02: SAP HANA analysis processes and SAP HANA transformations

2281480 - SAP HANA processing: BW 7.50 with SP00 - SP03: SAP HANA analysis processes and SAP HANA transformations

2303781 - SAP HANA processing: BW 7.50 with SP00 - SP03: SAP HANA analysis processes and SAP HANA transformations (II)

BW Transport for Infopackage


Hello Friends,

 

Our team ran into an error while transporting InfoPackages to the production server.

 

The error occurred only in the production server and affected only InfoPackages.

 

After almost 3 days, we found an OSS Note that resolves this error: 1965709.

 

https://websmp230.sap-ag.de/sap/support/notes/1965709

 

 

 

ISIP Object Type entry.gif

 

TABLE RSTLOGOPROP.gif



And the solution which worked for us is given below:

 

1 - Update the table RSTLOGOPROP
2 - Delete the affected transport requests from the import buffers as advised in the OSS Note (help from the Basis team is needed for this)
3 - Create a new transport for the table RSTLOGOPROP and move it from development to production
4 - Create a new transport for the InfoPackages and transport them to the production server again

 

 

Thanks

Aby Jacob

HANA based Transformation (deep dive)


2      HANA based Transformation (deep dive)


This blog is part of the blog series  HANA based BW Transformation.


Now I will look a little bit behind the curtain and provide some technical background details about SAP HANA Transformations. The information provided here serves only for a better understanding of BW transformations which are pushed down.

 

As part of the analysis of HANA executed BW transformations we need to distinguish between simple (non-stacked) and stacked data flows. A simple, non-stacked data flow connects two persistent objects with no InfoSource in between; only one BW transformation is involved. We use the term stacked data flow for a data flow with more than one BW transformation and at least one InfoSource in between.

 

Stability of the generated runtime objects

All information provided here is background information to help you better understand SAP HANA executed BW transformations.

It is important to keep in mind that all object definitions can change!

Do not implement anything based on the generated objects!

The

  • structure of a CalculationScenario (view names, number of views, …),
  • generated SQL statements, and
  • PLACEHOLDER definitions

could be changed by the next release, support package or SAP note.

 

2.1      Simple data flow (Non-Stacked Data Flow)

 

A simple data flow is a data flow which connects two persistent BW objects with no InfoSource in between. The corresponding Data Transfer Process (DTP) processes only one BW transformation.

 

In case of a non-stacked data flow the DTP reuses the SAP HANA Transformation (SAP HANA Analysis Process) of the BW transformation, see Figure 2.1.

 

Figure_2_1.png

Figure 2.1: Non-Stacked Transformation


2.2      Stacked Data Flow

 

A stacked data flow connects two persistent data objects with at least one InfoSource in between. Therefore a stacked data flow contains at least two BW transformations. The corresponding Data Transfer Process (DTP) processes all involved BW transformations.

 

In case of a stacked data flow, the DTP cannot use the SAP HANA Transformation (SAP HANA Analysis Process) of the BW transformations. Strictly speaking, it is not possible to create a CalcScenario for a BW transformation with an InfoSource as source object. An InfoSource cannot be used as a data source in a CalculationScenario.


Figure_2_2.png

Figure 2.2: Stacked Transformation


Figure 2.2 shows a stacked data flow with two BW transformations (1) and (2) and the corresponding SAP HANA Transformations (3) and (4). There is no tab for the CalculationScenario in the SAP HANA Transformation (3) for the BW transformation (1) with an InfoSource as source object.

 

Therefore the DTP generates its own SAP HANA Transformations (6) and (7) for each BW transformation. The SAP HANA Transformations for the DTP are largely equivalent to the SAP HANA Transformations (3) and (4) of the BW transformations.

 

In the sample data flow above, the SAP HANA Transformation (5) for the DTP gets its own technical ID TR_5I3Y6060H25LXFS0O67VSCIF8. The technical ID is based on the technical DTP ID, with the prefix DTP_ replaced by the prefix TR_.

 

The SAP HANA Transformations (5) and (6) are in fact a single object. I included it twice in the picture to illustrate that the SAP HANA Transformation (6) is based on the definition of the BW transformation (1) and is used (5) by the DTP.

 

Figure 2.3 provides a more detailed view of the generated SAP HANA Transformations. The SAP HANA Transformation (6) is based on the definition of the BW transformation (1) and is therefore largely identical to the SAP HANA Transformation (3); (3) and (6) differ only in the source object. The SAP HANA Transformation (6) uses the SAP HANA Transformation / SAP HANA Analysis Process (7) as the source object instead of the InfoSource, as shown in (3). The SAP HANA Transformations (4) and (7) are also quite similar; they only differ with respect to the target object. In the SAP HANA Transformation (4), the InfoSource is used as the target object. The SAP HANA Transformation (7) does not have an explicit target object; its target is only marked as Embedded in a Data Transfer Process. That means the SAP HANA Transformation (7) is used as a data source in another SAP HANA Transformation, in our case in the SAP HANA Transformation (6).


Figure_2_3.png

Figure 2.3: Stacked Transformation (detailed)


The technical ID of the embedded SAP HANA Transformation (7) is based on the technical ID of the SAP HANA Transformations (5) and (6) of the DTP; only the digit 1 is appended as a counter for the level. This means that in case of a stacked data flow with more than two BW transformations, the next SAP HANA Transformation would get the additional digit 2 instead of 1, and so on.

 

Later on we will need the technical IDs to analyze a HANA based BW transformation, therefore it is helpful to understand how they are being created.


2.3      CalculationScenario


To analyze a SAP HANA based BW transformation, it is necessary to understand the primary SAP HANA runtime object, the CalculationScenario (CalcScenario). The BW Workbench shows the CalcScenario in an XML representation, see (2) in Figure 2.4. The CalculationScenario is part of the corresponding SAP HANA Transformation (Extras => Display Generated HANA Transformation) of the DTP. The CalculationScenario tab is only visible if the Expert Mode (Extras => Expert Mode on/off) is switched on. Keep in mind that if the source object of the BW transformation is an InfoSource, the CalculationScenario can only be reached via the DTP metadata, see »Dataflow with more than one BW transformation« and »Stacked Data Flow«.


The naming convention for the CalculationScenario is:


     /1BCAMDP/0BW:DAP:<Technical ID – SAP HANA Transformation>


The CalculationScenario shown in the CalculationScenario tab, see (2) and (1) in Figure 2.4, is only a local temporary version, hence the additional postfix .TMP.


The CalculationScenario processes the transformation logic by using different views (CalculationViews) to split the complex logic into simpler single steps. One CalculationView, the default CalculationView, represents the CalculationScenario itself. The default CalculationView uses one or more other CalculationViews as sources, and so on, see Figure 2.5. A CalculationView cannot be used in SQL statements, therefore a ColumnView is generated for each CalculationView.

 

Inside the SAP HANA database, the ColumnViews are created in the SAP<SID> schema, see (1). Each ColumnView represents a CalculationView within a CalculationScenario, see paragraph 2.3.2.1 »CalculationScenario - calculationViews«. The CREATE statement of each ColumnView provides two objects: first the CalculationScenario and second the ColumnView based on it. All ColumnViews that belong to a SAP HANA transformation are based on the same CalculationScenario.


Figure_2_4.png

Figure 2.4: CalculationScenario and ColumnViews


SAP HANA internally uses JSON notation to represent a CalculationScenario. Figure 2.5 shows the CalculationScenario depicted in Figure 2.4 (2) in a JSON analyzer. The tree representation provides a good overview of how the different CalculationViews are consumed. The JSON based CalcScenario definition can be found in the Create Statement tab of the column view definition in the SAP HANA Studio, in the USING clause of the CREATE CALCULATION SCENARIO statement; the definition starts with ‘[‘ and ends with ‘]’.


Figure_2_5.png

Figure 2.5: CalculationScenario in a JSON Analyzer


The SAP HANA Studio also provides a good tool to visualize a CalculationScenario, see Figure 2.6. To open the visualization tool, click on Visualize View in the context menu of a ColumnView based on a CalculationScenario. The visualization view is divided into three parts. The first part (1) provides a list of the CalculationViews used inside the CalculationScenario; depending on the view definition, further information about variables, filters or attributes is available below each view node. The second part (2) provides an overview of the view dependencies, i.e. which view consumes which view. The third part (3) provides context-sensitive information for a view selected in the second part.


Figure_2_6.png

Figure 2.6: CalculationScenario visualization in the SAP HANA Studio


Now we will have a deeper look into the following sub nodes of the calculationScenario node (see (2) in Figure 2.4):

  • dataSources
  • variables
  • calculationViews


2.3.1 CalculationScenario - dataSources


The node dataSources lists all available data source objects of the CalculationScenario. The following data sources are used within a CalculationScenario in the context of a BW transformation:

  • tableDataSource
  • olapDataSource
  • calcScenarioDataSource

In the first sample transformation, we only use a database table (tableDataSource) as the source object, see Figure 2.7. The sample data flow reads from a DataSource (RSDS), therefore the corresponding PSA table is defined as tableDataSource. To resolve the request ID, the SID table /BI0/SREQUID is also added to the list of data sources.


Figure_2_7.png

Figure 2.7: CalculationScenario – Node: TableDataSource


In the second sample, a transformation rule of type Master data read is used in the BW transformation. In this case an olapDataSource is added to the list of data sources. The olapDataSource uses the logical index (0BW:BIA:0MATERIAL_F4) of the InfoObject to read the required master data from the real source tables, see Figure 2.8.


Figure_2_8.png

Figure 2.8: CalculationScenario – Node: OLAPDataSource


To read the master data in the requested language, object version and time, the PLACEHOLDERS

  • keydate,
  • objvers
  • langu

are added.

 

The third sample is a stacked data flow. In a stacked data flow the CalculationScenario from the DTP uses another CalculationScenario as data source. In these cases, the calcScenarioDataSource is used. The variables defined in the upper CalculationScenario are passed to the underlying CalculationScenario to be able to push down these variables (filters) where possible to the source objects, see Figure 2.9.


Figure_2_9.png

Figure 2.9: CalculationScenario – Node: CalcScenarioDataSource

 

The values for the PLACEHOLDERS are passed in the SQL statement by using the variables, see paragraph 2.3.2 »CalculationScenario - variables«.

 

The PLACEHOLDER values for the variables keydate, objvers and langu are always set in the INSERT AS SELECT statement, whether they are used or not.


2.3.2       CalculationScenario - variables


The node variables defines all parameters which are used in the CalculationScenario and can be used in the SQL statement to filter the result, see Figure 2.10.


Figure_2_10.png

Figure 2.10: CalculationScenario – Node: variables


Placeholder usage by customer

All variables and placeholders defined in a CalculationScenario in the context of a BW transformation are intended for SAP internal usage only. Variable and placeholder names are not stable, which means they can be changed, replaced or removed.


Figure 2.11 provides a further sample, based on a BW transformation, with several dataSource definitions. A variable is used to control which dataSource, and ultimately which table, the SQL statement reads the data from.

 

The sample data flow for this CalculationScenario reads from an advanced DSO (ADSO) (based on a Data Propagation Layer - Template) with three possible source tables (Inbound Table, Change Log and Active Data). For each source table (dataSource), at least one view is generated into the CalculationScenario and all three views are combined by a union operation, see (2).

 

The input nodes are used to enhance all three structures by a new constant field named BW_HAP__________ADSO_TABLE_TYPE. The constant values

  • Inbound Table (AQ),
  • Change Log (CL) and
  • Active Data (AT)

can later be used as values for the filter $$ADSO_TABLE_TYPE$$, see (3). The filter value is handed over by the SELECT statement and depends, for example, on the DTP settings (Read from Active Table or Read from Change Log). To read data only from the active data (AT) table the following placeholder setting is used:

 

     'PLACEHOLDER'=('$$adso_table_type$$',   '( ("BW_HAP__________ADSO_TABLE_TYPE"=''AT'' ) )'),

 

For further information see 2.4 »SQL Statement«.

 

Figure_2_11.png

Figure 2.11: CalculationScenario – DataSource and Variable collaboration


2.3.2.1 CalculationScenario - calculationViews


The next relevant node type is the node calculationView. A CalculationScenario uses several layered views (calculationView) to transfer the logic given by the BW transformation. For the CalculationScenario and for each calculationView, a ColumnView is created, see (1) in Figure 2.4. The CalculationScenario related to a column view can be found in the definition of each column view.

 

There are several view types which can be used as sub node of a calculationView:

  • projection
  • union
  • join
  • aggregation
  • unitConversion
  • verticalUnion
  • rownum
  • functionCall
  • datatypeConversion


The view types as well as the number of views used in a CalculationScenario depend on the logic defined in the BW transformation and the BW / SAP HANA release.


A SELECT on a CalculationScenario (ColumnView) always reads from the default view; only one default view is allowed. The default view can be identified by checking whether the attribute defaultViewFlag is set to “true”. In the JSON representation in Figure 2.5, the default view is always shown as the top node.


The processing logic of each view is described in further sub nodes. The most important sub nodes are:

  • viewAttributes / attributes
  • inputs
  • filter

 

The used sub nodes of a CalculationView depend on the view type, on the logic defined in the BW transformation, and the BW / SAP HANA release.


CalculationScenario - calculationViews – view - viewAttributes / attributes


The nodes viewAttributes and attributes are used to define the target structure of a calculation view. The node attributes is used for more complex field definitions like data type mappings and calculated attributes (calculatedAttributes).

 

The InfoObject TK_MAT is defined as CHAR(18) with the conversion routine ALPHA. To ensure that all values comply with the ALPHA conversion rules, the CalculationScenario creates an additional field TK_MAT$TMP as a calculatedAttribute. Figure 2.12 shows the definition of the new field. The ALPHA conversion rule logic is implemented as a formula based on the original field TK_MAT.


Figure_2_12.png

Figure 2.12: CalculationScenario - CalculationAttributes


Calculated attributes are memory-intensive and we try to avoid them where possible. But there are some scenarios where calculated attributes must be used. For example, in case of target fields based on InfoObjects with conversion routines (see Figure 2.12) and in case of “non-trustworthy” sources. A “non-trustworthy” data source is a field-based source object (that is not based on InfoObjects), for example a DataSource (RSDS) or a field based advanced DataStore-Object (ADSO). In case of “non-trustworthy” data sources, the CalculationScenario must ensure that NULL values are converted to the correct type-related initial values.


CalculationScenario - calculationViews – view inputs


A calculation view can read data from one or more sources (inputs). CalculationViews and/or data sources can be used as sources; they are listed under the node input. A projection or an aggregation, for example, typically has one input node, while a union or a join typically has more than one.

 

The input node in combination with the mapping and viewAttribute nodes can be used to define new columns. In Figure 2.13 the attribute #SOURCE#.1.0REQTSN is defined as a new field based on the source field 0REQTSN.


Figure_2_13.png

Figure 2.13: CalculationScenario - input

 

CalculationScenario - calculationViews – view filter


The filter node, see (2) in Figure 2.14, is used in combination with the variable, see (1) in Figure 2.14, to provide the option to filter the result set of the union view (TK_SOI). The filter value is set as a placeholder in the SQL statement, see (3) in Figure 2.14.


Figure_2_14.png

Figure 2.14: CalculationScenario - input

 

I will come back to the different views later on in the analyzing and debugging section.


2.4 SQL Statement

 

Running a DTP triggers an INSERT AS SELECT statement that reads data from the source and directly inserts the transformed data into the target object. There are two kinds of filter options available to reduce the processed data volume: the WHERE condition in the SQL statement and the Calculation Engine PLACEHOLDER. Placeholders are used to set values for variables / filters which are defined in the CalculationScenario. Which placeholders are used depends on the logic defined in the BW transformation and the BW / SAP HANA release.


PLACEHOLDER


The following table describes the most important PLACEHOLDERS which can be embedded in a CalculationScenario and used in the SQL statements. Which PLACEHOLDER is used in the CalculationScenario depends on the implemented data flow logic.


Important note about the PLACEHOLDER

PLACEHOLDERS are not stable and their definition can be changed by a new release, support package or note!

It is not supported to use the PLACEHOLDERS listed here inside an SQL script or any other embedded database object in the context of a BW transformation. This also applies to all PLACEHOLDERS used in the generated SQL statement.

 

 

Placeholder name and description:

  • $$client$$: Client value used to read client-dependent data. The placeholder is always set, whether it is used or not.

  • $$change_log_filter$$: Used only to filter the data read from the change log. The filter values are equivalent to the values of the placeholder $$filter$$. This placeholder is only used for advanced DataStore-Objects (ADSO). See also $$inbound_filter$$ and $$nls_filter$$.

  • $$change_log_extraction$$: Set to 'X' if the extraction is done from the change log. Used in the context of error handling.

  • $$datasource_src_type$$: Set to 'X' in case the data is read from the remote source object and not from the PSA.

  • $$dso_table_type$$: Controls which table of a standard DataStore-Object (ODSO) is used as data source. For this purpose the field BW_HAP__________DSO_TABLE_TYPE can be set to Active Table (0) or Change Log (3).

  • $$filter$$: Used to filter the source data where possible; the placeholder is typically used in the next view above the union of all available source tables. It contains the filters defined in the DTP extraction tab plus some technical filters based on the REQUEST or the DATAPAKID. If an ABAP routine or a BEx variable is used in the DTP filter, the result of both is used to create the filter condition. The placeholder $$filter$$ is used for all BW objects except advanced DataStore-Objects (ADSO). To filter an ADSO see $$inbound_filter$$, $$change_log_filter$$ and $$nls_filter$$.

  • $$inbound_filter$$: Used only to filter the data read from the inbound queue. The filter values are equivalent to the values of the placeholder $$filter$$. This placeholder is only used for advanced DataStore-Objects (ADSO). See also $$change_log_filter$$ and $$nls_filter$$.

  • $$changelog_filter$$: Used only to filter the data read from the change log. The filter values are equivalent to the values of the placeholder $$filter$$. This placeholder is only used for advanced DataStore-Objects (ADSO). See also $$change_log_filter$$ and $$nls_filter$$.

  • $$keydate$$: Date used to read time-dependent master data. The value is applied to the logical index of an InfoObject in an olapDataSource. The variable is always set, whether it is used or not.

  • $$langu$$: Language used to read master data. The value is applied to the logical index of an InfoObject in an olapDataSource. The placeholder is always set, whether it is used or not.

  • $$navigational_attribute_filter$$: Lists the filters based on navigation attributes which are used in the DTP filter.

  • $$objvers$$: Object version used to read master data. The value is applied to the logical index of an InfoObject in an olapDataSource. The variable is always set, whether it is used or not.

  • $$runid$$: For some features it is necessary to store metadata in a temporary table. In a prepare phase the metadata is inserted into this temporary table, identified by a unique ID (runid). At runtime the values are then read by using the runid of this placeholder. This placeholder is primarily used in a CalcScenario based on an explicitly created SAP HANA Analysis Process (see transaction RSDHAAP) and not a SAP HANA Transformation.

  • $$target_filter$$: Used in the SQL statement to ensure that only those records of the result set which match this filter condition are inserted into the target object. This placeholder is used if the filter condition is given by the target object, for example by a semantically partitioned object (SPO). A target filter is applied to the result set of the transformation.

  • $$datasource_psa_version$$: Relevant version number of the PSA where the current request is located.

  • $$DATAPAKID$$.DTP: Set by the package size parameter maintained in the DTP. More information, especially about dependencies, can be found in the value help (F1) for the DTP package size parameter.

  • $$REQUEST$$.DTP: Contains the request ID of the target request. In simulation mode, the value is always set to the SID value of the request ID DTPR_SIMULATION in the request table /BI0/SREQUID.

 

 

 

 


The following SQL SELECT statement, see Figure 2.15, belongs to the CalculationScenario as shown in Figure 2.11. The SQL statement passes the value for the variable adso_table_type. The placeholder

 

     'PLACEHOLDER'=('$$adso_table_type$$',   '( ("BW_HAP__________ADSO_TABLE_TYPE"=''AT'' ) )'),

 

sets the field BW_HAP__________ADSO_TABLE_TYPE to ’AT’. In the union definition in Figure 2.11, see (2), the field BW_HAP__________ADSO_TABLE_TYPE is set to the constant value ’AT’ for all rows provided by the active data table. In this way, the placeholder ensures that only data from the active data table is selected.

 

Figure_2_15.png

Figure 2.15: DTP - SELECT statement

 

Some PLACEHOLDERS get the transformation ID as a prefix to ensure that these PLACEHOLDER identifiers are unique, see the placeholder $$runid$$ in Figure 2.15 and the calcScenarioDataSource description in the blog HANA based BW Transformation.

 

 
