SCN Blog List - SAP Business Warehouse

How to download a SAP Note without OSS Id


Introduction

 

Many SAP BI/BW consultants do not have an OSS ID to browse SAP Notes on the SAP Service Marketplace. The following steps show how to download a Note without an OSS ID.

 

Step 1: Go to transaction SNOTE in the development system. You will see a screen like the one below.

 

Snote.JPG

Step 2: Choose Goto --> Download SAP Note.

Step 3: Enter the Note number and press Execute.

Step 4: At times you may get an RFC connection error. You can ignore this message and try again.

Step 5: The SAP Note is downloaded and available under SAP Notes --> New, as shown in the screenshot.

 

These simple steps let you download any SAP Note.


Data Unification Services - Supplementing Administrative Characteristics


Standardization is a key aspect of SAP BW Layered, Scalable Architecture (LSA), SAP’s best practice in Enterprise Data Warehousing. One of the ways to realize standardization in the data staging process is developing generic, reusable ABAP building blocks with a comprehensive interface.

From an Enterprise Data Warehousing perspective, the process of Data Unification can be classified as a modeling pattern. The Data Unification Services' ABAP building block and central control tables are designed to standardize and facilitate this process.

In this blog I will discuss Data Unification Services, the concept behind it and how to use it in the transformation.

I developed a working version of the Data Unification Services which I am going to share via a document. It will cover all details on creating the ABAP Objects classes, including the source code and all necessary ABAP Workbench objects.

Conceptual Overview

Data Unification in an LSA context is all about supplementing data records with administrative characteristics. Usually it concerns the following characteristics:

  • Data Domain;
  • Organizational Unit;
  • Request ID;
  • Timestamp;
  • Origin.

 

Data Domain is related to domain partitioning or strategic partitioning and is applicable to transactional data flows. Please refer to my blogs Pattern-based Partitioning using the SPO BAdI - Part 1: Introduction and Pattern-based Partitioning using the SPO BAdI - Part 4: Use Cases for more information.

Domain partitioning is not always applicable, e.g. in the Data Acquisition Layer, the Corporate Memory Layer and master data flows in general. In these cases the data is simply assigned to domain 'X' (cross-domain or domainless).

 

Data Unification Services is positioned in the Harmonization & Quality Layer.

 

Figure_1_Positioning.jpg

Figure 1: Positioning in LSA (source: SAP AG)

 

The following figure shows an example of an LSA compliant transactional data flow.

 

Figure_2_Data_Flow.jpg

Figure 2: Data Unification Services in the Data Flow

 

The yellow callout indicates where Data Unification Services is implemented. The transformation is positioned between the Data Acquisition Layer outbound InfoSource and the Harmonization Layer Pass-thru DSO.

For other data flows it is slightly different. In master data flows the transformation is positioned between the Data Acquisition Layer outbound InfoSource and the Harmonization Layer intermediate InfoSource. In Corporate Memory flows the transformation is positioned between the Data Acquisition Layer outbound InfoSource and the Corporate Memory DSO.

Note that in all cases this transformation is exclusively used for Data Unification Services.

Control Tables

For transactional data flows the basic rule is that domain partitioning is applied. Data Unification Services needs help to be able to determine Domain and Organizational Unit automatically. Therefore we have to introduce 3 control tables.

Exclusively for transactional DataSources the following control tables have to be maintained:

  • YBWUNISRC - Domain Driving Source Field DataSource;
  • YBWUNIVAL - Value Assignments;
  • YBWUNIREF - Reference Domain Driving Characteristic.

 

Table YBWUNISRC must always be maintained for every new transactional DataSource. It stores the domain driving source field of the DataSource and the corresponding InfoObject.

 

Figure_3_Domain_Driving_Source_Field_DataSource.jpg

Figure 3: Control Table - Domain Driving Source Field DataSource

 

Table YBWUNIVAL is used to store the value assignments. It has to be maintained less frequently since existing value assignments can be reused. E.g. once InfoObject Company Code and its characteristic values are maintained, every subsequent DataSource can reuse these value assignments; there is no need to maintain them more than once. Only if new Company Codes are introduced do the entries have to be maintained again.

 

Figure_4_Value_Assignments.jpg

Figure 4: Control Table - Value Assignments

 

In some cases multiple InfoObjects are used for the same entity. In such cases it must be avoided that the value assignments are entered more than once; that is why the reference domain driving characteristic functionality was introduced. Table YBWUNIREF is used for this purpose.

 

Figure_5_Reference_Domain_Driving_Characteristic.jpg

Figure 5: Control Table - Reference Domain Driving Characteristic

Using Data Unification Services

A prerequisite for using Data Unification Services is the presence of the administrative characteristics in the target structure of the transformation, i.e. you have to add these characteristics to the DSO or InfoSource (depending on the data flow). The example below shows a DSO where the characteristics have to be added to the Data Fields section.

 

Figure_6_Adding_Administrative_Characteristics.jpg

Figure 6: Adding Administrative Characteristics to DSO

 

The Data Unification Services building block is realized as an ABAP Objects class. One of the following public methods has to be called in the transformation (end routine or expert routine):

  • EXECUTE_END - Execute Transformation End Routine;
  • EXECUTE_END_X - Execute Transformation End Routine (Only Domain 'X');
  • EXECUTE_EXPERT - Execute Transformation Expert Routine;
  • EXECUTE_EXPERT_X - Execute Transformation Expert Routine (Only Domain 'X').

 

The screenshot below shows an example of the small amount of source code to be entered here; a textual sketch follows the figure.

 

Figure_7_Coding_Example.jpg

Figure 7: Example of Data Unification Services in the Transformation
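In case the screenshot does not render, here is a minimal sketch of what the call inside the generated end routine might look like. Only the method name EXECUTE_END is taken from this blog; the class name ZCL_BW_UNIFICATION, the parameter names and the example DataSource are assumptions, not the delivered interface.

* Inside METHOD end_routine of the transformation (a sketch, not the delivered code)
  zcl_bw_unification=>execute_end(
    EXPORTING
      i_datasource = '2LIS_11_VAITM'      " assumed: source DataSource of this data flow
    CHANGING
      c_t_result   = RESULT_PACKAGE ).    " administrative characteristics are filled here

The full class, including the real signature and the exception handling that writes to the monitor log, is contained in the document referred to above.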

 

The source code can easily be inserted by using the appropriate ABAP Pattern. Click on push button Pattern, choose option Other Pattern and select the appropriate pattern in line with the type of routine (end routine or expert routine) and domain.

 

Figure_8_ABAP_Patterns.jpg

Figure 8: Inserting the Data Unification ABAP Pattern

 

The following ABAP Patterns are available:

  • YBW_END_UNI - Execute Transformation End Routine;
  • YBW_END_UNI_X - Execute Transformation End Routine (Only Domain 'X');
  • YBW_EXPERT_UNI - Execute Transformation Expert Routine;
  • YBW_EXPERT_UNI_X - Execute Transformation Expert Routine (Only Domain 'X').

 

All Data Unification functionality is provided by the class. The source package and/or result package are processed dynamically. The relevant information is fetched from the control tables. Any exceptions are handled appropriately: they terminate the process in a controlled way and fill the monitor log with information on the errors.

Conclusion

From an Enterprise Data Warehousing perspective, the process of Data Unification can be classified as a modeling pattern. The Data Unification Services' ABAP building block and central control tables are designed to standardize and facilitate this process.

In this blog we discussed Data Unification Services, the concept behind it and how to use it in the transformation.

How to determine the tables behind a standard BW object


I have seen a lot of posts asking for the standard tables behind BW objects, and I hope to clarify this with this blog. An object is transported between two BW systems by transporting the table entries of the standard tables associated with that object. Let me show you how to see the standard tables behind a DSO.

 

1) Collect any DSO in a transport request

TR.JPG

 

2) Double-click the transport request to get the screen below

 

TR1.JPG

 

3) Double-click the line corresponding to the DSO, or right-click the line and choose Display. You will get the list of tables associated with the DSO.

 

Table.JPG

Thus you can get all the tables associated with the DSO. You can see the description of each table in SE11 to know what is stored in it and find the table you are looking for.

 

You can see all the tables associated with any BW object (InfoCube, InfoObject, MultiProvider, transformation, etc.) by following the above technique.

Tutorial of Association Analysis


Hello everyone,

 

In this post I am going to demonstrate how to use the Data Mining Model of SAP BW to run an Association Analysis.

 

In order to complete this tutorial, a CSV data source file containing the relevant data is needed for data extraction. You can also create your own, like this:

24-05-2013 Friday 10-26-39 PM.jpg

Access the Data Mining Workbench: use transaction RSDMWB or double-click Data Mining Model.

23-05-2013 Thursday 4-03-22 PM.jpg

  The Data Mining Workbench screen appears as follows:

23-05-2013 Thursday 4-07-08 PM.jpg

The workbench includes a list of data mining methods that can be used to analyze data commonly stored in business applications. These methods include classification (decision trees), clustering, association analysis, approximation, and further analysis techniques such as ABC classification.

 

 

Phase 1  --- Creating an Association Model

 

1. Expand the Association Analysis method, right-click it and choose Create Model.

 

The Create Model dialog screen appears as follows:

23-05-2013 Thursday 5-00-09 PM.jpg

 

2. Enter a model name and description, select the Manual radio button, and accept the entry.

 

The Create Model XXX screen will appear as follows:

23-05-2013 Thursday 4-57-36 PM.jpg

You now need to select the fields (InfoObjects) needed for your model. The selected fields need to match those in the data source file shoppingdataset.csv.

3. Click the insert-row icon to insert a row for the first InfoObject (Material).

 

Notice that the properties (data type and length) of the InfoObject are populated automatically. The Content Type option defines how each field is to be treated within the model. There are three possible options: transaction, item and transaction weight.

4. Click the dropdown icon in each field to display the available options and select the appropriate option.

After completing this step the field assignment for the Association Model should appear as follows:

23-05-2013 Thursday 5-53-42 PM.jpg

5. Click the Parameters tab to set the following parameters.

 

Support can be used to help determine how useful a product association rule is.

 

Confidence (or predictability) measures how likely the dependent item is to be purchased given that the leading item has been purchased.

 

Lift helps to identify and eliminate rules that are only generated because some of the items naturally occur very frequently, although there is no actual association between the set of leading and depending items.
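For reference, these three measures can be written as simple formulas (a standard summary of association-rule measures, not taken from the original post):

support(A -> B)    = share of all transactions that contain both A and B
confidence(A -> B) = support(A -> B) / support(A)
lift(A -> B)       = confidence(A -> B) / support(B)

For example, if 20% of all transactions contain both milk and bread, 40% contain milk and 25% contain bread, then confidence(milk -> bread) = 0.20 / 0.40 = 0.5 and lift = 0.5 / 0.25 = 2, i.e. bread is bought twice as often when milk is in the basket as it is overall.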

 

The Association Model uses leading and dependent items: if a leading item has been purchased, what is the likelihood that the dependent items are also purchased? For example, if a customer purchases milk (leading item), what is the likelihood they will also purchase bread (dependent item)? The relationship between leading and dependent items is not limited to one-to-one. The parameter Leading Depth sets the number of products to be considered together as leading items, and the parameter Dependent Depth sets the number of dependent items to be considered in the model's analysis.

 

                    Leading Items        Dependent Items
                    milk                 bread
                    milk, spread         bread
                    milk                 bread, spread

 

6. Enter a value in the Maximum Leading Depth field and in the Maximum Dependent Depth field.

7. Click the back icon to return to the previous screen.

8. Click the save icon to save your model.

 

A message appears in the Status Bar.

9. Click the activation icon to activate the model. A log screen will appear indicating the status of each step of the model building process.

10. Click the close icon to close this screen. You have now successfully created the Association Model.

 

Next, I am going to use SAP GUI and run Phases 2 & 3 --- Training and Execution of the Association Model.


 

Thanks, I hope you enjoy it ! 

How to edit a Transport Request


This guide is sort of a trick..:D


This will teach you how to edit a transport request that you have already transported to QAS or PROD. For example, you have already obtained approval for the transport request you created. After transporting all the InfoProviders and InfoObjects, you notice that you missed a single InfoObject in your request. You would then have to seek approval for another transport request for that one InfoObject and re-transport all the InfoProviders and InfoObjects you have already transported.


Using this trick, you do not need to seek approval for another transport request since you will reuse the same transport request number. You simply edit your first transport request and add the single InfoObject that you missed.


Here is the trick guyz!

 

1. Go to SE10 or STMS and copy the transport request number that you want to edit.

 

1.PNG

 

2. Go to SE38 and run the program RDDIT076. Click the EXECUTE button (near the ACTIVATE button) or press F8 to RUN.

 

2.PNG

 

3. Paste the Transport Request Number (from STEP 1) and click the EXECUTE button.

 

3.PNG

 

4. Double click the first row and edit its STATUS; change it from R to D. Click the SAVE button afterwards.

 

4.png

 

5.png

 

5. Do STEP 4 on the second row. You will have the screen below.

 

6.PNG

 

6. Go to SE10 and double click the top row of your transport request number.

 

7.png

 

7. DELETE all ROWS.

 

8.png

 

8. Go to the PROPERTIES tab. Delete all rows in the ATTRIBUTE column by clicking each row and then clicking the DELETE ATTRIBUTE button.

 

9.png

 

9. Click SAVE.

 

10.PNG

 

10. Now include the objects that you want to add to your existing request by clicking the OWN REQUESTS button and locating your edited transport request.

 

11.png

 

By the way, you can also use this trick if you have already released your transport request and forgot to include something in it.

 

Hope this will help you guyz!

Error Replicating 3.5 Datasources after upgrade from BI 7.0 to BW 7.3


Hi users,
I wanted to share an issue I faced recently after upgrading our system landscape from BI 7.0 to BW 7.3.

We have 3.5 DataSources which serve as targets for 3.5 DataSources in other teams' systems. These are master data loads.
The system landscape of those (source) teams is on BW 3.5.

 

When those teams executed their process chains after our upgrade to BW 7.3, the loads were failing.
The error message was not very helpful: error code 2 - source file must be open in source system.

The source teams tried to replicate the DataSources, but the moment they clicked Replicate DataSources, the DataSources would simply disappear.
This was strange behaviour; we concluded that it was due to an incompatibility between the BW 3.5 and BW 7.3 versions.

I tried Google, SCN, etc. but could not find much information on this behaviour.


We checked the data flows in both the target and source systems and they were active.

Then we activated the transfer structures for the affected InfoSources using program RS_TRANSTRU_ACTIVATE_ALL, executed the loads again, and they were successful.

 

I hope this blog is helpful for users facing the same error.

 

Regards
Syed Zabiullah

CHALLENGE: Call BW planning sequence from Script logic with input parameters


Armed with only ABAP development experience and hardly any BW or BPC Script Logic experience, I had to consider where to start.

 

...SCN..of course

 

My first search turned up this document, which was really helpful in explaining the exit framework that allows customers to use ABAP within BPC Script Logic by calling a BAdI from the Script Logic.

 

After creating the implementation as required:

Z_EI_PLAN_SEQ.PNG

The document explained to me how to implement the BADI but obviously did not have the logic to call the BW planning sequence.

To do this I implemented the BAdI UJ_CUSTOM_LOGIC and wrote the code in the EXECUTE method of the implementing class:

Z_EI_PLAN_SEQ Implementing Class.PNG

Logic to call the BW planning sequence

METHOD if_uj_custom_logic~execute.

  DATA: ls_param        TYPE LINE OF ujk_t_script_logic_hashtable,
        lv_plan_seq     TYPE rspls_seqnm,
        lt_bapiret2     TYPE TABLE OF bapiret2,
        lcl_area_handle TYPE REF TO zcl_bwip_shared_area,
        lcl_memory      TYPE REF TO zcl_bwip_shared_memory.

  READ TABLE it_param INTO ls_param WITH TABLE KEY hashkey = 'ZPLAN_SEQ'.
  IF sy-subrc EQ 0.

*   Store the Script Logic parameters in the shared memory area so that
*   the SAP exit of the planning sequence variables can read them later
    TRY.
        CALL METHOD zcl_bwip_shared_area=>attach_for_write
          RECEIVING
            handle = lcl_area_handle.
      CATCH cx_shm_exclusive_lock_active.
      CATCH cx_shm_version_limit_exceeded.
      CATCH cx_shm_change_lock_active.
      CATCH cx_shm_parameter_error.
      CATCH cx_shm_pending_lock_removed.
    ENDTRY.

    TRY.
        CREATE OBJECT lcl_memory AREA HANDLE lcl_area_handle.
        lcl_area_handle->set_root( lcl_memory ).
        lcl_memory->set_t_param( it_param ).
        lcl_area_handle->detach_commit( ).
      CATCH cx_shm_error.
    ENDTRY.

*   Execute the planning sequence named in the ZPLAN_SEQ parameter
    lv_plan_seq = ls_param-hashvalue.

    CALL FUNCTION 'RSPLAPI_PLSEQ_EXECUTE'
      EXPORTING
        i_sequence = lv_plan_seq
*     IMPORTING
*       e_subrc    =
      TABLES
        e_t_return = lt_bapiret2.

  ENDIF.

ENDMETHOD.

 

 

Being the lazy developer that I am, I'm always looking for ways to make life easier for me.

I did not want to keep going to BPC to test the call to the BADI by the Script Logic, so I looked for an easier way to do my testing.

 

I came across another useful post in SCN which explained the use of Tx UJKT.

I then used Tx UJKT to enter and execute the following Script Logic.

 

*START_BADI ZPLAN_SEQ
ZPLAN_SEQ = /CPMB/ZGL01
ZGLPORT = $ZGLPORT$
ZVERSION = $ZVERSION$
*END_BADI

 

In the above Script Logic I call the BADI using the filter 'ZPLAN_SEQ'.

I am also passing 3 parameters through to the BADI. NB: The BADI can accept any number of parameters.

The logic above in the EXECUTE method caters for calling any planning sequence with any number of parameters.

I am using a shared memory object, which has been available since NetWeaver 6.40.
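For completeness, here is a hedged sketch of what the shared-memory root class referenced above might look like. The class and method names (ZCL_BWIP_SHARED_MEMORY, SET_T_PARAM, GET_T_PARAM) follow the blog, but the implementation is an assumption. In practice the class is flagged as shared-memory-enabled in SE24, and the area class ZCL_BWIP_SHARED_AREA is generated from transaction SHMA with this class as its root.

CLASS zcl_bwip_shared_memory DEFINITION
  CREATE PUBLIC
  SHARED MEMORY ENABLED.

  PUBLIC SECTION.
    METHODS:
      set_t_param IMPORTING it_param TYPE ujk_t_script_logic_hashtable,
      get_t_param RETURNING VALUE(rt_param) TYPE ujk_t_script_logic_hashtable.

  PRIVATE SECTION.
    DATA mt_param TYPE ujk_t_script_logic_hashtable.
ENDCLASS.

CLASS zcl_bwip_shared_memory IMPLEMENTATION.
  METHOD set_t_param.
    mt_param = it_param.      " keep a copy of the Script Logic parameter table
  ENDMETHOD.

  METHOD get_t_param.
    rt_param = mt_param.      " hand the parameters back to the reader (SAP exit)
  ENDMETHOD.
ENDCLASS.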

These parameters however must still be read by the planning sequence being called.

When the planning sequence is created these parameters must be created as SAP Exit parameters.

This will then invoke the standard SAP exit for the parameter (EXIT_SAPLRRS0_001):

EXIT_SAPLRRSO_001.png

I then added the following code to read the saved parameters in the SAP exit:

 

  DATA: lt_param        TYPE ujk_t_script_logic_hashtable,
        ls_param        TYPE LINE OF ujk_t_script_logic_hashtable,
        lcl_area_handle TYPE REF TO zcl_bwip_shared_area,
        lcl_memory      TYPE REF TO zcl_bwip_shared_memory.

* Read the parameters that the BAdI stored in shared memory
  TRY.
      CALL METHOD zcl_bwip_shared_area=>attach_for_read
        RECEIVING
          handle = lcl_area_handle.
    CATCH cx_shm_exclusive_lock_active.
    CATCH cx_shm_version_limit_exceeded.
    CATCH cx_shm_change_lock_active.
    CATCH cx_shm_parameter_error.
    CATCH cx_shm_pending_lock_removed.
  ENDTRY.

  TRY.
      lt_param = lcl_area_handle->root->get_t_param( ).
      lcl_area_handle->detach( ).
    CATCH cx_shm_error.
  ENDTRY.

* Fill the variable value range with the parameter matching the variable name
  READ TABLE lt_param INTO ls_param WITH TABLE KEY hashkey = i_vnam.
  IF sy-subrc EQ 0 AND i_step = 1.
    wa_range-sign = 'I'.
    wa_range-opt  = 'EQ'.
    wa_range-low  = ls_param-hashvalue.
    wa_range-high = ls_param-hashvalue.
    APPEND wa_range TO e_t_range.
  ENDIF.

 

This is how I managed to pass parameters from Script Logic to the BAdI and then execute a planning sequence with the parameters entered in the Script Logic.

Various ways to move a file from one folder to another on the application server


Business Requirement

Move a file on the application server (transaction AL11) from one folder to another.

Implementation Logic in BW

This can be achieved in BW in three different ways:

  1. Using the TRANSFER dataset statement
  2. Using external operating system commands
  3. Using function module ‘ARCHIVFILE_SERVER_TO_SERVER’

Logic

 

Case 1: Using Transfer Dataset statement

 

ABAP Code

 

PARAMETERS: ps_dir(50)  TYPE c,
            pa_dir(50)  TYPE c,
            pf_name(50) TYPE c OBLIGATORY LOWER CASE.

DATA: l_oldfile      TYPE localfile,
      l_newfile      TYPE localfile,
      l_newline(240) TYPE c.

* Build the full source and target file names
CONCATENATE ps_dir pf_name INTO l_oldfile.
CONCATENATE pa_dir pf_name INTO l_newfile.

OPEN DATASET l_oldfile FOR INPUT IN BINARY MODE.
IF sy-subrc = 0.

  OPEN DATASET l_newfile FOR OUTPUT IN BINARY MODE.
  IF sy-subrc = 0.

*   Copy the file contents in chunks of 240 characters
    DO.
      READ DATASET l_oldfile INTO l_newline.
      IF sy-subrc EQ 0.
        TRANSFER l_newline TO l_newfile.
      ELSE.
        IF l_newline IS NOT INITIAL.
          TRANSFER l_newline TO l_newfile.
        ENDIF.
        EXIT.
      ENDIF.
    ENDDO.

  ENDIF.

  CLOSE DATASET l_newfile.

ELSE.

  MESSAGE 'Cannot open source file' TYPE 'E'.

ENDIF.

CLOSE DATASET l_oldfile.

* Remove the source file once it has been copied
DELETE DATASET l_oldfile.

 

Advantages of this approach:

                Simple process which transfers data in chunks of 240 characters

Disadvantages

                The file data itself is handled while moving the file

                For large files the program will take more time

 

Case 2: Using External Operating System Command

ABAP Code

PARAMETERS: p_src TYPE sapb-sappfad,
            p_tgt TYPE sapb-sappfad.

DATA: l_para    TYPE btcxpgpar,
      l_stat(1) TYPE c.

CONCATENATE p_src p_tgt INTO l_para SEPARATED BY space.

CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
  EXPORTING
    commandname                   = 'ZMV'
    additional_parameters         = l_para
  IMPORTING
    status                        = l_stat
  EXCEPTIONS
    no_permission                 = 1
    command_not_found             = 2
    parameters_too_long           = 3
    security_risk                 = 4
    wrong_check_call_interface    = 5
    program_start_error           = 6
    program_termination_error     = 7
    x_error                       = 8
    parameter_expected            = 9
    too_many_parameters           = 10
    illegal_command               = 11
    wrong_asynchronous_parameters = 12
    cant_enq_tbtco_entry          = 13
    jobcount_generation_error     = 14
    OTHERS                        = 15.

IF sy-subrc <> 0.
  MESSAGE 'Error' TYPE 'E' DISPLAY LIKE 'I'.
ELSE.
  IF l_stat = 'O'.
    MESSAGE 'File archived successfully' TYPE 'I'.
  ENDIF.
ENDIF.


Advantages:

No file data handling.

Less time is taken since an operating system command is used to move the file.

 

Disadvantages:

An operating system command has to be created in transaction SM49.

 

Case 3: Using Function module ‘ARCHIVFILE_SERVER_TO_SERVER’

ABAP Code:

PARAMETERS: p_src TYPE sapb-sappfad,
            p_tgt TYPE sapb-sappfad.

CALL FUNCTION 'ARCHIVFILE_SERVER_TO_SERVER'
  EXPORTING
    sourcepath       = p_src
    targetpath       = p_tgt
* IMPORTING
*   length           =
  EXCEPTIONS
    error_file       = 1
    no_authorization = 2
    OTHERS           = 3.

IF sy-subrc <> 0.
  WRITE: 'Error in archival', sy-subrc.
ELSE.
  WRITE 'File archived successfully'.
ENDIF.

Advantages:

A standard function module is used for the file transfer.

Disadvantages

The file names passed to the function module must be logical file paths.

How to create a logical path for a file is described in the link below:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60e86545-9940-2d10-9f89-eb5bce1f9228?QuickLink=index&overridelayout=true&47519518718715


Semantic Groups in DTP


Introduction

From SAP BI 7.0 onwards, the DTP offers an option for semantic groups. There are different perceptions of what semantic groups in a DTP are for, so here I summarize specific uses of semantic groups with examples.

 

A1.jpg

Case 1. Calculation on group of records

 

Example: I would like to give each customer a discount based on the total sales value per Customer No, Country and Product Category.

The discount should be assigned to each row, i.e. to each product. I have the following sample records.

 

Product No   Customer No   Country   Product Category   Sale Value (US $)
P001         C001          US        Home               5000
P002         C002          UK        Electronics        3000
P003         C003          Germany   Kids               2000
P004         C004          India     Home               4000
P005         C001          US        Electronics        1000
P006         C003          Germany   Kids               6000
P007         C005          France    Home               5000
P008         C004          India     Electronics        3000
P009         C002          UK        Kids               2000
P010         C001          US        Home               1000

 

Discount criteria based on the sales value per grouping (Customer No + Country + Product Category):

 

Total Sales (US $)    Discount %
>=1000 AND <5000      5
>=5000 AND <10000     10
>=10000               20

 

Data is uploaded from PSA to a standard DSO.

The DTP package size is 5 records.

The end routine calculates the total sales per group, uses it to derive the discount for each customer based on the sales value, and assigns it to each product row; a minimal sketch of such an end routine follows.
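The following is a hedged sketch of what such an end routine could look like; it is not the author's original code. The generated structure/table types _ty_s_TG_1 and _ty_t_TG_1 are standard in BW 7.x end routines, but the field names CUSTOMER, COUNTRY, PRODCAT, SALESVAL, TOTALSALES and DISCOUNT are placeholders for the InfoObjects of this example.

  DATA: lt_copy  TYPE _ty_t_tg_1,             " copy of the result package
        lv_total TYPE p LENGTH 15 DECIMALS 2,
        lv_rate  TYPE p LENGTH 5 DECIMALS 2.

  FIELD-SYMBOLS: <fs_row>   TYPE _ty_s_tg_1,
                 <fs_group> TYPE _ty_s_tg_1.

  lt_copy = RESULT_PACKAGE.

  LOOP AT RESULT_PACKAGE ASSIGNING <fs_row>.
    CLEAR lv_total.

*   Sum the sales of all rows in this package that share the grouping key
    LOOP AT lt_copy ASSIGNING <fs_group>
         WHERE customer = <fs_row>-customer
           AND country  = <fs_row>-country
           AND prodcat  = <fs_row>-prodcat.
      lv_total = lv_total + <fs_group>-salesval.
    ENDLOOP.

*   Discount rate according to the criteria table above
    IF lv_total >= 10000.
      lv_rate = '0.20'.
    ELSEIF lv_total >= 5000.
      lv_rate = '0.10'.
    ELSE.
      lv_rate = '0.05'.
    ENDIF.

    <fs_row>-totalsales = lv_total.
    <fs_row>-discount   = <fs_row>-salesval * lv_rate.
  ENDLOOP.

Note that the summation only sees the rows of the current data package; this is exactly why the semantic group setting matters, as the comparison below shows.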

 

Without Semantic Group Data Upload

 

The total sales value and discount amount when the data is loaded without a semantic group are as below:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sales (US $)   Discount Amount (US $)
P001         C001          US        Home               5000                5000                 500
P002         C002          UK        Electronics        3000                3000                 150
P003         C003          Germany   Kids               2000                2000                 100
P004         C004          India     Home               4000                4000                 200
P005         C001          US        Electronics        1000                1000                 50
P006         C003          Germany   Kids               6000                6000                 600
P007         C005          France    Home               5000                5000                 500
P008         C004          India     Electronics        3000                3000                 150
P009         C002          UK        Kids               2000                2000                 100
P010         C001          US        Home               1000                1000                 50

 

Notice that customer C001 appears with the same country and product category in several rows, yet the total sale is calculated as 5000 and 1000 respectively.

As a result, for product P010 the customer receives only a 5% discount, whereas he or she should actually get 10%.

The same happens for customer C003 and product P003.

This is because the data is uploaded in 2 equal packages of 5 records each: the first 5 records in the 1st package and the remaining 5 records in the 2nd package.

The first package contains the following 5 records:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)
P001         C001          US        Home               5000
P002         C002          UK        Electronics        3000
P003         C003          Germany   Kids               2000
P004         C004          India     Home               4000
P005         C001          US        Electronics        1000

 

Total Sales Value is calculated as below

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sale (US $)
P001         C001          US        Home               5000                5000
P002         C002          UK        Electronics        3000                3000
P003         C003          Germany   Kids               2000                2000
P004         C004          India     Home               4000                4000
P005         C001          US        Electronics        1000                1000

 

The discount values for the first 5 rows are therefore stored in the target DSO as below:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sale (US $)   Discount Amount (US $)
P001         C001          US        Home               5000                5000                500
P002         C002          UK        Electronics        3000                3000                150
P003         C003          Germany   Kids               2000                2000                100
P004         C004          India     Home               4000                4000                200
P005         C001          US        Electronics        1000                1000                50

 

And for the next 5 rows:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sale (US $)   Discount Amount (US $)
P006         C003          Germany   Kids               6000                6000                600
P007         C005          France    Home               5000                5000                500
P008         C004          India     Electronics        3000                3000                150
P009         C002          UK        Kids               2000                2000                100
P010         C001          US        Home               1000                1000                50

 

 

With Semantic Group Data Upload

 

The semantic group key fields are Customer No, Country and Product Category.

The total sales value and discount amount when the data is loaded with a semantic group are as below:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sale (US $)   Discount Amount (US $)
P001         C001          US        Home               5000                6000                500
P002         C002          UK        Electronics        3000                3000                150
P003         C003          Germany   Kids               2000                8000                200
P004         C004          India     Home               4000                4000                200
P005         C001          US        Electronics        1000                1000                50
P006         C003          Germany   Kids               6000                8000                600
P007         C005          France    Home               5000                5000                500
P008         C004          India     Electronics        3000                3000                150
P009         C002          UK        Kids               2000                2000                100
P010         C001          US        Home               1000                6000                100

 

In this case the records are grouped based on the semantic key definition, so even though there are still 2 packages, each package contains complete groups, as below.

Records in the first package, grouped by Customer No, Country and Product Category:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sale (US $)   Discount Amount (US $)
P001         C001          US        Home               5000                6000                500
P002         C002          UK        Electronics        3000                3000                150
P005         C001          US        Electronics        1000                1000                50
P009         C002          UK        Kids               2000                2000                100
P010         C001          US        Home               1000                6000                100

 

 

Records in the second package:

 

Product No   Customer No   Country   Product Category   Sale Value (US $)   Total Sale (US $)   Discount Amount (US $)
P003         C003          Germany   Kids               2000                8000                200
P004         C004          India     Home               4000                4000                200
P006         C003          Germany   Kids               6000                8000                600
P007         C005          France    Home               5000                5000                500
P008         C004          India     Electronics        3000                3000                150

 

Thanks to this semantic grouping, the total sales values are correct and the discount values are calculated as expected.

Please note that, to accommodate the records of each semantic group, the package size is adjusted automatically; it will therefore not be constant throughout the load.

 

Case 2. Error handling

 

Similarly, if error handling is enabled and an error DTP is available, the error handling functionality works as below with respect to the semantic group.

 

Without Semantic Group Data Upload

 

If there is an error in the record with product no P001, it goes to the error stack and the rest of the records are uploaded. In that case the total sales value for product P010 is 1000, so the discount is 5%, i.e. 50 $.

Even if we later correct the first record (P001) in the error stack and upload it through the error DTP, the discount value of record P010 will not change.

 

With Semantic Group Data Upload

 

If there is an error in the record with product no P001, it goes to the error stack together with the record with product no P010, and the rest of the records are uploaded to the target. Because of the semantic definition, even though only one record has an error, both records are transferred to the error stack to keep the semantic group together.

In that case, when product P001 is corrected in the error stack and the error DTP is run, it uploads both records with the correct total sales and discount values.

 

Additional Note

Here I have used only 2 cases for illustration purposes. In real life there can be more use cases depending on specific needs.

 

Please note that the above example is meant to explain the concept of semantic groups; there can also be other ways to achieve this calculation.

 

I hope it helps beginners get a better insight into semantic groups in the DTP.

Long texts in BW 7.4


Finally! We can use up to 250 characters in characteristic values and up to 1,333 characters in texts.

 

You no longer need methods like this (http://scn.sap.com/docs/DOC-11404).

 

See SAP Note 1823174 - BW7.40 Changes and customer-specific programs for details.

How to find the list of users who ran a particular query and the time of execution


We can check the list of users who used a particular query in table RSDDSTAT.

If you have installed the BW statistics standard providers, you can check the data in the 0BWTC_C10 MultiProvider; if not, you can check the data in table RSDDSTAT.

Below are the steps :

Go to SE16.

Provide the table name RSDDSTAT.

 

Table.JPG

 

 

Provide the query name and execute.

 

 

Query.JPG

 

It will display the list of users who used the particular query, together with the timestamp of each execution.

 

sas.JPG
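If you prefer not to go through SE16, a small ABAP report can read the same table. This is only a sketch: the field names OBJNAME, UNAME and STARTTIME are assumptions, so check the structure of RSDDSTAT in SE11 for your release before using it.

REPORT zquery_usage.

PARAMETERS p_query(30) TYPE c LOWER CASE.      " technical query name

TYPES: BEGIN OF ty_stat,
         uname     TYPE c LENGTH 12,           " user who executed the query
         starttime TYPE timestampl,            " execution timestamp
       END OF ty_stat.

DATA: lt_stat TYPE STANDARD TABLE OF ty_stat,
      ls_stat TYPE ty_stat.

* Field names are assumed - verify them against RSDDSTAT in SE11
SELECT uname starttime FROM rsddstat
  INTO CORRESPONDING FIELDS OF TABLE lt_stat
  WHERE objname = p_query.

LOOP AT lt_stat INTO ls_stat.
  WRITE: / ls_stat-uname, ls_stat-starttime.
ENDLOOP.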

 

Below are the steps to activate the BW statistics:

  • To activate statistics go to transaction RSA1 --> Tools --> Settings for BI Statistics
  • Set statistics for queries and for InfoProviders
  • Set statistics for web templates and for workbooks

 

Below are some tables for checking the statistics:

RSDDSTATAGGR       Detail table for aggregate setup
RSDDSTATAGGRDEF    Detail table of navigation for each InfoCube/query
RSDDSTATSTEPTP     Type of steps: BEX, BEX3, BRCS (broadcasting), EXTN (external read), JAVA, MDX
RSDDSTATHANDLTP    Combines handle and step type; helps to identify where the time is being spent
RSDDSTATWHM        Time it takes to do a load
RSDDSTATCOND       InfoCube compression (data on the condensing run of an InfoCube)
RSDDSTATDELE       InfoCube deletions
RSDDSTAT_WRITE     InfoCube writes
RSDDSTATBCACT      Business Content activation statistics
RSDDSTATEXTRACT    Time of last delta load

BW Requests: How to Transport and Import


I thought I would give some basic transport steps: how we transport from BW Dev to BW Quality to BW Prod.

Usually this part is taken care of by the Basis team, but it is good for us to know the basic steps.

Here I explain, in parts, how to transport a full DTP from Dev to Quality and then to Prod.

 

After creating the DTP, the changes may be stored in a local package. We need to collect them in a custom package because objects in local packages are not transportable.

 

 

How to capture a DTP in a TR (transport request):

In BW we can use the Transport Connection to capture changes in a TR, or we can capture the changes in a TR directly.

The DTP is in active status. Double-click the DTP and choose Goto --> Object Directory Entry from the menu, as below.

123`.jpg

 

 

 

Click the change icon, enter ZBW (the custom package we have) and save; the package is changed. Continue. On the next screen you may be asked for a TR: click the create icon (the white one), enter a description for your TR and continue; a TR is generated automatically. Continue, then save and activate the DTP again. Your TR now contains the activated DTP.

 

Go to transaction SE10, click the Display button or press F5, and you will get the screen below.

1231.jpg

 

 

Enter your TR and continue. You will get the screen below. My screen shows the request as released and the target system as BQA. The first time you see this screen, the request will show as modifiable (instead of released).

 

re.jpg

 

 

Now we need to release the transport in BW Dev. Always release the task(s) first (XXXX0958); if there are several tasks below the request, you can release them in any sequence. Only after that do we release the TR itself (XXXX0957).

 

 

How to release:

 

Select your task, e.g. XXXX0958 (the child), and click the transport (bus) icon. Release the task(s) this way and finally release the transport XXXX0957 (the parent). Refresh the screen. All requests that were listed under Modifiable appear under Released after the release, as the screen above shows. You will also see a tick mark against the request once it is released.

 

You can check the transport logs (red mark in the picture above) on the same screen: select your request, click the transport logs icon in the toolbar, and you should see it as successful, like below.

re.jpg

 

Now we have released the request in the BW Dev environment; the next step is to import it into BW Quality. I will continue in part 2.

 

http://scn.sap.com/community/data-warehousing/netweaver-bw/blog/2013/06/28/bw-requests-how-to-transport-and-import-part--2

 

Thanks for reading

RK

BW Requests: How to Transport and Import - Part 2


Continue..

 

http://scn.sap.com/community/data-warehousing/netweaver-bw/blog/2013/06/28/bw-requests-how-to-transport-and-import

 

How to import on the Quality side:

Log on to the BW Quality server, go to transaction STMS, then press F5 or select the transport (bus) icon as shown below.

 

34.jpg

On the next screen you will see the three environments in the landscape.

34.jpg

 

Now we need to import into BW Quality, so double-click BQA. We will get the screen below.

If you use transaction STMS_IMPORT, you come directly to this screen.

34.jpg

 

Since this is an existing system, after a refresh we will not see our request immediately; we need to go to the end of the list.

As shown above, click the last-page icon. It takes you to the end of the list. The TR released from BW Dev is visible at the very bottom.

If a TR was released in BW Dev but not yet imported into Quality, you will also see that TR at the bottom of the list, but with a symbol against it, like the arrow mark in the picture below.

34.jpg

 

 

The TR in the highlighted box has not yet been imported due to some issue.

After the import of a BW TR, our request moves up the list, and the TRs that have not been imported into Quality (but have been released from BW Dev) stay at the bottom with that symbol.

After a refresh we see at the bottom the TRs that have not yet been imported, with the shaded (orange) symbol and the arrow mark.

34.jpg

Import starts here:

34.jpg

 

 

In the screen above we see two transport (truck) icons. We need to choose the second one, the one with the little orange mark.

If we choose the one circled in red (Import All Requests - the dangerous one), it imports all TRs from the source system. We only want to import the latest one, so choose Import Request (Ctrl+F11, the one with the green tick mark).

 

After clicking on Import request we will get below screen:

34.jpg

 

 

On the Date tab, Immediate is selected by default; that is fine.

On the Execution tab my default option was Asynchronous (you can keep the default on your system).

Provide the target client as per your system.

 

On the Options tab you can select the top 5 options as below:

34.jpg

 

 

After choosing the options on the above 3 tabs, click the green tick mark to continue; you will then get a message box like the one below:

34.jpg

 

 

Click Yes and the import starts. You will get the STMS_IMPORT screen with a truck symbol against our TR, which means the import of our TR is in process.

34.jpg

 

Keep refreshing; after the import finishes, the truck symbol disappears and you will see a yellow symbol there and a green symbol against our TR.

 

 

34.jpg

 

After the import succeeds, we can see the transport logs as shown in part 1. In the same way we can also import into Production, provided we are authorized to access transaction STMS.

 

Thanks for reading..........

RK

How to delete a request from a database table in a debug mode


Introduction

 

Scenario 1 :

 

At times we get stuck while activating requests in a DSO due to some incorrect request deletions in the DSO. These requests might still exist in database tables like RSICCONT, RSODSACTREQ, etc.

 

Scenario 2 :

 

Initialization requests may have been deleted from the InfoPackage --> Scheduler menu, but these requests may still exist in tables RSSDLINIT, RSSELDONE, RSREQDONE, etc.

 

Unless we delete these bad requests from the above tables, we will not be able to move forward. The following procedure helps us delete the requests from the back end. I strongly advise you to follow this procedure in extreme situations only.

 

Step 1: Go to the DB table in SE16.

Debug1.JPG

Step 2: Enter the request number and press Execute.

 

Debug2.JPG

Step 3: Sometimes we may not have the authorization to delete table entries; the delete options will then be disabled, as below.

Debug3.JPG

Step 4: Select the entry, click the spectacles (display) button, enter /H in the command field and press Enter twice.

Debug4.JPG

Step 5: You will get into the debug screen. Observe the variable CODE = SHOW; the right-hand side of the screen will have no entries.

Debug5.JPG

Step 6: Double-click CODE as below; you will then see an entry on the right-hand side.

Debug6.JPG

Step 7: Double-click the pencil button and type DELE.

Debug7.JPG

Step 8: Click Save at the top of the debug screen and press F8; you will then get a delete confirmation like below. Press it to delete the entry.

Debug7.JPG

 

Conclusion: You can make use of this procedure to delete any DB table entries. You must have debug access to do this. Please be careful while using this procedure and take it up in extreme situations only.


List of important SAP Notes in BI 7.0


Purpose

 

We face various issues while working on BI 7.0 systems. I have compiled some of the important SAP Notes to help us when required.

 

SAP Note Number   Description
0001629835        BEx Analyzer: Workbooks cannot be opened in Excel 2010
000156494         P26: DTP: Full deletion causes dump in RSSM_UPDATE_RSBKREQUEST
0001524315        BW LISTCUBE terminates on write-optimized DSO
0001469638        DYNPRO_NOT_FOUND when running ST13 -> BW-TOOLS or /SSA/BWT
0001379839        Runtime error DBIF_RSQL_SQL_ERROR exception
0001119924        Runtime error SYSTEM_NO_ROLL during load from data stream
0001024554        Improving performance in queries in SAPLRSEC_CHECKS
0000930712        CX_SY_OPEN_SQL_DB exception when you execute DTPs
0000856148        0FI_A*_4: Long runtimes with delta extraction
0000832712        BW: Migration of Web items from 3.x to 7.0
0000554359        RSAR204: Error when calculating the node level in hierarchy
0000401242        Problems with InfoCube or aggregate indexes
0001410878        Maintenance for BW 3.5 front-end add-ons
0000919196        Dialog box blocker and Java/ABAP BEx Web applications
0000147104        Error 4 when starting the extraction program


List of important SAP Notes in BI 7.0 - Part 2


Purpose

 

This is a continuation of my other blog about important SAP Notes in BI 7.0.

 

SAP Note Number   Description
1392715           DSO req. activation: collective perf. problem note
1331403           SIDs, Numberranges and BW Infoobjects
1162665           Changerun with very big MD-tables
1136163           Query settings in RSRT -> Properties
1106067           Low performance when opening BEx Analyzer on Windows Server
1101143           Collective note: BEx Analyzer performance
1085218           NetWeaver 7.0/7.x BI Frontend SP/Patch Delivery Schedule
1083175           IP: Guideline to analyze a performance problem
1061240           Slow web browser due to JavaScript virus scan
1056259           Collective Note: BW Planning Performance and Memory
1018798           Reading high data volumes from BIA
968283            Processing HTTP requests in parallel in the browser
914677            Long runtime in cache for EXPORT to XSTRING
899572            Trace tool: Analyzing BEx, OLAP and planning
892513            Consulting: Performance: Loading data, no of pkg
860215            Performance problems in transfer rules
857998            Number range buffering for DIM-IDs and SIDs
803958            Debuffering BW master data tables
550784            Changing the buffer of InfoObjects tables
192658            Setting parameters for BW systems

Basics of Cube Aggregates and Data Rollup


What are Cube Aggregates?

  Definition

An aggregate is a materialized, summarized and condensed view of the data in an Info Cube. An aggregate maintains the dataset of an Info Cube redundantly and persistently.

  • Summarized and Condensed view refers to the condensing of the fact table of an Info cube to an aggregate table.
  • An aggregate table no longer contains certain characteristics of the Info cube and has been condensed across attributes.

 

When We Create Aggregate on Cube?

Basic purpose of using aggregates is to make data extraction faster.

When we access data frequently for reporting and the data volume is huge, retrieval takes more time. If a query is used frequently for reporting and we want to enhance its performance, we create aggregates on the Info Cube.

  • Aggregation makes data condensed and summarized so you can access the data of an Info Cube quickly when reporting.
  • New data is loaded into an aggregate at a regular interval (a defined time) using logical data packages (requests). After this roll-up, the new data is available for reporting.
  • Aggregates are used when we often use navigational attributes in queries or we want aggregation up to specific hierarchy levels for characteristic hierarchies. Both time-dependent attributes and time-dependent hierarchies can be used in aggregates.
  • Note:
  1. To find queries that are used frequently and take more time, check table RSDDSTAT_OLAP (transaction SE11).
  2. You can use table RSDCUBE to determine the Info Cube assigned to an aggregate using the aggregate ID (6 digits).

Prerequisites

  • The Info Cube for which we are creating the aggregate must be in active state, and there should not be any other aggregate with the same set of attributes for that Info Cube. Every aggregate must be unique.
  • If you have created aggregates for an Info Cube and filled them with data, the OLAP processor automatically accesses these aggregates. When navigating, the results are consistent; the aggregate is transparent to the end user.
  • If you want the system to propose aggregates, you must create at least one query for the selected Info Cube. The necessary aggregates can then be proposed when you start the queries and navigate in them.

Steps to create Aggregate

  • Go to transaction RSA1, select the Info Cube for which you want to create the aggregate and choose the Maintain Aggregates option.

1.jpg

  • The first time we create an aggregate for an Info Cube, the system asks for the type of aggregate.

2.jpg

a. Generate proposals

  • The system proposes suitable aggregates. The Specify Statistics Data Evaluation dialog box appears.
  • Enter the runtime, from date and to date details and choose Next.
  • This brings you to the Maintain Aggregates screen. The system displays the proposed aggregates in the right-hand area of the Aggregates screen.

b. Create yourself

  • This will bring you to the Maintain Aggregate screen.

3.jpg

    • Note: Aggregates are always built on characteristics, not on key figures.
    • Drag all the characteristics you want to include in the aggregate to the right-hand window pane. You can also add characteristics one by one. The following screen will appear.

    4.jpg

    5.jpg

    6.jpg

    • If you select Later for the activation of the aggregate, the following screen appears, where you can specify date and time according to your requirement.

    7.jpg

    • After activation the aggregate shows details such as the records available for aggregation, the summarized record count, the last roll-up and when it was last used (in a query).

    8.jpg

    This screen also gives you the following information:

    • The Hierarchy and Hierarchy Level fields are used for aggregates on hierarchies.
    • The Valuation column shows '-' and '+' signs:
    1. The larger the number of minus signs, the worse the evaluation of the aggregate; "-----" means the aggregate can probably be deleted.
    2. The larger the number of plus signs, the better the evaluation of the aggregate; "+++++" means the aggregate could make a lot of sense.
    • Records tells you the number of records in the filled aggregate (the size of the aggregate).
    • Records Summarized (mean value) tells you how many records are read from the source in order to create one record in the aggregate. This shows the quality of the aggregate: a large value means better compression (better quality). For example, if 100,000 Info Cube records condense into 1,000 aggregate records, the mean value is 100. If it is 1, the aggregate is a copy of the Info Cube and should be deleted.
    • Usage shows how often the aggregate has been used for reporting in queries.
    • Last Used shows when the aggregate was last used for reporting. If an aggregate has not been used for a long time, it should be deactivated or deleted.

    In order to increase the load performance you can follow the guidelines below:


    1. Delete indexes before loading. This will accelerate the loading process.
    2. Consider increasing the data packet size.
    3. Uncheck or remove the BEx reporting checkbox if the DSO is not used for reporting.
    4. If you are using ABAP code in the routines, optimize the code. This will increase the load performance.
    5. Uncheck the SID generation checkbox.
    6. Write-optimized DSOs are recommended for large sets of data records, since there is no SID generation for write-optimized DSOs. This improves performance during data load.


    Steps before using an Aggregate

    • To use an aggregate for an Info Cube when executing a query, we must first activate it and then fill it with data.
    • To use an existing aggregate, select the aggregate that you want to activate and choose Activate and Fill. The system creates an active version of the aggregate.
    • Once the aggregate is active, you must trigger the action to fill the aggregate with data.
    • The active aggregate that is filled with data can be used for reporting. If the aggregate contains data that is to be evaluated by a query, the query data automatically comes from the aggregate.
    • When we create a new aggregate and activate it, the initial filling of the aggregate tables is done automatically.

     

    Rolling Up Data into an Aggregate

    a. ROLL UP

      • If new data packages or requests are loaded into the Info Cube, they are not immediately available for Reporting via an aggregate. To provide the aggregate with the new data from the Info Cube, we need to load the data into the aggregate tables at a time which we can set. This process is known as ROLL UP.

       

      b. Steps of rolling up new requests

        • In the Info Cube maintenance you can set how the data packages should be rolled up into the aggregate for each Info Cube.
        • In the context menu of the required Info Cube select Manage. The Info Provider Administration window appears. In the Manage Data Targets screen select tab Rollup.

        9.jpg

          • Here from selection button you can set date and time of rollup.

          10.jpg

            • You can set an event after aggregation and also create a process to run the job periodically. A selective request can be aggregated by providing the request ID.
            • Requests can be rolled up based on a number of days.

            11.jpg

              • After roll-up you can see the check mark in the Manage tab. Selecting a request for roll-up also rolls up all previous requests loaded before it, but not newer ones.

              12.jpg

              Levels of Aggregation

              • Aggregation level indicates the degree of detail to which the data of the underlying Info cube is compressed and must be assigned to each component of an aggregate (characteristics, navigation attributes, hierarchies).

              Aggregation level can be

                        By default, the system aggregates according to the values of the selected objects:

              • '*' All characteristic values
              • 'H' Hierarchy level
              • 'F' Fixed value

               

              Thank you for reading this blog. Please add your comments.


              Reference(s)

              1. http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
              2. http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6791e07211d2acb80000e829fbfe/content.htm
              3. http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/906de68a-0a28-2c10-398b-ef0e11ef2022?QuickLink=index&overridelayout=true

              How to clear the outbound queue and delta queue


              Applies to:

              SAP BW 3.x & SAP BI Net Weaver 2004s.

               

              Author Bio

              Gopinath Ramalingam is working as an SAP consultant with Cognizant Technology Solutions.

              He has experience of working on various BW/BI/BO implementation and support projects.

               

              Scenario

              When we apply patches, or in case of an SAP server migration, the Basis team brings both the ECC and the BW system down. Before that point all the delta records must be moved to the BW system; only then can the Basis team apply the patches or carry out the server migration.

               

              Before clearing the queues, users must be locked in the SAP ECC and BW systems and the scheduled jobs must be suspended by the Basis team. We also need to stop postings from the client; otherwise delta records will continue to be created in the system.

               

              Steps to clear Outbound Queue

               

              1. Check transaction LBWQ and clear the outbound queue using LBWE.

                   1.jpg

              2. Go to LBWE transaction code and click the Job Control.          

                  2a.jpg

              Then click Start date.

                  3.jpg

              Set the Print param

                    4.jpg

                       5.jpg

              Select ‘Immediate’.

                      6.jpg

              Click Schedule job

                      7.jpg

              Click Job overview

                        8.jpg

              Then check the LBWQ

                          9.jpg

              Now, the queue MCEX03 has been cleared. Likewise move the queue MCEX11, MCEX12 and MCEX13 from LBWQ to RSA7.

               

              Once the queue clearing activity is completed for all queues, we can check in transaction LBWQ that there are no entries left.

                           10.jpg

              Steps to clear Delta Queue:

               

              Check the RSA7 and move the data into BW System using Info Package.

               

                             11.jpg

              Go to Info package and trigger the delta.

               

                            12.jpg

              Now, the 2LIS_02_SCL becomes zero by default.

                             13.jpg

              Likewise proceed with all the extractors. In the end, all the extractors must show zero (0) entries.

                        

                              14.jpg

              To confirm the delta queue, check in transaction SMQ1 whether any queue name starts with MCEX** or has destination BWCLNT*** (i.e. your BW system name).

               

                                  15.jpg

              If any entries are available then trigger the equivalent Info package in the BW System.

               

              Finally, our SMQ1 should not have any entries like the MCEX** and BWCLNT** queues (see the screenshot below).

               

              The two entries below are invalid postings or error records in the system and can be ignored.

                        16.jpg                              

              Select the queue and display the detail of the record.

               

                           17.jpg

              SAP BW 7.3 Upgrade Issues and Solutions

              This blog post is way overdue, apologies for that.

              We started our BW 7.3 upgrade in September 2012 and went live in January 2013. The actual effort was just under three months, utilising one full-time and one part-time consultant.

              We upgraded from BW 7.01 SP 06 to BW 7.30 SP 07.

              The following table contains all issues experienced during the upgrade, their root causes and the solutions.
              IssueDetails / Root CauseChange / Solution
              0FISCPER text display as July 2012 as opposed to July 2011Entries for table RSADMINS changed with the upgrade. This table controls the text values displayed for the time dimension and implemented read class.
              Update entries in table RSADMINS.
              Program ZBW_RSADMINS_UPDATE was custom built for this change.
Text variables do not display the text (label) for 0CALDAY
Refer to SAP Article 1693785.
In BW 7.30 the InfoObject 0DATE and other time characteristics do not have a text table maintained in RSD1. Therefore, if you try to use a variable with replacement path "replaced by label", you will see that the texts are not replaced. However, in the lower releases 7.00 and 7.01, internally set text flags exist for 0DATE and hence the texts are replaced. This behaviour has been changed in 7.30.
• RSA1 -> InfoObjects -> Search for 0CALMONTH -> Change -> In the 'Master Data/texts' tab, check 'With texts' -> Activate -> Continue and accept all warnings.
• Perform the above steps for each of the following time characteristics:
  • 0CALMONTH
  • 0CALMONTH2
  • 0CALQUARTER
  • 0FISCYEAR
FM BAPI_ISREQUEST_GETSTATUS returns different results.
This function module is returning a blank TECHSTATUS.
• Amend the handling of the field TECHSTATUS as returned from the FM, i.e. replace the following code:
WHEN 'Y'. " Y = Yellow (request still processing)
with this new code:
WHEN 'Y' OR ''. " Y = Yellow (request still processing)
• Activate the new version of the program. A sketch of the surrounding status handling follows below.
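For reference, here is a minimal ABAP sketch of the amended status handling. It is an illustration only: the parameter names REQUESTID and TECHSTATUS and the surrounding variables are assumptions based on the field mentioned above, not a copy of the affected program.

DATA: lv_requestid  TYPE c LENGTH 30,   " assumption: holds the request ID
      lv_techstatus TYPE c LENGTH 1.    " assumption: one-character technical status

CALL FUNCTION 'BAPI_ISREQUEST_GETSTATUS'
  EXPORTING
    requestid  = lv_requestid           " assumed parameter name
  IMPORTING
    techstatus = lv_techstatus.         " assumed parameter name

CASE lv_techstatus.
  WHEN 'G'.         " G = Green (request processed successfully)
    " continue processing
  WHEN 'Y' OR ''.   " Y = Yellow (request still processing);
                    " blank is handled like Yellow, since BW 7.3 can return an empty TECHSTATUS
    " wait and check the status again
  WHEN 'R'.         " R = Red (request failed)
    " raise an error
ENDCASE.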
FM RSKC_CHAVL_OF_IOBJ_CHECK returns different results.
FM RSKC_CHAVL_OF_IOBJ_CHECK changed with the upgrade.
Before the upgrade, the FM returned RC=00 when there is no InfoObject template in a DSS. After the upgrade the FM returns RC=04.
Remove the call to RSKC_CHAVL_OF_IOBJ_CHECK completely and replace the FM with corrected code to perform valid character checks.
Aggregate roll-up step failed
Roll-up steps where aggregates were deactivated, and the process chain roll-up variant did not have the flag "End process successfully if no aggregate exists" set, failed after the upgrade.
The behaviour in BW 7.3 seems to be different for aggregates that have been deactivated, i.e.:
BW 7.1 - if the flag was not set in the roll-up and the aggregate was deactivated, the chain would be successful.
BW 7.3 - if the flag was not set in the roll-up and the aggregate was deactivated, the chain fails.
Adjust the variants in the affected base chains by checking the flag "End process successfully if no aggregate exists".
Key figure in BEx queries returns zeroes.
Caused by a program error.
The key figure is derived from a calculated key figure which uses the NODIM function.
Implement SAP Note 1696274: A calculated key figure outputs the value 0.
KFs display "0 ERR" or "*" inconsistently
Caused by a program error.
Implement notes:
1708084 - Mistaken '0 ERROR' cells for key figure with unit/currency
1698057 - * for currency-dependent and unit-dependent key figures
ABAP programming error BIT_OFFSET_NOT_POSITIVE
Caused by a program error.
Implement SAP Note 1722725 - Input-ready query terminates with BIT_OFFSET_NOT_POSITIVE
KFs return blank values
Caused by a program error.
Implement SAP Note 1736862.
Text variable returns technical name as opposed to characteristic value
Refer to SAP Article 1693785.
In BW 7.30 the InfoObject 0DATE and other time characteristics do not have a text table maintained in RSD1. Therefore, if you try to use a variable with replacement path "replaced by label", you will see that the texts are not replaced. However, in the lower releases 7.00 and 7.01, internally set text flags exist for 0DATE and hence the texts are replaced. This behaviour has been changed in 7.30.
Apply the following change to the problematic variables:
Change the replacement path from "Label" to "Characteristic Value".
Queries on MultiProviders without 0CALDAY, where the underlying cube(s) are non-cumulative cubes and contain 0CALDAY, generate errors
The participating InfoProvider is a stock InfoCube, meaning that it contains at least one stock key figure. There must therefore be a time characteristic Calendar day [0CALDAY] (NCUMTIM) in the MultiProvider, and Calendar day [0CALDAY] can only be assigned to itself. For InfoProvider JSD_B_012, Calendar day [0CALDAY] may not be assigned to the characteristic ''.
Change the MultiProvider by including 0CALDAY in the time dimension.
Detail analysis using t/c ST03 fails
Program error.
Apply manual steps as per SAP Note 1608989.
APD filter from a non-cumulative cube fails
Program error:
"The argument '00"' cannot be interpreted as a number"
"An exception with the type CX_SY_CONVERSION_NO_NUMBER occurred"
Implement SAP Note 1674845.
Symptom
When reading from a non-cumulative InfoProvider using the function RSDRI_INFOPROV_READ, a termination occurs in the method CL_RSDRS_ORACLE_SQL_STMT->BUILD_FORMULA.
Other terms
RSDRI_INFOPROV_READ NCUM, non-cumulative InfoCube, non-cumulative IC, CL_RSDRS_ORACLE_SQL_STMT BUILD_FORMULA
Queries with exception aggregation after variable replacement
You are trying to replace a variable from a hierarchy attribute or the text of characteristic [0FISCPER] Fiscal year/period, or from characteristic [0FISCPER] Fiscal year/period. This replacement should be made after aggregation by [0FISCPER] Fiscal year/period (see Note 1385580). The variable is used in a context, though, that forces replacement before aggregation by [0FISCPER] Fiscal year/period. An exception aggregation for [0FISCPER] Fiscal year/period is specified, for example, on formula Working Days FY2007 or a higher-level formula, or a second variable is used there which should also be replaced from characteristic [0FISCPER] Fiscal year/period but before aggregation.

System Response
The system cannot resolve this conflict. The query cannot be generated.

Procedure
Change the definition of the variable or query by breaking up formula Working Days FY2007 into corresponding subformulas.

For example: F = B * (Va - Vb) to F = Fa - Fb, or G = B * Va * Vb to G = Va * Gb. Here Fa = B * Va is calculated after aggregation, while Fb = B * Vb or Gb = B * Vb is calculated before aggregation.
Change the exception aggregation from "Summation" to "Use Standard Aggregation".
ABAP dump when trying to run an SQL query in t/c ST04
Program error:
OBJECTS_OBJREF_NOT_ASSIGNED
ABAP Program CL_ORA_SQL_EXECUTOR===========CP
Application Component BC-CCM-MON-ORA
Implement SAP Note 1709951.
Symptom
This SAP Note is valid for the DBA Cockpit on Oracle. A dump occurs when you execute any SQL statement using the SQL Command Editor.
Other terms
CL_ORA_SQL_EXECUTOR st04_sqlc_n CL_ORA_ACTION_SQL_EDITOR
Z* programs deleted after upgrade
Programs were assigned to $TMP and not to a transport package.
Such programs can be identified by listing all programs in table TRDIR which are not in table TADIR, i.e. if a program is of type "PROG" (excluding classes and function modules) and it is in table TRDIR but not in table TADIR, it might be deleted when performing the upgrade. A selection sketch follows below.
Prior to the upgrade, add the programs to a package and set the required program attributes, i.e. Executable, Customer Production Program and Business Intelligence Program.
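As an illustration, the check described above can be scripted as a small ABAP report. This is a sketch only: the report and variable names are hypothetical, and it assumes restricting to executable Z programs (TRDIR-SUBC = '1') and repository entries of type R3TR PROG in TADIR.

REPORT zbw_find_local_progs.            " hypothetical report name

DATA: lt_progs TYPE TABLE OF trdir-name,
      lv_prog  TYPE trdir-name.

* Executable Z programs present in TRDIR but without a repository entry in TADIR
SELECT name FROM trdir
  INTO TABLE lt_progs
  WHERE name LIKE 'Z%'
    AND subc = '1'
    AND name NOT IN ( SELECT obj_name FROM tadir
                        WHERE pgmid  = 'R3TR'
                          AND object = 'PROG' ).

LOOP AT lt_progs INTO lv_prog.
  WRITE: / lv_prog.                     " candidate to be moved to a transport package
ENDLOOP.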
Warning message when executing InfoPackage
"File ending does not match the current adapter CSVFLCONV; ending CSV expected"
Implement SAP Note 1687349.
The function module call to RSNDI_SHIE_STRUCTURE_GET3 fails
The issue has two root causes. RCA 1 below is the primary issue; it was a "bug" to begin with. RCA 2 only highlights the bug.
RCA 1 - The object was typed incorrectly in a custom BW Z-program. It should have been of type RSNDI_S_HIEDIR2 and not RSNDI_S_HIEDIR, to align with the FM interface.
RCA 2 - SAP changed the structures RSNDI_S_HIEDIR2 and RSNDI_S_HIEDIR.
If programs use the FM and variable definitions are typed with RSNDI_S_HIEDIR as opposed to RSNDI_S_HIEDIR2, change all calling programs to use the latter, i.e. RSNDI_S_HIEDIR2 (see the declaration sketch below).
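A minimal sketch of the correction in the calling program; the variable name is illustrative, only the structure names come from the root cause above.

* Before the fix - no longer matches the FM interface:
* DATA: ls_hiedir TYPE rsndi_s_hiedir.

* After the fix - type the work area with the structure the FM interface expects:
DATA: ls_hiedir TYPE rsndi_s_hiedir2.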
Calculate result as... "Summation" on a CKF does not return a summated result.
Standard functionality as per Note 1151957.
It only occurs if a hierarchy InfoObject is in the rows, e.g. 0PLANT. If the hierarchy is turned off, the error is resolved; however, this does not satisfy the user requirements.
Changing the aggregation at CKF level to "Before Aggregation" resolves the problem.
Apply the following change: open the query, drill to the CKFs and uncheck "Calculation After Aggregation".
Generating reporting authorizations does not work
Program error caused by OSS Note 1634458.
Implement OSS Note 1714370.
Error when activating data in DSO: ORA-14400
Program error. Activation of a DSO fails.
Implement OSS Note 1807028.
Note: RSRV -> All Elementary Tests -> "PSA Tests" will report the error and also "repair" it. However, when the activation is performed again, the error will re-appear. RSRV is therefore not a solution for the error.
Metadata repository service not active
In RSA1, select Metadata Repository. The following error is displayed:
URL http://sndbid10.onesteel.com:8020/SAP/BC/WEBDYNPRO/SAP/RSO_METADATA_REPOSITORY call was terminated because the corresponding service is not available.
Error "Service cannot be reached"
• T/C SICF
• For Service Name, enter "RSO_METADATA_REPOSITORY"
• F8, then select the service (child node in the displayed hierarchy)
• Select Service/Host from the main menu
• Select Activate
• Click Yes when prompted to confirm
Data loaded is not visible for reporting in InfoCubes
Corrected/changed functionality in one of the Support Packages included in BW 7.3 SP 07.
As per SAP's response to a customer message raised for this issue: "This is the intended behaviour for 730 system"
• Change the "Set Quality Status to OK" flag for each InfoCube where this issue will occur. Identify the list of cubes as follows (a selection sketch follows after this list):
  • T/C SE16
  • Table name: RSDCUBE
  • Filters: OBJVERS = 'A', CUBETYPE = 'B', AUTOQUALOKFL = initial value
• For every cube in the list, do the following:
  • T/C RSA1 and select "InfoProvider"
  • Search for the cube and select "Manage"
  • From the main menu, select "Environment" and "Automatic Request Processing"
  • Check "Set Quality Status to OK (Confirm Quality of Data)" and select "Save"
  • When prompted to write a transport request, select "No"
Once all the changes are applied, generate the SE16 list again and ensure no cubes are returned.
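Instead of filtering manually in SE16, the list of affected cubes can also be produced with a small selection. This is a sketch under the assumption that INFOCUBE is the key field of RSDCUBE holding the cube's technical name; the filter values are the ones quoted above.

DATA: lt_cubes TYPE TABLE OF rsdcube-infocube,
      lv_cube  TYPE rsdcube-infocube.

* Basic InfoCubes (CUBETYPE 'B'), active version, without the automatic
* "Set Quality Status to OK" flag
SELECT infocube FROM rsdcube
  INTO TABLE lt_cubes
  WHERE objvers      = 'A'
    AND cubetype     = 'B'
    AND autoqualokfl = space.

LOOP AT lt_cubes INTO lv_cube.
  WRITE: / lv_cube.
ENDLOOP.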
BEx Exit variable project components deleted
In T/C CMOD, the custom "component" does not exist after the upgrade. The error "no component exists" is displayed.
The custom ABAP code is not lost, however.
• T/C CMOD: provide the component name and select Deactivate on the toolbar
• Check Enhancements and select Change
• Select the current enhancements, select Delete Row on the toolbar and select Save
• Add the enhancement (RSR00001) and select Components
• When prompted, select Save
• In the list of Components, select Activate
Note: Enhancement RSR00003 is not required; only enhancement RSR00001 and the related component need to be created.
Inactive local chains cause process chains to fail
RCA unknown.
• Local chains embedded in other chains fail if the local chain is inactive.
• Only a very small percentage of chains are "inactive" after the upgrade. The chains that are inactive are not consistent between clients, i.e. not the same chains in development and QA.
• Identify the process chains to be re-activated and re-activate each chain in each environment (the re-activated chains are therefore not transported).
• Identify the chains as follows (a selection sketch follows after this list): T/C SE16 -> table RSPCCHAINATTR
  • OBJVERS = 'A'
  • ACTIVFL != 'X'
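The same SE16 selection expressed as a small ABAP sketch; OBJVERS and ACTIVFL are the filters quoted above, while the use of CHAIN_ID as the field holding the chain's technical name is an assumption.

DATA: lt_chains TYPE TABLE OF rspcchainattr,
      ls_chain  TYPE rspcchainattr.

* Process chains whose active version is not flagged as active
SELECT * FROM rspcchainattr
  INTO TABLE lt_chains
  WHERE objvers = 'A'
    AND activfl <> 'X'.

LOOP AT lt_chains INTO ls_chain.
  WRITE: / ls_chain-chain_id.           " assumption: CHAIN_ID is the technical name
ENDLOOP.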
No authorization to maintain routines (start routines and transformation routines)
Changed after the upgrade.
The following is required for authorization object S_DEVELOP:
Activity: 02
Package: BWROUT_TRFN
Object name: GP123
Object type: PROG
Authorization group: $BWROUT_GROUP
Locks on temporary table RSDD_TMPNM_ADM not deleted
Caused by a program error.
Implement SAP Note 1669796 - RSDD_TMPNM_ADM: Lock conflict that cannot be removed in BW 7.3
BW Stats loads fail with error 'Characteristic value '20120321143760' of characteristic 0TCTTIMSTMP is not TIMES-converted'
Caused by a change in BW 7.3.
• Apply manual changes to the update rules as per SAP Note 1713932 (a routine sketch follows after this list):
  • Go to RSA1, navigate to InfoProvider 0TCT_C22 and double-click the transfer rules for DataSource 0TCT_DS22
  • Change to edit mode
  • Choose the field 0TCTTIMSTMP in the transfer rules, click the assignment button, select the radio button 'Routine' and click 'Create'
  • Enter the transfer routine name as 'Long timestamp -> short timestamp'
  • Select the field '0TCTSTRTTST' and confirm
  • Under FORM COMPUTE_TCTTIMSTMP, add the statement RESULT = TRUNC( TRAN_STRUCTURE-TSTMP_START ).
  • Save the routine
Do this for each BW Stats update rule activated in the system that fails after the upgrade.
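For clarity, the manually added routine body looks roughly as follows inside the generated FORM; the FORM interface itself is generated by the transfer rule editor, so only the single assignment is the manual change.

* Inside the generated FORM COMPUTE_TCTTIMSTMP of the transfer routine
* 'Long timestamp -> short timestamp':

  " Truncate the fractional seconds of the long timestamp so the value
  " becomes a TIMES-convertible short timestamp
  RESULT = TRUNC( TRAN_STRUCTURE-TSTMP_START ).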
Authorization error when running a WebI report: "No RFC authorization for function module BAPI_IOBJ_GETDETAIL"
Caused by a change in BW 7.3.
Add the following RFC authorizations to user/developer security roles:
• Execute access (Activity 16) for RFC = RSBAPI_IOBJ (Type = FUGR)
• Execute access (Activity 16) for RFC = BAPI_IOBJ_GETDETAIL (Type = FUNC)
• Execute access (Activity 16) for RFC = BDL5 (Type = FUGR)
• Execute access (Activity 16) for RFC = BDL_GET_CENTRAL_TIMESTAMP (Type = FUNC)
• Execute access (Activity 16) for RFC = BAPI_IOBJ_GETDETAIL (Type = FUGR)
0GLACCEXT hierarchy data load fails with the following error:
'00000610098 A' of characteristic 0GLACCEXT is not ACCEX-converted
Caused by a change in BW 7.3.
• Change the transfer rule from IDoc to PSA
• Delete the modified version of the hierarchy
• Reload the data
The roll-up for an InfoCube has terminated.
Caused by a program error in SAP Note 1663614 - P29:BATCH:RSBATCH_CHECK_PROCESS: Yellow too long; Hold Procs, which is part of BW 7.3 SP 07.
Implement SAP Note 1708027 - P29:BATCH: Rolling up aggregates terminates with RSDD353.
Error during assignment of Request ODSR_4SY8GZ4IU9G1DU6672D4RC2TU to Partition
The value of field 'PARTNO' in PSA definition table 'RSTSODS' for the PSA/changelog table does not match the value of the highest partition of the corresponding PSA/changelog table.
Implement SAP Note 1762200.
Execute RS_PSA_PART_HIGH_VALUE_CHECK in repair mode.

               

When performing an F4 (lookup) on Plant in BEx Analyser, the following error is displayed:
• Could not generate the data object; the type does not exist
• An exception with the type CX_SY_CREATE_DATA_ERROR occurred
• Brain 670
Program error in SAPMSSY1, method UNCAUGHT_EXCEPTION.
Implement OSS Note 1679791.

Metadata repository graphical display not available
Program error.
• A complete solution is outstanding.
• Implement OSS Note 1706675???
This note was implemented but did not resolve the issue, and it also produced new errors. A more extensive analysis, correction and test are needed.

              Start Routine Fails with CX_SY_DYN_CALL_ILLEGAL_TYPE


              Dear Team,

               

              I would like to share an issue which I faced in my BW development.

               

              Issue Detail:

When you run the DTP from PSA to DSO, the start routine ends with the message 'Error in start routine sub routine',
and the long text says 'Call of the method START_ROUTINE of the class LCL_TRANSFORM failed; wrong type for parameter SOURCE_PACKAGE'.

But when you actually debug the DTP, you will find that control does not even go inside the method me->start_routine.

1.PNG

              2.PNG

               

               

              Analysis:

Since control never reaches the start_routine during debugging, we can be fairly sure that there is no issue with the start_routine code we have written.

The source_package and target_package in the debugging screen seem to be identical to those of our transformation, which adds to the confusion.

               

               

              Solution:

A simple fix solves the issue: go to RSA1 and re-activate the transformation from PSA to DSO.

This regenerates the transformation ID and the standard source_package and target_package structures.

               

               

The solution is simple, yet useful. I couldn't find it on SDN, hence I thought I would post it.
