SCN Blog List - SAP Business Warehouse

All about Field-symbols..


Hi

 

This blog describes the advantages of using field-symbols.

 

Internal table processing is an essential part of any ABAP program. Generally, we use an explicit work area to process an internal table, for example when appending or modifying records. We can reduce the runtime and improve the performance of the program by using field-symbols.

The program below fetches the fields KUNNR and NAME1 from table KNA1.

It modifies the NAME1 field by adding "Mr." in front of each name.

 

[Screenshot: wa_prog.jpg]
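Since the original program is only available as a screenshot, here is a minimal sketch of what such a work-area based program could look like (the table and field names KNA1, KUNNR and NAME1 come from the text above; the type and variable names are my own assumptions):

* Minimal sketch (assumed reconstruction): processing with an explicit work area.
TYPES: BEGIN OF ty_kna1,
         kunnr TYPE kna1-kunnr,
         name1 TYPE kna1-name1,
       END OF ty_kna1.

DATA: it_kna1 TYPE STANDARD TABLE OF ty_kna1,
      wa_kna1 TYPE ty_kna1.

SELECT kunnr name1 FROM kna1 INTO TABLE it_kna1.

LOOP AT it_kna1 INTO wa_kna1.
  CONCATENATE 'Mr.' wa_kna1-name1 INTO wa_kna1-name1 SEPARATED BY space.
  " Write the changed work area back to the internal table
  MODIFY it_kna1 FROM wa_kna1.
ENDLOOP.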

 

Whenever we execute the LOOP AT statement, each record from the internal table is moved to the work area, the required calculation is performed on it, and the modified data is then written back to the internal table via the MODIFY statement. This to-and-fro movement of data between the internal table and the work area takes time. If there are millions of records, it has an impact on the loading performance.

 

[Screenshot: flow_chart1.jpg]

Field-symbols:

Field symbols are placeholders or symbolic names for other fields. They do not physically reserve space for a field, but point to its contents. A field symbol can point to any data object. The data object to which a field symbol points is assigned to it after it has been declared in the program.

If we use field-symbols, this to-and-fro movement of records does not take place; the records are modified directly in the internal table. As a result, even if there are millions of records, the loading performance is not affected.

 

[Screenshot: flow_chart2.jpg]

[Screenshot: fs_flowchart.jpg]
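For comparison, a minimal sketch of the same logic using a field symbol (again an assumed reconstruction, reusing the declarations from the sketch above):

* The record is changed directly in the internal table, so no MODIFY is needed.
FIELD-SYMBOLS: <fs_kna1> TYPE ty_kna1.

SELECT kunnr name1 FROM kna1 INTO TABLE it_kna1.

LOOP AT it_kna1 ASSIGNING <fs_kna1>.
  CONCATENATE 'Mr.' <fs_kna1>-name1 INTO <fs_kna1>-name1 SEPARATED BY space.
ENDLOOP.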

 

In the above program, the MODIFY statement is not present inside the LOOP AT, because the records are already modified directly in the internal table, so the MODIFY statement is not needed.

Debug the above program and you will get a clear idea of the difference.

 

Hope this helps.


ABAP Runtime Analysis of Transformations & DTPs


Introduction

 

We develop various ABAP programs, customized t-codes and function modules in our projects as per our requirements. To fine-tune these developments, we have to check the runtimes of our ABAP programs, customized t-codes and customized function modules.

SAP has given us a nice t-code, SE30, for this purpose. You can also check the runtimes of standard ABAP programs, t-codes and FMs.

In this blog I am going to demonstrate how to do a runtime analysis of transformations and DTPs.

 

Step 1: Go to the transformation, choose Extras ---> Display Generated Program from the top menu; you will then see a screen like the one below.

[Screenshot: Trans runtime.JPG]

Step 2: Copy the technical name (the whole string) of the generated report.

Step 3: Go to the SE30 t-code, paste it into the Program text box and press Execute, as shown below.

[Screenshot: SE30.JPG]

This calculates the runtimes in three categories. You can check them by pressing the Evaluate button shown in the screenshot above.

 

Finally, you can make a note of the various runtimes from the screen below.

[Screenshot: Runtimes.JPG]

Result: The above program has spent the majority of its time in ABAP (presentation), 11.5% at database level (database systems like MSSQL, Oracle etc.) and 7.7% at system level (application).

 

Similarly, you can copy the string from DTP --> Extras --> Generated Program Filter and check the runtimes by following the steps above.

 

Note: Suppose you are analyzing some FM or t-code. When you press Execute as per Step 3, you will actually run that FM or t-code and see its result. You then need to press the Back button at the top to return to the SE30 screen, where the status bar at the bottom shows that the runtime analysis of that FM or t-code has finished. Finally, you can click on the Evaluate button to see the various runtimes.


Moving Tasks and Merging Requests in The Transport Organizer


Moving a Task :

Suppose we want to move a task to another request. For example, we want to move task XXXX1149 from request XXXX1148 (Request 1) to request XXXX1152 (Request 2).

Select the task we want to move, go to the "Utility" menu and select "Reassign Task".

[Screenshot: 1.JPG]

Provide the target request number to which the selected task will be moved.

 

[Screenshot: 2.JPG]

The task will be moved to the target request.

[Screenshot: 3.JPG]

Note: The original request is not deleted.

Merging a Request

 

Suppose we want to merge the contents of one request into another request and delete the first request. For example, we want to move all the contents of request XXXX1152 (Request 2) into request XXXX1148 (Request 1) and delete request XXXX1152 (Request 2).

Select the request we want to move, go to the "Utility" menu and select "Merge Requests".

[Screenshot: 4.JPG]

Provide the request number into which the contents of the request will be merged.

[Screenshot: 5.JPG]

All the contents of request XXXX1152 will be moved into request XXXX1148, and request XXXX1152 will be deleted.

[Screenshot: 6.JPG]

BW 7.3 : Data Flow Migration Tool


In SAP BW 7.3 we have a new wizard to migrate an entire data flow from 3.x to 7.x. Prior to BW 7.3 we had to migrate each object separately.

In 7.3 we can migrate the entire data flow to 7.x in one step.

 

For Migration Process:

 

Step 1. RSA1 --> InfoProvider context menu --> Migrate Data Flow.

[Screenshot: Capture1.PNG]

Step 2. Enter a migration project name.

[Screenshot: Capture2.PNG]

Step 3. Select the 3.x objects.

[Screenshot: Capture3.PNG]

 

Step 4. Save & click on Migrate.

Step 5. Check the status of the migration project.

[Screenshot: Capture5.PNG]

Step 6. Select the steps & click on Migrate/Recover.

[Screenshot: Capture6.PNG]

Step 7. Run the migration.

[Screenshot: Capture7.PNG]

Step 8. Migration complete: no errors.

[Screenshot: Capture8.PNG]

 

Migration Completed.


For Migration Recovery Process:

 

Step 1:

[Screenshot: Capture9.1.PNG]

Step 2: Uncheck the objects.

[Screenshot: Capture10.PNG]

[Screenshot: Capture11.PNG]

Step 3: Run the recovery.

[Screenshot: Capture12.PNG]

Recovery Process Completed.

Program for loading sample data into infocube


Struggling to find sample data to load into a newly developed InfoCube?

 

CUBE_SAMPLE_CREATE is an ABAP program that allows us to fill an InfoCube with random data without using any flat files, source system configuration, transformations etc.

 

Below are the steps to create sample data.

1. Go to transaction SE38, enter the program name "CUBE_SAMPLE_CREATE" and execute it (a programmatic alternative is sketched after these steps).

[Screenshot: 2013-07-18_234504.jpg]

2. Provide the InfoCube name and the number of records we want to update. We will find three options on the screen below:

A. Generated Values: with this option, the system generates random values for all characteristics and key figures and updates the InfoCube.

B. Vals from Master Data Tables: the system picks the characteristic values from the master data tables and generates default values for the key figures; both are then written to the InfoCube.

C. Ready for input ALV: choose this option if we want to enter the data manually.

[Screenshot: 2.jpg]

3. Click on the "Execute Directly" button. The InfoCube will then be filled with sample data in a single request.
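If you prefer to trigger the report from your own code rather than through SE38, a minimal sketch could look like the following. The selection-screen parameter names of CUBE_SAMPLE_CREATE are not documented here, so the report is simply submitted with its own selection screen:

* Minimal sketch (assumption: the InfoCube name and number of records are
* entered manually on the report's own selection screen).
SUBMIT cube_sample_create VIA SELECTION-SCREEN AND RETURN.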

Common Production Failures Encountered at BW / BI Production Support


Hi,

This is about common production failures encountered in BW/BI production support; it might be helpful.

1 Transactional RFC Error (tRFC) – Non-Updated IDocs in the Source System.


    1.1 Why does the error occur?
    • A tRFC (Transactional Remote Function Call) error occurs whenever LUWs (Logical Units of
    Work) are not transferred from the source system to the destination system.
    1.2 What happens when this error occurs?
    • Message appears in the bottom of the “Status” tab in RSMO. The error message would
    appear like “tRFC Error in Source System” or “tRFC Error in Data Warehouse” or simply
    “tRFC Error” depending on the system from where data is being extracted.
    • Sometimes IDOC are also stuck on R/3 side as there were no processors available to
    process them.
    1.3 What can be the possible actions to be carried out?
    • Once this error is encountered, we can try a complete refresh ("F6") in RSMO
    and check whether the LUWs get cleared by the system.
    • If after a couple of refreshes the error is still there, follow the steps below quickly, as it
    may happen that the load fails with a short dump.
    • Go to the menu Environment -> Transact. RFC -> In the Source System, from RSMO. It
    asks to login into the source system.
    • Once logged in, it will give a selection screen with “Date”, “User Name”, TRFC options.
    • On execution with "F8" it will give the list of all stuck LUWs. The "Status Text" will appear red for
    the stuck LUWs which are not getting processed, and the "Target System" for those LUWs should
    be "WP1CL015", that's the Bose BW production system. Do not execute any other IDoc which does
    not have "WP1CL015" as its "Target System".
    • Right Click and “Execute” or “F6” after selection, those LUW’s which are identified properly. So that
    they get cleared, and the load on BW side gets completed successfully.
    • When IDocs are stuck go to R/3, use Tcode BD87 and expand ‘IDOC in inbound Processing’ tab for
    IDOC Status type as 64 (IDoc ready to be transferred to application). Keep the cursor on the error
    message (pertaining to IDOC type RSRQST only) and click Process tab (F8) . This will push any
    stuck Idoc on R/3.
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.


    2 Time Stamp Error.
    2.1 Why does the error occur?
    • The “Time Stamp” Error occurs when the Transfer Rules/Structure (TR/TS) are internally inactive in
    the system.
    • They can also occur whenever the DataSources are changed on the R/3 side or the DataMarts are
    changed in BW side. In that case, the Transfer Rules (TR) is showing active status when checked.
    But they are actually not, it happens because the time stamp between the DataSource and the
    Transfer Rules are different.
    2.2 What happens when this error occurs?
    • The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
    in the PC.
    • Check the Transfer Rules in RSA1, Administrator Workbench.
    2.3 What can be the possible actions to be carried out?
    • Whenever we get such an error, we first need to check the Transfer Rules (TR) in the Administrator
    Workbench. Check each rule if they are inactive. If so then Activate the same.
    • You need to first replicate the relevant data source, by right click on the source system of D/s ->
    Replicate Datasources.
    • During such occasions, we can execute the following ABAP Report Program
    “RS_TRANSTRU_ACTIVATE_ALL”. It asks for Source System Name, InfoSource Name, and 2
    check boxes. For activating only those TR/TS which are set by some lock, we can check the option
    for “LOCK”. For activating only those TR/TS which are Inactive, we check for the option for “Only
    Inactive”.
    • Once executed it will activate the TR/TS again within that particular InfoSource even though they are
    already active.
    • Now re-trigger the InfoPackage again.
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.


    3 Error occurred due to Short Dump.
    3.1 Why does the error occur?
    • Whenever a Job fails with an error “Time Out” it means that the job has been stopped
    due to some reason, and the request is still in yellow state. And as a result of the same
    it resulted in Time Out error. It will lead to a short dump in the system. Either in R/3 or
    in BW.
    • A short dump may also occur if there is some mismatch in the type of incoming data. For
    example, if a date field is not in the format specified in BW, it may happen
    that instead of giving an error it gives a short dump every time we trigger the load.
    3.2 What happens when this error occurs?
    • We would get a Time Out Error after the time which is specified in the Infopackage ->
    Time Out settings (which may or may not be same for all InfoPackages). But by that
    time in between, we may get a short dump in the BW system or in the Source System
    R/3.
    • The message appears in the Job Overview in RSMO, or in “Display Message” option of
    the Process in the PC.
    3.3 What can be the possible actions to be carried out?
    • Usually “Time Out” Error results in a Short Dump. In order to check the Short Dump we go to the
    following, Environment -> Short Dump -> In the Data Warehouse / -> In the Source System.
    • Alternatively we can check the Transaction ST22, in the Source System / BW system. And then
    choose the relevant option to check the short dump for the specific date and time. Here when we
    check the short dump, make sure we go through the complete analysis of the short dump in detail
    before taking any actions.
    • In case of Time Out Error, Check whether the time out occurred after the extraction or not. It may
    happen that the data was extracted completely and then there was a short dump occurred. Then
    nothing needs to be done.
    • In order to check whether the extraction was done completely or not, we can check the “Extraction”
    in the “Details” tab in the Job Overview. Where in we can conclude whether the extraction was done
    or not. If it is a “full load” from R/3 then we can also check the no. of records in RSA3 in R/3 and
    check if the same no of records are loaded in BW.
    • In the short dump we may find that there is a Runtime Error, "CALL_FUNCTION_SEND_ERROR"
    which occurred due to Time Out in R/3 side.
    • In such cases following could be done.
    • If the data was extracted completely, then change the QM status from yellow to green. If “CUBE” is
    getting loaded then create indexes, for ODS activate the request.
    • If the data was not extracted completely, then change the QM status from yellow to red. Re-trigger
    the load and monitor the same.
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.


    4 Job Cancellation in R/3 Source System.
    4.1 Why does the error occur?
    • If the job in R/3 system cancels due to some reasons, then this error is encountered. This may be
    due to some problem in the system. Some times it may also be due to some other jobs running in
    parallel which takes up all the Processors and the jobs gets cancelled on R/3 side.
    • The error may or may not be resulted due to Time Out. It may happen that there would be some
    system hardware problem due to which these errors could occur.
    4.2 What happens when this error occurs?
    • The Exact Error message is "Job termination in source system". The exact error message may also
    differ, it may be “The background job for data selection in the source system has been terminated”.
    Both the error messages mean the same. Some times it may also give “Job Termination due to
    System Shutdown”.
    • The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
    in the PC.
    4.3 What can be the possible actions to be carried out?
    • Firstly we check the job status in the Source System. It can be checked through Environment -> Job
    Overview -> In the Source System. This may ask you to login to the source system R/3. Once logged
    in it will have some pre-entered selections, check if they are relevant, and then Execute. This will
    show you the exact status of the job. It should show “X” under Canceled.
    • The job name generally starts with “BIREQU_” followed by system generated number.
    • Once we confirm that this error has occurred due to job cancellation, we then check the status of
    the ODS / cube under the Manage tab. The latest request will be showing the QM status as red.
    • We need to re-trigger the load again in such cases as the job is no longer active and it is cancelled.
    We re-trigger the load from BW.
    • We first delete the Red request from the manage tab of the InfoProvider and then re-trigger the
    InfoPackage.
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.


    5 Incorrect data in PSA.
    5.1 Why does the error occur?
    • It may happen some times that the incoming data to BW is having some incorrect format, or few
    records have few incorrect entries. For example, expected value was in upper case and data is in
    lower case or if the data was expected in numeric form, but the same was provided in Alpha
    Numeric.
    • The data load may be a Flat File load or it may be from R/3. Mostly it may seem that the Flat File
    provided by the users may have incorrect format.
    5.2 What happens when this error occurs?
    • The error message will appear in the job overview and will tell you exactly what needs to be done for
    the error that occurred.
    The message at the bottom of the "Header" tab of the Job Overview in RSMO will have "PSA Pflege"
    (PSA maintenance) written on it, which gives you a direct link to the PSA data.
    5.3 What can be the possible actions to be carried out?
    • Once confirmed with the error, we go ahead and check the “Detail” tab of the Job Overview to check
    which Record, field and what in the data has the error.
    • Once we make sure from the Extraction, in the Details tab in the Job Overview that the data was
    completely extracted, we can actually see here, which record, which field, has the erroneous data.
    Here we can also check the validity of the data with the previous successful load PSA data.
    • When we check the data in the PSA, it will show the record with error with traffic signal as “Red”. In
    order to change data in PSA, we need to have the request deleted from Manage Tab of the
    InfoProvider first, only then it will allow to change the data in PSA.
    • Once the change in the specific field entry in the record in PSA is done, we then save it. Once data
    in PSA is changed. We then again reconstruct the same request from the manage tab. Before we
    could reconstruct the request, it needs to have QM status as “Green”.
    • This will update the records again which are present in the request
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.


    6 ODS Activation Failed.
    6.1 Why does the error occur?
    • During data load in ODS, It may happen sometimes that the data gets extracted and loaded
    completely, but then at the time of the ODS activation it may fail giving status 9 error.
    • Or due to lack of resources, or cause of an existing failed request in the ODS. For Master Data it is
    fine if we have an existing failed request.
    • This happens as there are Roll back Segment errors in Oracle Database and gives an error ORA-
    00060. When activation of data takes place data is read in Active data table and then either Inserted
    or Updated. While doing this there are system dead locks and Oracle is unable to extend the extents.
    6.2 What happens when this error occurs?
    • The exact error message would be like “Request REQU_3ZGI6LEA5MSAHIROA4QUTCOP8, data
    package 000012 incorrect with status 9 in RSODSACTREQ”. Some times it may accompany with
    “Communication error (RFC call) occurred” error. It is actually due to some system error.
    • The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
    in the PC.
    • The exact error message is “ODS Activation Failed”.
    6.3 What can be the possible actions to be carried out?
    • Whenever such an error occurs the data may or may not be completely loaded; it is only during
    activation that it fails. Hence when we see the details of the job, we can actually see which data package
    failed during activation.
    • We can once again try to manually activate the ODS. Here do not change the QM status: in the
    monitor it is green, but within the data target it is red. Once the data is activated the QM status turns
    green.
    • For successful activation of the failed request, click on the “Activate” button at the bottom, which will
    open another window which will only have the request which is/are not activated. Select the request
    and then check the corresponding options on the bottom. And then Click on “Start”
    • This will set a background job for activation of the selected request.
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.
    • In case the above does not work out, we check the size of the Data Package specified in the
    InfoPackage. In InfoPackage -> Scheduler -> DataS. Default Data Transfer. Here we can set the size
    of the Data Package. Here we need to “reduce” the maximum size of the data package. So that
    activation takes place successfully.
    • Once the size of the Data Package is reduced we again re trigger the load and reload the complete
    data again.
    • Before starting the manual activation, it is very important to check if there was an existing failed
    “Red” Request. If so make sure you delete the same before starting the manual activation.
    • This error is encountered at the first place and then rectified as at that point in time system is not
    able to process the activation process via 4 different Parallel processes. This parameter is set in
    RSCUSTA2 transaction. Later on the resources are free so the activation completes successfully.


    7 Caller 70 is missing.
    7.1 Why does the error occur?
    • This error normally occurs whenever BW encounters error and is not able to classify them. There
    could be multiple reasons for the same
    o Whenever we are loading the Master Data for the first time, it creates SID’s. If system is
    unable to create SID’s for the records in the Data packet, we can get this error message.
    o If the Indexes of the cube are not deleted, then it may happen that the system may give the
    caller 70 error.
    o Whenever we are trying to load the Transactional data which has master data as one of the
    Characteristics and the value does not exist in Master Data table we get this error. System
    can have difficultly in creating SID’s for the Master Data and also load the transactional data.
    o If ODS activation is taking place and at the same time there is another ODS activation
    running parallel then in that case it may happen that the system may classify the error as
    caller 70. As there were no processes free for that ODS Activation.
    o It also occurs whenever there is a Read/Write occurring in the Active Data Table of ODS.
    For example if activation is happening for an ODS and at the same time the data loading is
    also taking place to the same ODS, then system may classify the error as caller 70.
    o It is a system error which can be seen under the “Status” tab in the Job over View.
    7.2 What happens when this error occurs?
    • The exact error message is “System response "Caller 70" is missing”.
    • It may happen that it may also log a short dump in the system. It can be checked at "Environment ->
    Short dump -> In the Data Warehouse".
    7.3 What can be the possible actions to be carried out?
    • If the Master Data is getting loaded for the first time then in that case we can reduce the Data
    Package size and load the Info Package. Processing sometimes is based on the size of Data
    Package. Hence we can reduce the data package size and then reload the data again. We can also
    try to split the data load into different data loads
    • If the error occurs in the cube load then we can try to delete the indexes of the cube and then reload
    the data again.
    • If we are trying to load the Transactional and Master Data together and this error occurs then we can
    reduce the size of the Data Package and try reloading, as system may be finding it difficult to create
    SID’s and load data at the same time. Or we can load the Master Data first and then load
    Transactional Data
    • If the error is happening while ODS activation cause of no processes free, or available for processing
    the ODS activation, then we can define processes in the T Code RSCUSTA2.
    • If error is occurring due to Read/Write in ODS then we need to make changes in the schedule time of
    the data loading.
    • Once we are sure that the data has not been extracted completely, we can then go ahead and delete
    the red request from the manage tab in the InfoProvider. Re-trigger the InfoPackage again.
    • Monitor the load for successful completion, and complete the further loads if any in the Process
    Chain.


    8 Attribute Change Run Failed – ALEREMOTE was locked.
    8.1 Why does the error occur?
    • During Master Data loads, some times a lock is set by system user ALEREMOTE.
    • This normally occurs when HACR is running for some other MD load, and system tries to carry out
    HACR for this new MD. This is a scheduling problem.
    8.2 What happens when this error occurs?
    • The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
    in the PC.
    • The exact error message would be like, “User ALEREMOTE locked the load of master data for
    characteristic 0CUSTOMER”. Here it is specifically for the 0CUSTOMER load. It may be different
    related to Master Data InfoObject which is getting loaded.
    8.3 What can be the possible actions to be carried out?
    • Check the error message completely and also check the long text of the error message, as it will tell
    you the exact Master Data which is locked by user ALEREMOTE.
    • The lock which is set is because of load and HACR timing which clashed. We first need to check
    RSA1 -> Tools -> HACR, where in we would get the list of InfoObjects on which HACR is currently
    running. Once that is finished only then, go to the TCode SM12. This will give you few options and
    couple of default entries. When we list the locks, it will display all the locks set. Delete the lock for the
    specific entry only else it may happen that some load which was running may fail, due to the lock
    released.
    • Now we choose the appropriate lock which has caused the failure, and click on Delete. So that the
    existing lock is released. Care should be taken that we do not delete an active running job.
    Preferable avoid this solution
    • When HACR finishes for the other Master Data, trigger Attribute change run for this Master Data.


    9 SAP R/3 Extraction Job Failed.
    There are certain jobs which are triggered in R/3 based upon events created there. These events are
    triggered from SAP BW via ABAP Program attached in Process Chains. This extract job also triggers along
    with it a extract status job. The extract status job will send the status back to BW with success, failure. Hence
    it is important that the extract job, and the extract status job both get completed. This is done so that on
    completion of these jobs in R/3, extraction jobs get triggered in R/3 via Info pack from BW. Error may occur
    in the extract job or in the extract status job.
    9.1 What happens when this error occurs?
    • The exact error message normally can be seen in the source system where the extraction occurs. In
    BW the process for program in the PC will fail.
    • This Process is placed before the InfoPackage triggers, hence if the extraction program in R/3 is still
    running or is not complete, or is failed, the InfoPackage will not get triggered. Hence it becomes very
    important to monitor such loads through RSPC rather than through RSMO.
    9.2 What can be the possible actions to be carried out?
    • We login to the source system and then check the Tx Code SM37, for the status of the job running in
    R/3. Here it will show the exact status of the running job.
    • Enter the exact job name, user, date, and choose the relevant options, then execute. It will show a
    list of the job, which is Active with that name. You may also find another job Scheduled for the next
    load, Cancelled job if any, or previous finished job. The active job is the one which is currently
    running.
    • Here if the job status for the “Delay (sec.)” is increasing instead of “Duration(sec.)” then it means
    there is some problem with the extraction job. It is not running, and is in delay.
    • It may happen sometimes that there is no active job and there is a job which is in finished status with
    the current date/time.
    • The extract job and the status job both needs to be checked, because it may happen that the extract
    job is finished but the extract status job has failed, as a result of which it did not send success status
    to BW. But the extraction was complete. In such cases, we manually change the status of the Extract
    Program Process in the PC in BW to green with the help of the FM “ZRSPC_ABAP_FINISH”.
    Execute the FM with the correct name of the Program process variant and the status “F”. This will
    make the Process green triggering the further loads. Here we need to check if there is no previous
    Extract Program Process is running in the BW system. Hence we need to check the PC logs in detail
    for any previous existing process pending.
    • Monitor the PC to complete the loads successfully.
    • If in case we need to make the ABAP Process within the PC to turn “RED” and retrigger the PC, then
    we execute the FM “ZRSPC_ABAP_FINISH” with the specific variant and Job Status as “R” – which
    will turn the ABAP process RED.
    • This usually needs to be done when the Extraction Job was cancelled in R/3 due to some reason &
    we have another job in Released state and the BW ABAP Process is in Yellow state. We can then
    make the ABAP Process RED via the FM, and then re-trigger the PC.


    10 File not found (System Command for file check failed).
    10.1 Why does the error occur?
    • The system command process is placed in a PC before the infopackage Process. Hence it will check
    for the Flat File on the application server before the infopackage is triggered. This will ensure that
    when the load starts it has a Flat File to upload.
    • It may happen that the file is not available and the system command process fails. In that case it will
    not trigger the InfoPackage. Hence it is very important to monitor the PC through RSPC.
    10.2 What happens when this error occurs?
    • The error message will turn the System Command Process in the PC “Red” and the UNIX Script
    which has failed will have a specific return code which determines that the script has failed.
    10.3 What can be the possible actions to be carried out?
    • Whenever the system command process fails it indicates that the file is not present. We right-click on
    the process and choose "Display Message" to see the failed script. Here we need to check the return
    code. If the exit status is -1 then it is a failure, i.e. the process becomes red, else it becomes green in the PC.
    • We need to check the script carefully for the above mentioned exit status. And then only conclude
    that the file was really not available.
    • Once confirmed that the file is not available we need to take appropriate actions.
    • We need to identify the person who is responsible for FTPing the file on the Application server. A
    mail already goes to the responsible person, via the error message in the Process. But we also need
    to send a mail, regarding the same.
    • The Process Chains which are having the system command Process in them, and the corresponding
    actions to be taken.


    11 Table space issue.

    11.1 Why does the error occur?
    • Many a time, particularly with respect to HACR, while the program is doing realignment of
    aggregates it needs a lot of temporary table space [PSAPTEMP]. If there is a large amount of data to be
    processed and Oracle is not able to extend the table space, it gives a dump.
    • This normally happens if there are many aggregates created on the same day or there is a large
    change in the incoming Master data / Hierarchy, so that large amount of temporary memory is
    needed to perform the realignment.
    • Also whenever the PSAPODS (Which houses the many tables) is full, the data load / ODS Activation
    stops and hence we may get failures.
    11.2 What happens when this error occurs?
    • The errors ORA-01653 and ORA-01688 relate to issues with table space. The error is given as
    the ORA number, which asks to increase the table space.
    11.3 What can be the possible actions to be carried out?
    • In case the table space is full we need to contact the Basis team and ask for an increase in
    the size of the table space.
    • The increase of the table space is done by changing some parameters allocating more space which
    is defined for individual tables.


    12 How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that
    step on to the end.
    You need to set the failed request/step to green in the database as well as you need to raise the event that
    will force the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display
    messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session, go to transaction SE16 for table RSPCPROCESSLOG and display the entries with the following
    selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the
    function module and run the FM in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module like described
    as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the
    chain can run to the end.
    ABAP PROGRAM:

*&---------------------------------------------------------------------*
*& Report  ZRSPC_PROCESS_FINISH
*&---------------------------------------------------------------------*
************************************************************************
* Author: Jesper Christensen
* Date: Mar 22nd 2006
* Type: Executable Program
* Purpose/Description: Restart process chain after a failed request
************************************************************************
* MODIFICATION LOG
* Date     | Change Number | Initials | Description
* 03/22/06 |               | JMCHRIS  | Program created
************************************************************************
REPORT zrspc_process_finish.

PARAMETERS: variant  TYPE rspc_variant  OBLIGATORY,
            instance TYPE rspc_instance OBLIGATORY,
            date     TYPE sy-datum      OBLIGATORY,
            state    TYPE rspc_state    OBLIGATORY DEFAULT 'G'.

DATA: logid    TYPE rspc_logid,
      chain    TYPE rspc_chain,
      type     TYPE rspc_type,
      p_vari   TYPE rspc_variant,
      instan   TYPE rspc_instance,
      jobcount TYPE btcjobcnt,
      batchdat TYPE btcreldt,
      batchtim TYPE btcreltm.

DATA: ls_pclog LIKE rspcprocesslog.

* Select the process log entry for the given variant, instance and date
SELECT SINGLE * FROM rspcprocesslog INTO ls_pclog
  WHERE variante  = variant
    AND instance  = instance
    AND batchdate = date.

IF sy-subrc = 0.
* Set the status of the process (e.g. 'G' = green)
  CALL FUNCTION 'RSPC_PROCESS_FINISH'
    EXPORTING
      i_logid     = ls_pclog-log_id
*     i_chain     = ls_pclog-chain
      i_type      = ls_pclog-type
      i_variant   = ls_pclog-variante
      i_instance  = ls_pclog-instance
      i_state     = state
*     i_job_count = jobcount
      i_batchdate = ls_pclog-batchdate
*     i_batchtime = batchtim
    EXCEPTIONS
      error_message = 1.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE 'I' NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
ELSE.
  MESSAGE e000(ybw_usr_mon) WITH
    'Process selected does not exist' '- Check your entry'.
ENDIF.

 

Regards

Naresh

 

 

All about Process Chains in SAP BW: Step by Step and Tips


Hi,

 

This is all about process chains in SAP BW, step by step, with tips.

 

1.)    Call transaction RSPC


RSPC is the central transaction for all your process chain maintenance. Here you find on the left existing process chains sorted by “application components”.  The default mode is planning view. There are two other views available: Check view and protocol view.
2.)    Create a new process chain
To create a new process chain, press “Create” icon in planning view. In the following pop-Up window you have to enter a technical name and a description of your new process chain.


The technical name can be up to 20 characters long. Usually it starts with a Z or Y. See your project-internal naming conventions for it.
3.)    Define a start process
After entering a process chain name and description, a new window pops up. You are asked to define a start variant.



That's the first step in your process chain! Every process chain has one and only one starting step. A new step of type "Start process" will be added. To be able to define a unique start process for your chain you have to create a start variant. You have to do the same for every subsequent step: first drag a process type onto the design window, then define a variant for this type, and you have created a process step. The formula is:
Process Type + Process Variant = Process Step!
If you save your chain, the process chain name will be saved into table RSPCCHAIN. The process chain definition with its steps is stored in table RSPCPROCESSCHAIN as a modified version. So press the "Create" button; a new pop-up appears:


Here you define a technical name for the start variant and a description. In the next step you define when the process chain will start. You can choose between direct scheduling and start using meta chain or API. With direct scheduling you can define either to start immediately upon activating and scheduling or at a defined point in time, like you know it from job scheduling in any SAP system. With "start using meta chain or API" you are able to start this chain as a subchain or from an external application via the function module "RSPC_API_CHAIN_START". Press Enter, choose an existing transport request or create a new one, and you have successfully created the first step of your chain.
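As a rough illustration of the API option, a chain defined with "Start Using Meta Chain or API" could be started from your own ABAP program roughly like this. This is only a sketch: the chain name ZMY_CHAIN is hypothetical, and the full interface of RSPC_API_CHAIN_START should be checked in SE37 before use.

* Minimal sketch (assumptions: chain ZMY_CHAIN exists and uses the start
* type "Start Using Meta Chain or API"; only the basic parameters are used).
DATA: lv_logid TYPE rspc_logid.

CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = 'ZMY_CHAIN'
  IMPORTING
    e_logid = lv_logid.

WRITE: / 'Chain started, log id:', lv_logid.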
4.)    Add a loading step
If you have defined the starting point for your chain you can add now a loading step for loading master data or transaction data. For all of this data choose “Execute infopackage” from all available process types. See picture below:


You can easily move this step with drag & drop from the left side into your design window. A new pop-up window appears. Here you can choose which infopackage you want to use. You can't create a new one here. Press F4 help and a new window will pop up with all available infopackages sorted by use. At the top are infopackages used in this process chain, followed by all other available infopackages not used in the process chain. Choose one and confirm. This step will now be added to your process chain. Your chain should now look like this:


How do you connect these both steps? One way is with right mouse click on the first step and choose Connect with -> Load Data and then the infopackage you want to be the successor.



Another possibility is to select the starting point and keep left mouse button pressed. Then move mouse down to your target step. An arrow should follow your movement. Stop pressing the mouse button and a new connection is created. From the Start process to every second step it’s a black line.
5.)    Add a DTP process
In BI 7.0 systems you can also add a DTP to your chain. From the process type window (see above) you can choose "Data Transfer Process". Drag & drop it onto the design window. You will be asked for a variant for this step. Again, as with infopackages, press F4 help and choose from the list of available DTPs the one you want to execute. Confirm your choice and a new step for the DTP is added to your chain. Now you have to connect this step again with one of its possible predecessors. As described above, choose the context menu and Connect with -> Data Transfer Process. But now a new pop-up window appears.
 
Here you can choose whether this successor step shall be executed only if the predecessor was successful, only if it ended with errors, or always, regardless of success. With this connection type you can control the behaviour of your chain in case of errors. Whether a step ends successfully or with errors is defined in the process step itself. To see the settings for each step you can go to Settings -> Maintain Process Types in the menu. In this window you see all defined (standard and custom) process types. Choose Data Transfer Process and display details in the menu. In the new window you can see:


A DTP can have the possible events "Process ends successful" or "Process ends incorrect", has the ID @VK@ (which actually means the icon) and appears under category 10, which is "Load process and post-processing". Your process chain can now look like this:



You can now add all other steps necessary. By default the process chain itself suggests successors and predecessors for each step. For loading transaction data with an infopackage it usually adds steps for deleting and creating indexes on a cube. You can switch off this behaviour in the menu under “Settings -> Default Chains". In the pop-up choose “Do not suggest Process” and confirm.


Then you have to add all necessary steps yourself.
6.)    Check chain
Now you can check your chain with menu "Goto -> Checking View" or press the "Check" button. Your chain will now be checked to see whether all steps are connected and have at least one predecessor. Logical errors are not detected; that's your responsibility. If the chain check returns with warnings or is OK you can activate the chain. If the check returns errors you have to remove them first.
7.)    Activate chain
After successful checking you can activate your process chain. In this step the entries in table RSPCPROCESSCHAIN will be converted into an active version. You can activate your chain with menu "Process chain -> Activate" or press the activation button in the symbol bar. You will find your new chain under the application component "Not assigned". To assign it to another application component you have to change it. Choose the "application component" button in change mode of the chain, save and reactivate it. Then refresh the application component hierarchy. Your process chain will now appear under the new application component.
8.)    Schedule chain
After successful activation you can now schedule your chain. Press the "Schedule" button or menu "Execution -> Schedule". The chain will be scheduled as a background job. You can see it in SM37; you will find a job named "BI_PROCESS_TRIGGER". Unfortunately every process chain is scheduled with a job of this name. In the job variant you will find which process chain will be executed. During execution the steps defined in RSPCPROCESSCHAIN will be executed one after another. The execution of the next step is triggered by events defined in the table. You can watch SM37 for newly executed jobs starting with "BI_" or look at the protocol view of the chain.
9.)    Check protocol for errors
You can check chain execution for errors in the protocol or process chain log. Choose in the menu “Go to -> Log View”. You will be asked for the time interval for which you want to check chain execution. Possible options are today, yesterday and today, one week ago, this month and last month or free date. For us option “today” is sufficient.
Here is an example of another chain that ended incorrect:
 


On the left side you see when the chain was executed and how it ended. On the right side you see for every step whether it ended successfully or not. As you can see, the first two steps were successful and the "Load Data" step of an infopackage failed. You can now check the reason with the context menu entries "Display messages" or "Process monitor". "Display messages" displays the job log of the background job and the messages created by the request monitor. With "Process monitor" you get to the request monitor and see detailed information on why the loading failed. The logs are stored in tables RSPCLOGCHAIN and RSPCPROCESSLOG. Examining the request monitor will be a topic of one of my next upcoming blogs.
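If you want to look up chain runs programmatically rather than through the log view, a minimal sketch could look like this. This is my own illustration, not part of the original blog, and it assumes that RSPCLOGCHAIN links a chain to its runs via the fields CHAIN_ID and LOG_ID; verify the key fields in SE11 first.

* Minimal sketch (assumptions as stated above; ZMY_CHAIN is a hypothetical
* chain name).
DATA: lt_logs TYPE STANDARD TABLE OF rspclogchain.

SELECT * FROM rspclogchain
  INTO TABLE lt_logs
  WHERE chain_id = 'ZMY_CHAIN'.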

10.) Comments
Here just a little feature list with comments.
- You can search for chains, but it does not work properly (at least in BI 7.0 SP15).
- You can copy existing chains to new ones. That works really fine.
- You can create subchains and integrate them into so-called meta chains. But the application component menu does not reflect this structure. There is no function available to find all meta chains for a subchain or vice versa list all subchains of a meta chain. This would be really nice to have for projects.
- Nice to have would be the possibility to schedule chains with a user defined job name and not always as "BI_PROCESS_TRIGGER".
But now it's your turn to create process chains.

Process Chain Tips :

Overview

Process chain:

A Process chain is a sequence of processes that wait in the background for an event. Some of these processes trigger a separate event that can start other processes in turn.

If you use Process chains, you can

automate the complex schedules in BW with the help of the event-controlled processing,

visualize the schedule by using network applications, and

centrally control and monitor the processes.

 

This article will provide you with a few (seven) tips on the management of process chains.

1.      Transaction code used in Process chain management.

2.      How to copy the Process Chain?

3.      How to Get the Process Chain name from the Job (SM37)?

4.      How to add Process chain in RSPCM?

5.      How to remove Process chain from schedule?

6.      How to move the Process chain from one directory to another?

7.      How to find the Parent Meta chain of the process chain with “Start Using Meta Chain or API” selection?

 

Transaction code used in Process chain management.

The few transaction codes used to manage the Process chain.

RSPC => Process Chain Maintenance

RSPCM => Monitor daily process chains

RSPC1 => (Single) Process Chain Display

RZ20 => CCMS Monitoring

BWCCMS => CCMS Monitor for BW

How to copy the Process Chain?

The below steps explain the procedure to copy the existing Process chain within the system.

1. From the Process chain "Planning View", transaction code RSPC.

2. Open the relevant Process Chain (double click on it).

3. To copy the process chain, enter the transaction code "COPY" in the command field.

 

 

Enter the Name and Description for new Process Chain in the prompted window.

 

 

 

 

The process chain is thus copied; activate and schedule it with the required modifications.

 

How to Get the Process chain name from the Job (SM37)?

The below steps explain the procedure to get the Process chain name from the job name.

1. Go to SM37.

2. All jobs with the name BI_PROCESS_TRIGGER are created and triggered by process chains.

3. Select the relevant job and click on "Step".

 

 

 

 

 

 

Select the line and from “Goto” Menu path click on “Variant”.

 

 

 

4. The process chain name is displayed in the output screen.

 

 

 

How to add Process chain in RSPCM?

The RSPCM "Monitor Daily Process Chains" transaction is the central environment where we can monitor all process chains of interest.

It provides single-window monitoring for all of them.

The below steps explain the procedure to add a Process chain to the "Monitor Daily Process Chains" view (RSPCM).

1. Go to transaction code "RSPCM".

2. Click on the "Add chain" button in the application toolbar and add the relevant process chain.

 

 

How to remove Process chain from schedule?

The below steps explain the procedure to remove an already scheduled Process chain from the schedule.

1. Select the relevant Process Chain in transaction code RSPC.

2. For SAP BW 3.x, go to the menu path "Process Chain" and select "Remove from Schedule".

 

 

 

 

 

SAP BW 3.x

3. For SAP BI 7.0, from the menu path "Execution" click on "Remove from Schedule".

 

 

 

SAP BI 7.0

How to move the Process chain from one directory (Display Grouping) to another?

The below steps explain the procedure to move a Process chain from one Display Group to another.

1. Select the relevant Process Chain in transaction code RSPC.

2. Click on "Display Components".

 

 

3. Select the desired directory/display grouping.

 

 

 

 

4. The process chain is thus moved into the allocated "Display Group".

 

How to find the Parent Meta chain of the Process chain with “Start Using Meta Chain or API” selection?

The below steps explain the procedure to find the parent Meta chain from the child Process chain (with the "Start Using Meta Chain or API" selection).

Table "RSPCCHAIN" is used to find the Meta chain.

1. Go to table "RSPCCHAIN" (for example via SE16).

2. Provide the Process chain name in the selection field "Process chain".

3. As a result, the Meta chain name is displayed (an ABAP lookup is sketched below).
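The same lookup can also be sketched in ABAP. This is only an illustration and rests on an assumption: that a sub-chain appears in RSPCCHAIN as a process of type 'CHAIN' whose variant is the sub-chain's technical name; verify this in SE16 for your release before relying on it.

* Minimal sketch (assumptions as stated above; ZMY_SUBCHAIN is a
* hypothetical child chain name).
DATA: lt_parents TYPE STANDARD TABLE OF rspc_chain.

SELECT chain_id FROM rspcchain
  INTO TABLE lt_parents
  WHERE type     = 'CHAIN'
    AND variante = 'ZMY_SUBCHAIN'
    AND objvers  = 'A'.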

 

 

Regards

Koti

 

SAP BI Datawarehousing - Fundamentals


The Goal of SAP-BI

·         To offer complete end-to-end DWH solutions.

Types of Data

Basically, there are two types of data in a standard business practice.

1)      Master Data 

2)      Transaction Data

 

Master Data

·         It is the data which is not going to change frequently.

·         Maintains uniqueness (no duplicates).

·         It represents all real life entities such as customers, vendors, materials, Plants etc. (Primary Key)

Transaction Data

·         It is the data which is going to change very frequently

·         Allows duplication

·         Maintains foreign Key (reference to the master data)

 

OLTP (Online Transaction Processing)

·         Current data in detailed view

·         Read and Write possibility

·         Less volume of data

·         Flat reporting

 

OLAP (Online Analytical Processing)

·         Historical data in summarized view

·         Read possibility

·         Huge volume of data

·         Multi -dimensional reporting

 

Entity Relation Ship Model

o   Entity

·         Any object which can perform work by itself.

·         All real life objects such as customers, vendors, plants, materials, etc……

·         Every entity maintains its own attributes, such as name, age, address, phone.

An attribute is nothing but a property or behavior of an entity.

Relationship

It is an association between two or more entities.

Relationships are 3 types.

  1. one to one          
  2. One to many     
  3. Many to many

Schema

The representation of database tables and their relationships is called Schema.

 

ER model

Based on entities and Relationships between entities, we design the database by using

ER model.

Ex:

Customer –Entity

Customer no, Customer name, Customer add-  attributes / properties.

Note:

·         OLTP applications are designed with the ER model

·         ER model is normalized

·         It is 2 dimensional.

 

Multi- Dimensional Modeling:

 

In MDM, all real life objects such as customers, vendors, materials, plants etc are mapped to Dimensions.

Dimension is an angle of viewing or analyzing the data.

 

Star Schema

·         A fact table at the center surrounded by several dimension tables seems to be a star.  Hence the schema is called Star Schema.

·         The model based on the star schema is called Cube.

·         In a star schema model, the fact table maintains millions to billions of records (duplicate records).

·         On the other hand, dimension tables are usually small. This means that a dimension table contains a few thousand to a few million records (no duplicate records).

·         In a star schema model the fact table contain transaction data and the dimension table contains Master Data

Fact Table:  The collection of facts or measures or key figures is called a Fact Table.

                      Generally Fact Table handles Transaction data and it is very large.

Dimension Table: The collection of characteristics is called a Dimension Table.

                                Dimension Table handles Master data and it is small.

The model based on the Star schema is called Cube.

Limitations of Star Schema

Master data is not reused, so it is maintained redundantly (the master data sits inside the cube).

Degraded performance (the tables maintain alphanumeric keys).

Limited analysis (we can analyze data from only 16 angles, each dimension holding up to 248 characteristics).

Extended Star Schema

Star Schema + SID technology.

In the Extended Star Schema, if attributes, texts and hierarchies were all kept in one table we would get a denormalization problem, so for master data these are maintained in separate tables.

Data Design:

Master data is outside the cube, so it can be reused by other InfoProviders.

Performance is improved.

More analysis is possible (16 * 248): each dimension can contain up to 248 SIDs.

 

SID (surrogate id)

Every characteristic has a SID table, but key figures do not: characteristic values can be alphanumeric, so the SID is used to convert the alphanumeric value to a numeric key, whereas key figures are always numeric and therefore need no SID.

 

Advantages of Extended Star Schema:

·         Faster loading of data/ faster access to reports

·         Sharing of master data

·         Easy loading of time dependent objects

 

Differences between Classical Star Schema and Extended Star Schema:

·         In the classic star schema, the dimension and master data tables are the same. But in the extended star schema, the dimension and master data tables are different (master data resides outside the InfoCube; the dimension tables are inside the InfoCube).

·         In Classic star schema we can analyze only 16 angles (perspectives) whereas in extended star schema we can analyze in 16*248 angles. Plus the performance is faster to that extent.


MESSAGE_TYPE_X of type RSQBW dump after adding an InfoObject Compounding tab


It seems pretty obvious that, after adding an InfoObject to the compounding tab, it is necessary to adjust the relevant InfoProviders and transformations to include the new compounding characteristic.

However, I ran into the following scenario on a BW 7.01 SP 10 system which seems to suggest it is necessary to adjust the Infoproviders/Transformations first, and only then change the compounding tab. As I didn't find mention of this specific case, I thought it might be worth sharing.

 

I was required to replace the superior infoobject YSUPER with the new infoobject ZSUPER in the infoobject YOBJECT. YOBJECT was included in a loading model which worked as follows:

 

DSO->Infoset->infosource->cube

 

Obviously with other DSOs under the infoset, and with relevant transformations between the infoproviders. All of them included YSUPER.

After adding ZSUPER into the compounding tab, the DSO and the cube activated as expected after adding ZSUPER into them.

However, the transformations, infoset and infosource all caused a dump with MESSAGE_TYPE_X of type RSQBW upon activation.

 

What I did at that point was to change the YOBJECT back to the state where it included YSUPER in the compounding tab, and then add ZSUPER to all relevant Infoproviders and transformations. Only then did I replace YSUPER with ZSUPER in the compounding tab. All relevant entities now activated without the dump.

Use of Into corresponding fields.


Hi...

 

Usage of Into corresponding fields :

Internal table:

Internal tables provide a means of taking data from a fixed structure and storing it in working memory in ABAP. The data is stored line by line in memory, and each line has the same structure.

 

1.jpg

         

 

The SELECT statement fetches the columns erdat, kunnr, name1 and land1 from table KNA1 and places the data in the internal table. Without further additions, the data is inserted position by position, in the order of the fields in the SELECT list, not according to the field names of the structure.

 

2.jpg

 

The above picture shows the internal table filled with data by the SELECT statement, but the data of erdat has been stored in the kunnr field of the internal table and the kunnr data in the erdat field. Please look at the picture closely.

Whenever we execute the above statement, we get the below dump.

 

3.jpg

 

 

 

The dump is raised because the data of one field is being placed into a field of a different type; the dump message explains this clearly.

To avoid this situation we use the addition “INTO CORRESPONDING FIELDS”, which places the data in the required fields. The program below demonstrates this: whenever we use the addition, each column is placed into the field of the same name, irrespective of the order in the SELECT statement.

 

 

4.jpg

 

 

 

Please observe closely how the data moves when the SELECT statement is executed; I have drawn a curve showing the direction in which the records are stored. A plain-text version of the corrected SELECT is sketched below.
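
Since the program itself is shown only as a screenshot above, here is a minimal text sketch of the corrected statement (the structure definition and the row limit are only illustrative):

TYPES: BEGIN OF ty_kna1,
         kunnr TYPE kna1-kunnr,
         name1 TYPE kna1-name1,
         land1 TYPE kna1-land1,
         erdat TYPE kna1-erdat,
       END OF ty_kna1.

DATA it_kna1 TYPE STANDARD TABLE OF ty_kna1.

* Although the field order in the SELECT list differs from the structure,
* INTO CORRESPONDING FIELDS places each column into the field of the same name.
SELECT erdat kunnr name1 land1
  FROM kna1
  INTO CORRESPONDING FIELDS OF TABLE it_kna1
  UP TO 100 ROWS.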

 

 

Hope you got it.

Mapping of the Objects in Multiprovider


Requirement:


Business wants to analyze Partner and Membership details for a single Partner.

 

To achieve the above requirement we get the data from different InfoCubes, which are included in one MultiProvider.

 

Please find the screenshot below...

 

Multiprovider Mapping:

 

 

0CALMONTH Mapping:

 

 

Partner Mapping:

 

 


 

Dimension ID Mapping:

 

 

Membership Id Mapping:

 

 

In this way we need to assign the InfoObjects in the MultiProvider.

 

 

Thanks,

Purushotham.

List of Function Modules used in SAP BI/BW


Hi

 

Please find below the important function modules used in SAP BI/BW; a short usage sketch for one of them follows the list.

 

1.RRMX_WORKBOOK_DELETE: Delete BW Workbooks permanently from Roles & Favorites

 

2.RRMX_WORKBOOK_LIST_GET: Get list of all Workbooks

 

3.RRMX_WORKBOOK_QUERIES_GET: Get list of queries in a workbook

 

4.RRMX_QUERY_WHERE_USED_GET: Lists where a query has been used

 

5.RRMX_JUMP_TARGET_GET: Get list of all Jump Targets

 

6.RRMX_JUMP_TARGET_DELETE: Delete Jump Targets

 

7.MONI_TIME_CONVERT: Used for Time Conversions.

 

8.CONVERT_TO_LOCAL_CURRENCY: Convert Foreign Currency to Local Currency.

 

9.CONVERT_TO_FOREIGN_CURRENCY: Convert Local Currency to Foreign Currency.

 

10.TERM_TRANSLATE_TO_UPPER_CASE: Used to convert all texts to UPPERCASE

 

11.UNIT_CONVERSION_SIMPLE: Used to convert any unit to another unit. (Ref. table: T006)

 

12.TZ_GLOBAL_TO_LOCAL: Used to convert timestamp to local time

 

13.FISCPER_FROM_CALMONTH_CALC: Convert 0CALMONTH or 0CALDAY to Financial Year or Period

 

14.RSAX_BIW_GET_DATA_SIMPLE: Generic Extraction via Function Module

 

15.RSAU_READ_MASTER_DATA: Used in Data Transformations to read master data InfoObjects

 

16.RSDRI_INFOPROV_READ

 

17.RSDRI_INFOPROV_READ_DEMO

 

18.RSDRI_INFOPROV_READ_RFC: Used to read Infocube or ODS data through RFC

 

19.DATE_COMPUTE_DAY

 

20.DATE_TO_DAY: Returns a number what day of the week the date falls on.

 

21.DATE_GET_WEEK: Will return a week that the day is in.

 

22.RP_CALC_DATE_IN_INTERVAL: Add/Subtract Years/Months/Days from a Date.

 

23.RP_LAST_DAY_OF_THE_MONTHS

 

24.SLS_MISC_GET_LAST_DAY_OF_MONTH: Determine Last Day of the Month.

 

25.RSARCH_DATE_CONVERT: Used for Date Conversions. We can use it in InfoPackage routines.

 

26.RSPC_PROCESS_FINISH: To trigger an event in process chain

 

27.DATE_CONVERT_TO_FACTORYDATE: Returns factory calendar date for a date

 

28.CONVERSION_EXIT_PDATE_OUTPUT: Conversion Exit for Domain GBDAT: YYYYMMDD - DD/MM/YYYY

 

29.CONVERSION_EXIT_ALPHA_INPUT: Conversion exit ALPHA, external->internal

 

30.CONVERSION_EXIT_ALPHA_OUTPUT: Conversion exit ALPHA, internal->external

 

31.RSPC_PROCESS_FINISH: Finish a process (of a process chain)

 

32.RSAOS_METADATA_UPLOAD: Upload of meta data from R/3

 

33.RSDMD_DEL_MASTER_DATA: Deletion of master data

 

34.RSPC_CHAIN_ACTIVATE_REMOTE: To activate a process chain after transport
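
As a small illustration of how these function modules are typically called, here is a minimal sketch for CONVERSION_EXIT_ALPHA_INPUT (the material number used is only an example):

DATA: lv_matnr_ext TYPE matnr VALUE '4711',
      lv_matnr_int TYPE matnr.

* Convert the external (display) format to the internal format by padding with leading zeros.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_matnr_ext
  IMPORTING
    output = lv_matnr_int.

* lv_matnr_int now holds the material number padded to the full field length with leading zeros.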

 

Thanks,

Purushotham.

Find out the list of datasources where the R/3 field is used in BW


When executing a gap analysis to check whether all necessary R/3 fields have already been extracted and transferred to BW, we may want to find out the list of DataSources in which a given R/3 field is used.

We can use the RSOLTPSOURCEFIE table.

 

For example: the R/3 field is MATNR and we want to know in which DataSources MATNR is used.

 

Goto RSOLTPSOURCEFIE table.

 

Provide the source system name in the “LOGSYS” field and MATNR in “FIELDNM”, and execute.

 

1.jpg

Then we will get the list of all datasources which use R/3 field MATNR.

 

2.jpg
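
The same lookup can also be done with a small ABAP snippet instead of SE16. A minimal sketch (the field names LOGSYS and FIELDNM are taken from the selection screen above; the source system name used here is only an example):

DATA lt_fields TYPE STANDARD TABLE OF rsoltpsourcefie.

* List all DataSource field entries of the given source system that use the R/3 field MATNR.
SELECT * FROM rsoltpsourcefie
  INTO TABLE lt_fields
  WHERE logsys  = 'R3CLNT100'
    AND fieldnm = 'MATNR'.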

 


Find out to which BW InfoObjects a particular R/3 field is mapped


If we want to see, per DataSource/transfer structure, to which BW InfoObject an R/3 field is mapped, this can be viewed using the RSTSFIELD table.

 

Goto  RSTSFIELD table.

 

Provide the source system name and the R/3 field (for example MATNR) in the “LOGSYS” and “FIELDNM” fields and execute.

 

In the below output screen we can see that R/3 field "MATNR" is mapped to infoobject "0MATERIAL" within several transferstructures/datasources.

 

 

3.jpg
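
A minimal sketch of the same lookup in ABAP (LOGSYS and FIELDNM are the fields named above; it is assumed here that the mapped InfoObject appears in the column IOBJNM, and the source system name is only an example):

DATA lt_tsfields TYPE STANDARD TABLE OF rstsfield.

* Per transfer structure, list the rows mapping the R/3 field MATNR.
* The mapped InfoObject is assumed to be returned in the column IOBJNM of each row.
SELECT * FROM rstsfield
  INTO TABLE lt_tsfields
  WHERE logsys  = 'R3CLNT100'
    AND fieldnm = 'MATNR'.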

Fetch ECC table data into BW using RFC function Module


Business Requirement:

In ECC, some mapping tables are available.

Their data is to be copied into BW.

Implementation Logic in BW

This can be achieved in 2 ways:

  1. Create a table-based extractor in ECC and load the data into BW.
  2. Copy the table data directly from ECC to BW using the RFC function module 'RFC_GET_TABLE_ENTRIES'.

I try to explain the second approach here.

Logic

* This logic is assumed to live inside a custom (Z) function module that has the importing
* parameters DESTINATION (RFC destination of the system from which the data is to be fetched)
* and TABLE_NAME (name of the table whose data we want to fetch), the exceptions
* INTERNAL_ERROR, TABLE_EMPTY and TABLE_NOT_FOUND, and messages e001-e007 in its message class.

  DATA: lt_tab1       TYPE STANDARD TABLE OF tab512, " raw rows returned by the RFC (512-char lines)
        wa_tab1       TYPE tab512,
        lt_fcat       TYPE lvc_t_fcat,               " field catalog of the target table
        lt_dyn_table  TYPE REF TO data,              " dynamically created internal table
        ls_dyn_struct TYPE REF TO data,              " dynamically created work area
        l_success     TYPE i.

  FIELD-SYMBOLS: <l_dyn_table>  TYPE STANDARD TABLE,
                 <l_dyn_struct> TYPE any.

* Read the table contents from the remote system.
  CALL FUNCTION 'RFC_GET_TABLE_ENTRIES' DESTINATION destination
    EXPORTING
      table_name      = table_name
    TABLES
      entries         = lt_tab1
    EXCEPTIONS
      internal_error  = 1
      table_empty     = 2
      table_not_found = 3
      OTHERS          = 4.
  IF sy-subrc <> 0.
    CASE sy-subrc.
      WHEN 1.
        MESSAGE e001.
        RAISE internal_error.
      WHEN 2.
        MESSAGE e002.
        RAISE table_empty.
      WHEN 3.
        MESSAGE e003.
        RAISE table_not_found.
      WHEN OTHERS.
        MESSAGE e004.
        RAISE internal_error.
    ENDCASE.
  ENDIF.

* The data returned by the FM is in raw character format, so we create the table structure
* dynamically and then move the data into it.

* Build the field catalog of the target table.
  CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
    EXPORTING
      i_structure_name       = table_name
    CHANGING
      ct_fieldcat            = lt_fcat
    EXCEPTIONS
      inconsistent_interface = 1
      program_error          = 2
      OTHERS                 = 3.
  IF sy-subrc <> 0.
    CASE sy-subrc.
      WHEN 1.
        MESSAGE e005.
      WHEN 2.
        MESSAGE e006.
      WHEN OTHERS.
        MESSAGE e007.
    ENDCASE.
  ENDIF.

* Generate a dynamic internal table from the field catalog.
  CALL METHOD cl_alv_table_create=>create_dynamic_table
    EXPORTING
      it_fieldcatalog = lt_fcat
    IMPORTING
      ep_table        = lt_dyn_table.

  ASSIGN lt_dyn_table->* TO <l_dyn_table>.
  CREATE DATA ls_dyn_struct LIKE LINE OF <l_dyn_table>.
  ASSIGN ls_dyn_struct->* TO <l_dyn_struct>.

* Move the raw character rows into the typed structure.
* Note: this character-based move only works cleanly for tables whose fields are character-like.
  LOOP AT lt_tab1 INTO wa_tab1.
    <l_dyn_struct> = wa_tab1.
    APPEND <l_dyn_struct> TO <l_dyn_table>.
  ENDLOOP.

* Write the data into the identically named transparent table in BW (dynamic table name).
  MODIFY (table_name) FROM TABLE <l_dyn_table>.
  IF sy-subrc = 0.
    l_success = 1.
  ENDIF.

Advantages of this approach:

  1. We can store the data in a transparent table (an SE11 Z table).
  2. If the data volume is fixed and small, it takes less time than the normal extraction process.

Disadvantages of this approach:

  1. If the data volume is high and increases over a period of time, internal table memory errors may occur.

How to kill the job when a DTP request runs for long hours


Solution: Change the status of the request in the underlying table RSBKDTPSTAT from "active" to "delete".

 

  1. Go to SE16.
  2. Enter the table RSBKDTPSTAT.
  3. Enter the DTP request ID in the selections.
  4. Change the value of the fields "USTATE" and "TSTATE" from 5 (active) to 4 (delete).
  5. Refresh the monitor screen of the DTP request; the status is now shown in red.

 

Below is the list of underlying tables where DTPs and DTP request information is stored.

Table             Description

RSBKDTP           BW: Data Transfer Process Header Data
RSBKDTPH          DTP: Historic Versions
RSBKDTPSTAT       Status Information on Data Transfer Process
RSBKDTPT          Texts on Data Transfer Processes
RSBKDTPTH         Texts on Data Transfer Processes
RSDDSTATDTP       Table for WHM Statistics. Details DTP
RSOACUBE_DTP      BW: OLTP Direct Access: Directory of Assigned Remote DTPs
RSBKDATAPAKSEL    DTP: Data Package Selections
RSBKSELECT        Selections for DTP Request (Summary)
RSBKREQUEST       DTP Request
RSBKREQUEST_V     View of DTP Request
RSBKBP            Breakpoints
RSBKDATAINFO      Information on DTP Runtime Buffers
RSBKDATAPAKID     DTP: Status Table for Data Packages
RSBKSUBSTEP       Properties of Substeps in a DTP

Avoid the SID Generation Error While Activating Data in a DSO


Hi,

 

This is about avoiding the SID generation error while activating data in a DSO.

 

 

You might run into one of these error messages while activating data in a DataStore object (DSO), either manually or from a process chain:
• “Activation of M records from DataStore object terminated”
• “Resource error. No batch process available. Process terminated”
• “Time limit exceeded. No return of the split processes”
When you create a DSO, the system sets the SIDs Generation upon Activation flag by default. It is a check box option in the edit-mode settings of the DSO. If this option is checked, the system checks the SID values for all of the characteristics in the DSO. If a SID value for a particular characteristic doesn’t exist, the system generates it. The SIDs Generation upon Activation option thus helps query performance, as the system doesn’t have to generate SIDs at query runtime.

 

 

 

The general understanding is that the error messages in above Figure during activation of a DSO are due to the SIDs Generation upon Activation setting. However, we will show that the error messages are not due to this setting, but rather to incorrect parameterization of the processes to activate requests. This means that several background processes were running simultaneously (i.e., activation of requests in DSO and SID creation), resulting in the termination of the request. If a process chain is used for activation of a DSO, all the above processes still run simultaneously in the background. You can use transaction RZ04 to check how many background processes are available in the system at the time of load.

 

 

 

You can change the runtime parameters for the affected DSO by going to transaction RSODSO_SETTINGS. Note that transaction RSCUSTA2 is obsolete in SAP NetWeaver BI 7.0. In the RSODSO_SETTINGS screen select the DSO in question and click on the Change button to change the runtime parameters. On the Maintenance of Runtime Param. screen click on the Change Process Params. button under Parameter for Activation, as the issue right now is an activation error.

 

Alternatively, you can get to this screen from the context menu of the DSO by selecting Manage, which is the activation request that failed. Click on the Activate button just as you would to activate a request that is loaded to the activation queue. The Activate Data in DSO… window pops up. Click on the Activate in Parallel button. A pop-up window displays the process type ODSACTIVAT.

 

Maximum Wait Time for Process is set to 300 seconds by default, but you can increase it to a higher value if you think the system workload will be high. If you choose Dialog process as an option, then SAP recommends that the wait time in SAP NetWeaver BI be three times higher than in R/3 (SAP Note 1118205). SAP Note 192658 also recommends that you set the maximum runtime for the work process to 3,600 seconds. After you click on the Change Process Params. button, you see the settings window.

 


Enter the Number of Processes. Under Parallel Processing, select Dialog. Select parallel generators in Server Group for Parallel Dialog Processing and then click on the save icon. You can re-initiate the failed activation again and the data should be activated now without any issues.

 


After the successful activation of data in the DSO, you can revert back to the normal settings for the DSO if necessary to avoid having too many dialog processes. If the activation of data in the DSO is done through process chains (transaction RSPC), you can access the settings. If many DSOs are failing in activation and the above parameters have to be changed for all of these DSOs, then SAP recommends updating the table RSBATCHPARALLEL directly. However, one should have security access to change the data in the tables directly and the steps we describe are easier to perform for individual DSOs.

 

 


 

 

 

Regards

 

Sudhakar

To upload Sales Order data from SAP R/3 to BI using generic DataSources


Hi Guys,

 

 

Below content will surely help you for your reference.

 

To upload Sales Order data from SAP R/3 to BI using generic DataSources

The Following Steps to be followed in R/3

Step 1: Go to T-code RSO2. Give the DataSource name under Transactional data and click on Create.

Step 2:

  1. Write the Application Component name
  2. Fill in the text column with description
  3. In “Extraction from DB view” give the table/View name from which you want to extract data.
  4. SAVE  

  Note: Click on F4 to choose the Application Component name

Step 3: Clicking on SAVE will lead to the below screen.  

Note: We can do “Selection”, “Hide” the fields etc as shown in the below screen shot

Step 4: Go to T-code RSA6 to check whether the DataSource is successfully activated.

  Note: Only Datasources which are activated will be displayed in RSA6 transaction  

Following Steps to be followed in BI

Note: The RFC connection should be configured before you replicate data to BI system

Step1:  

  1. Goto RSA1 => DataSources
  2. Right click on Sales and Distribution tree => Replicate Metadata
  3. Activate

 

Step 2: Once the DataSource is replicated, you get the below popup; we need to choose “as DataSource” as we are using BI 7.0.

Step 3: Create an InfoPackage on the DataSource.

InfoPackage: It acts as a link to extract data from Source system and gives it to PSA

Step 4: Go to “Schedule” and click on “Start”.

Step 5: Click on icon “Monitor” on the toolbar to view if the data is successfully loaded to PSA  

PSA: Persistent Staging Area is a staging area where you can load the data temporarily before loading to target

 

In the below screen shot, you can see the status – “Request successfully loaded to PSA”.

Below Screen shot shows the data loaded to PSA 

Following Steps to build the Target system

Step 6:

  1. Right-click on the DataSource and click on “Create InfoCube”.

  Step 7: Create Dimension table. Dimension table will have all “Primary Keys”. In our example VBELN – Sales Doc No is the primary key 

Note: We need to define at least one KeyFigure

Step 8: Create DTP  

DTP: DTP is used to transfer data from PSA to Data target. In our case the target is InfoCube.

The below screen shot shows the Source and Target DTP  

Note: In our case the source is DataSource and Target is InfoCube  

Select Extraction Mode as “Full” as we are loading the data for the first time  

Step 9:

  1. Save and Activate
  2. Go to “Execute” tab and click on Execute

Below screen shot shows the DTP monitor. Green indicates that the data is successfully loaded to the target system  

Step 10: To see if the data is successfully loaded to target system,

Right-click on the InfoCube => Manage

Note: When you create DTP a Request Id will be generated as shown in the below screen shot

Step 11: Go to “Contents” => “InfoCube Content” to see the output  

Thanks,

Priya

 

 

Infocubes - Concepts (SAP BW)


Infocubes - Concepts (SAP BW)

 

1. INFOCUBE-INTRODUCTION:


The central objects upon which the reports and analyses in BW are based are called InfoCubes, and they can be seen as InfoProviders. An InfoCube is a multidimensional data structure and a set of relational tables that contain InfoObjects.

2. INFOCUBE - STRUCTURE
The structure of an InfoCube follows the Extended Star Schema (Snowflake Schema) and contains
• 1 Fact Table
• n Dimension Tables
• n Surrogate ID (SID) tables
• n Master Data Tables
• n Hierarchy Tables
The Fact Table holds the key figures.
The n Dimension Tables hold the characteristics.
The n Surrogate ID (SID) tables link the master data tables and hierarchy tables.
The n Master Data Tables can be time dependent and can be shared by multiple InfoCubes. A master data table contains the attributes that are used for presenting and navigating reports in the SAP BW system.

3. INFOCUBE TYPES:

 

• Basic Cubes reside on the same database
• Remote Cubes reside on a remote system
• An SAP RemoteCube resides on another R/3 system and uses the Service API (SAPI)
• A general RemoteCube resides on a non-SAP system and uses BAPIs
• A RemoteCube with Services resides on a non-SAP system

BASIC CUBE: 2 TYPES: These are physically available in the same BW system in which they are specified or their meta data exist.
STANDARD INFOCUBE (FREQUENTLY USED): Standard InfoCubes are common and are optimized for read access; they have update rules that enable transformation of the source data, and loads can be scheduled.

TRANSACTIONAL INFOCUBE: Transactional InfoCubes are not frequently used; they are used only by certain applications such as SEM and APO. Data is written directly into such cubes, bypassing update rules.

REMOTE CUBES: 3 TYPES: Remote cubes reside on a remote system; only their metadata is gathered in BW, so they are considered Virtual Cubes. These are the remote cube types:

SAP REMOTE CUBE: The cube resides on another SAP R/3 system and communication is via the Service API (SAPI).

GENERAL REMOTE CUBE: The cube resides on a non-SAP source system and communication is via BAPI.

REMOTE CUBE WITH SERVICES: The cube resides on any remote system, i.e. SAP or non-SAP, and is accessed via a user-defined function module.

4. INFOCUBE TABLES - F, E, P, T, U, N
Transaction Code: LISTSCHEMA
In LISTSCHEMA, enter the name of the InfoCube (for example 0SD_C03) and execute. Upon execution the primary (fact) table is displayed as an unexpanded node. Expand the node and see the screen.
These are the tables we can see under expanded node:


5. INFOCUBE-UTILITIES
PARTITIONING
Partitioning is the method of dividing a table into multiple, smaller, independent or related segments(either column wise or row wise) based on the fields available which would enable a quick reference for the intended values of fields in  the table.
For partitioning a data set, at least one of the two partitioning criteria 0CALMONTH and 0FISCPER must be present.

ADVANTAGES OF PARTITIONING:• Partitioning allows you to perform parallel data reads of multiple partitions speeding up the query execution process.
• By partitioning an InfoCube, the reporting performance is enhanced because it is easier to search in smaller tables, so maintenance becomes much easier.
• Old data can be quickly removed by dropping a partition.
You can set up partitioning in InfoCube maintenance under Extras > Partitioning.

CLASSIFICATION OR TYPES OF PARTITIONING
PHYSICAL PARTITIONING/TABLE/LOW LEVEL
Physical Partitioning also called table/low level partitioning is restricted to Time Characteristics and is done at Data Base Level, only if the underlying database allows it.
Ex: Oracle, Informix, IBM, DB2/390
Here is a common way of partitioning is to create ranges. InfoCube can be partitioned on a time slice like Time Characteristics as below.
• FISCAL YEAR (0FISCYEAR)
• FISCAL YEAR VARIANT (0FISCVARNT)
• FISCAL YEAR/PERIOD (0FISCPER)
• POSTING PERIOD (0FISCPER3)
By this physical partitioning old data can be quickly removed by dropping a partition.
note: No partitioning in B.I 7.0, except DB2 (as it supports)

LOGICAL PARTITIONING/HIGH LEVEL PARTITIONING
Logical partitioning is done at the MultiCube (several InfoCubes joined into a MultiCube) or MultiProvider level, i.e. at the data target level. In this case related data is separated and joined into a MultiCube.
Here the restriction is not limited to time characteristics: you can also partition on plan and actual data, regions, business areas, etc.
Advantages:
• As per the concept, the MultiCube uses parallel sub-queries, which ultimately improves query performance.
• Logical partitioning does not consume any additional database space.
• When a sub-query hits a constituent InfoProvider, a reduced set of data is read from a smaller InfoCube instead of one large InfoCube target, even in the absence of a MultiProvider.

EXAMPLES OF PARTITIONING USING 0CALMONTH & 0FISCPER
THERE ARE TWO PARTITIONING CRITERIA:
calendar month (0CALMONTH)
fiscal year/period (0FISCPER)
At any one time we can partition a dataset using only one of the above two criteria.
In order to partition, at least one of the two InfoObjects must be contained in the InfoCube.
If you want to partition an InfoCube using the fiscal year/period (0FISCPER) characteristic, you have to set the fiscal year variant characteristic to constant.
After activating the InfoCube, the fact table is created on the database with the number of partitions corresponding to the value range.
You can set the value range yourself.
Partitioning InfoCubes using Characteristic 0CALMONTH:
Choose the partitioning criterion 0CALMONTH and give the value range as
From=01.1998
to=12.2003
So how many partitions are created after partitioning?
6 years * 12 months + 2 = 74 partitions are created
2 partitions for values that lay outside of the range, meaning < 01.1998 or >12.2003.
You can also determine how many partitions are created as a maximum on the database for the fact table of the InfoCube.
You choose 30 as the maximum number of partitions.
Resulting from the value range:
6 years *12 calendar months + 2 marginal partitions (up to 01.1998, from 12.2003)= 74 single values.
The system groups three months at a time together in a partition
4 Quarters Partitions = 1 Year
So, 6 years * 4 partitions/year + 2 marginal partitions = 26 partitions are created on the database.
The performance gain is only achieved for the partitioned InfoCube if the time dimension of the InfoCube is consistent.
This means that all values of the 0CAL* characteristics of a data record in the time dimension must fit each other with a partitioning via 0CALMONTH.
Note: You can only change the value range when the InfoCube does not contain any data.

PARTITIONING INFOCUBES USING THE CHARACTERISTIC 0FISCPER
The mandatory prerequisite here is to set the value of the 0FISCVARNT characteristic to constant.

STEPS FOR PARTITIONING AN INFOCUBE USING 0CALDAY & 0FISCPER:
Administrator Workbench
   >InfoSet maintenance
     >double click the InfoCube
        >Edit InfoCube
           >Characteristics screen
              >Time Characteristics tab
                 >Extras
                    >IC Specific Properties of InfoObject
                       >Structure-Specific Properties dialog box
                         >Specify constant for the characteristic 0FISCVARNT
                           >Continue
                              >In the dialog box enter the required details

Partition Errors:
F fact tables of partitioned InfoCube have partitions that are empty, or the empty partitions do not have a corresponding entry in the related package dimension.
Solution 1: The report SAP_PARTITIONS_INFO_GET_DB4 helps you to analyze these problems. The empty partitions of the F fact table are reported. In addition, the system issues an information message if there is no corresponding entry for a partition in the InfoPackage dimension table (orphaned).
When you compressed the affected InfoCube, a database error occurred in DROP PARTITION after the actual compression. However, this error was not reported to the application. The logs in the area of compression do not display any error messages. The error is not reported in the developer trace (transaction SM50), the system log (transaction SM21) or the job overview (transaction SM37) either.
The application assumes that the data in the InfoCube is correct, but the data of the affected requests or partitions is not displayed in reporting because they do not have a corresponding entry in the package dimension.
Solution 2: Use the report SAP_DROP_FPARTITIONS to remove the orphaned or empty partitions from the affected F fact tables, as described in note 1306747, to ensure that the database limit of 255 partitions per database table is not reached unnecessarily.

REPARTITIONING: Repartitioning is a method of partitioning used for a cube which is already partitioned and has loaded data. Over a period of time the original partitioning may no longer match the data actually in the cube (for example, partitions may contain little or no data because of data archiving), so the partitions need to be adapted.
You can access repartitioning in the Data Warehousing Work Bench using Administrator>Context Menu of your InfoCube.
REPARTITIONING - 3 TYPES: A) Complete repartitioning,
B) Adding partitions to an e fact table that is already partitioned and
C) Merging empty or almost empty partitions of an e fact table that is already partitioned

REPARTITIONING - LIMITATIONS- ERRORS:SQL 2005 partitioning limit issue: error in SM21 every minute as we reached the limit for number of partitions per SQL 2005(i.e. 1000)

COMPRESSION OR COLLAPSE:Compression reduces the number of records by combining records with the same key that has been loaded in separate requests.
Compression is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs. You must be certain that the data loaded into the InfoCube is correct.
The user-defined partitioning only affects the compressed E fact table.
By default  F-Fact Table contains data.
By default SAP allocates a Request ID for each posting made.
By using Request ID, we can delete/select the data.
As we know, the E fact table holds the compressed data and the F fact table holds the uncompressed data.
When compressing, the data from the F fact table is transferred to the E fact table and all the request IDs are lost / deleted / set to null.
After compression, the space used by the E fact table is comparably smaller than that used by the F fact table.
The F fact table (uncompressed) uses BITMAP indexes.
The E fact table (compressed) uses B-TREE indexes.
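
A small illustrative example of what compression does (the dimension keys and quantities are made up): suppose two requests load a record with the same dimension key.

F fact table before compression:
Request 1 | Customer dimension key 100 | Quantity 10
Request 2 | Customer dimension key 100 | Quantity  5

E fact table after compression (request ID removed, rows with identical keys combined):
Request 0 | Customer dimension key 100 | Quantity 15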

INDEX/INDICES
PRIMARY INDEX
The primary Index is created automatically when the table is created in the database.
SECONDARY INDEX(Both Bitmap & B-Tree are secondary indices)
Bitmap indexes are created by default on each dimension column of a fact table
& B-Tree indices on ABAP tables.

RECONSTRUCTION: Reconstruction is the process by which you load data into the same cube/ODS or a different cube/ODS from the PSA. The main purpose is that, after requests have been deleted (for example through compression/collapse), we do not need to go back to the source system or flat files to collect them again; we get them from the PSA.
Reconstruction of a cube is a more common requirement and is required when:
1) There is a change to the structure of a cube: deletion of characteristics/key figures, or new characteristics/key figures that can be derived from existing ones
2) Change to update rules
3) Missing master data and request has been manually turned green - once master data has been maintained and loaded the request(s) should be reconstructed.

KEY POINTS TO REMEMBER WHILE GOING FOR RECONSTRUCTION:
• Reconstruction must occur during posting-free periods.
• Users must be locked.
• Terminate all scheduled jobs that affect application.
• Deactivate the start of RSBWV3nn update report.

WHY DO ERRORS OCCUR IN RECONSTRUCTION?
Errors occur only due to document postings made during the reconstruction run, which then display incorrect values in BW, because the logic of the before and after images no longer matches.

STEPS FOR RECONSTRUCTIONTransaction Codes:
LBWE  : LO DATA EXTRACTION: CUSTOMIZING COCKPIT
LBWG  : DELETE CONTENTS OF SETUP TABLES
LBWQ  : DELTA QUEUED
SM13   : UPDATE REQUESTS/RECORDS
SMQ1  : CLEAR EXTRACTOR QUEUES
RSA7  : BW DELTA QUEUE MONITOR
SE38/SA38  : DELETE UPDATE LOG

STEPS:
1. Mandatory - User locks:
2. Mandatory - (Reconstruction tables  for application 11 must be empty)
Enter  transaction - LBWG & application = 11 for SD sales documents.
3. Depending on the selected update method, check below queues:
SM13 – serialized or un-serialized V3 update
LBWQ – Delta queued
Start updating the data from the Customizing Cockpit (transaction LBWE) or
start the corresponding application-specific update report RMBWV3nn (nn = application  number) directly  in transaction SE38/SA38 .
4. Enter RSA7 & clear delta queues of  PSA, if it contains data in queue
5. Load delta data from R/3 to BW
6. Start the reconstruction for the desired application.
If you are carrying out a complete reconstruction, delete the contents of the  corresponding data targets in  your BW (cubes and ODS objects).
7. Use Init request (delta initialization with data transfer) or a full upload to load the data  from the reconstruction into BW.
8. Run the RMBWV3nn update report again.

ERRORS ON RECONSTRUCTION:
Below you can see various errors on reconstruction. I have read the SAP Help website and SCN and summarized them here to make the concepts easy to understand.
ERROR 1: After I completed the reconstruction, repeated documents appear. Why?
Solution: The reconstruction programs write data additively into the set-up tables.
If a document is entered twice from the reconstruction, it also appears twice in the set-up table. Therefore, the reconstruction tables may contain the same data from your current reconstruction and from previous reconstruction runs (for example, tests). If this data is loaded into BW, you will usually see multiple values in the queries (exception: Key figures in an ODS object whose update is at “overwrite”).

ERROR 2: Incorrect data in BW, for individual documents for a period of reconstruction run. Why?
Solution: Documents were posted during the reconstruction.
Documents created during the reconstruction run then exist in the reconstruction tables as well as in the update queues. This results in the creation of duplicate data in BW.
Example: Document 4711, quantity 15
Data in the PSA:
ROCANCEL DOCUMENT QUANTITY
‘ ‘ 4711 15 delta, new record
‘ ‘ 4711 15 reconstruction
Query result:
4711 30
Documents that are changed during the reconstruction run display incorrect values in BW because the logic of the before and after images no longer match.
Example: Document 4712, quantity 10, is changed to 12.
Data in the PSA:
ROCANCEL DOCUMENT QUANTITY
X 4712 10- delta, before image
‘ ‘ 4712 12 delta, after image
‘ ‘ 4712 12 reconstruction
Query result:
4712 14

ERROR 3: After you perform the reconstruction and restart the update, you find duplicate documents in BW.
Solution: The reconstruction ignores the data in the update queues. A newly-created document is in the update queue awaiting transmission into the delta queue. However, the reconstruction also processes this document because its data is already in the document tables. Therefore, you can use the delta initialization or full upload to load the same document from the reconstruction and with the first delta after the reconstruction into BW.
After you perform the reconstruction and restart the update, you find duplicate documents in BW.
Solution: The same as point 2; there, the document is in the update queue, here, it is in the delta queue. The reconstruction also ignores data in the delta queues. An updated document is in the delta queue awaiting transmission into BW. However, the reconstruction processes this document because its data is already contained in the document tables. Therefore, you can use the delta initialization or full upload to load the same document from the reconstruction and with the first delta after the reconstruction into BW.

ERROR 4:Document data from time of the delta initialization request is missing from BW.
Solution: The RMBWV3nn update report was not deactivated. As a result, data from the update queue LBWQ or SM13 can be read while the data of the initialization request is being uploaded. However, since no delta queue (yet) exists in RSA7, there is no target for this data and it is lost.

ROLLUP: Rollup fills the aggregates of an InfoCube whenever new data is loaded.

LINE ITEM DIMENSION/DEGENERATE DIMENSION
If the size of a dimension of a cube is more than 20% of the fact table, then we define that dimension as a Line Item Dimension.
Ex: Sales Document Number in one dimension of a Sales Cube.
The Sales Cube has the sales document number, and usually the dimension size and the fact table size will be almost the same. But when you add the overhead of lookups for DIMIDs/SIDs the performance will be very slow.
By flagging it as a Line Item Dimension, the system puts the SID in the fact table instead of the DIMID for the sales document number.
This avoids one lookup into the dimension table; the dimension table is not created in this case. The advantage is that you not only save space because the dimension table is not created, but a join is made between two tables, the Fact and SID tables (diagram 3), instead of three tables, the Fact, Dimension and SID tables (diagram 2).

Below image is for illustration purpose only( ESS Extended Star Schema)


Dimension Table, DIMID=Primary Key
Fact Table, DIMID-Foreign Key
Dimension Table Links Fact Table And A Group Of Similar Characteristics
Each Dimension Table Has One DIMID & 248 Characteristics In Each Row

LINE ITEM DIMENSION ADVANTAGES:
Saves space by not creating Dimension Table

LINE ITEM DIMENSION DISADVANTAGES:• Once a dimension is flagged as Line Item, you cannot add additional characteristics.
• Only one characteristic is allowed per Line Item Dimension & for (F4) help, the Master Data is displayed, which takes more time.

HIGH CARDINALITY:If the Dimension exceeds 10% of the size of the fact table, then you make this as High Cardinality Dimension. High Cardinality Dimension is one that has several potential occurrences. when you flag a dimension as High Cardinality, the database is adjusted accordingly.
A B-TREE index is used rather than a BITMAP index because, in general, if the cardinality is expected to exceed one fifth of that of the fact table, it is advisable to check this flag.
NOTE: SAP converts from BITMAP index to BTREE index if we select dimension as High Cardinality.

 

Errors and Solutions


                                    Errors and Solutions

 

  1. Infopackage CS_ORDER_ATTR_CREATION_DATE failed due to the 797 duplicate record found. 235 recordings used in table /BI0/PCS_ORDER.

Sol: Activated infoobject 0CS_ORDER then pushed data from PSA

 

  2. CO_OM_WBS_1_COMMITMENT_UCA1 data load failed due to Record 7393: Value 7107175.10 not contained in the master data table for 0WBS_ELEMT

Sol: Pulled the required master data from R/3, and pushed the data from PSA. It was completed successfully.

  3. Infopackage CS_ORDER_ATTR_CREATION_DATE failed due to 37 duplicate record found. 1023 recordings used in table /BI0/PCS_ORDER

Sol: After activating the masterdata for 0CS_ORDER,data was pushed from PSA.

4)

Infopackage ZBWVBAKVBAP_ATTR_CREATION_DATE failed due to the attributes for characteristic ZS_DOC are locked by a change run.

 

Solution:

Pushed data from PSA then activated characteristic ZS_DOC.

 

5)

Infopackage:0MATERIAL_ATTR_DELTA failed due to the attributes for characteristic 0MATERIAL are locked by a change run.

 

Solution:

After getting released from the lock data was pushed from PSA

6)

Infopackage:8Z11VAIT2_DELTA_240404 failed due to record 1726 :No SID found for value '0070547593000010 ' of characteristic ZS_DOC.

 

Solution:

After loading the masterdata for ZS_DOC the process was repeated.

7)

Background job:MD_ACT_DLY_4:00_CSORD_PCTR_ZSDOC failed due to Lock NOT set for: Change run for hierarchies and attributes.

 

Solution:

Repeated the job and completed successfully

8)

0DOC_NUMBER failed in activation due to lock NOT set for: Change run for hierarchies and attributes.

 

Solution:

After getting released from the lock the process was repeated.

9)

8FIAR03_DELTA data load failed due to Record 3046 :Value '100027782x ' (hex. ' ') of characteristic 0AC_DOC_NO contains invalid character.

 

Solution:

Corrected the data in the PSA with '100027782X ' and pushed the data from PSA. It was completed successfully.
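
A minimal sketch of how this class of error can be prevented at load time rather than by manual PSA correction, assuming a BW 7.x transformation field routine on the document number (the source field name BELNR is only illustrative):

* Field routine body: copy the source field and force it to upper case, so that values
* such as '100027782x' arrive as '100027782X' and pass the characteristic's character check.
    RESULT = SOURCE_FIELDS-belnr.
    TRANSLATE RESULT TO UPPER CASE.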

10)

CO_OM_WBS_1_COMMITMENT_DX9X_UCA1_COPY data load failed due to Record 7622 :Value 7106979.10 not contained in the master data table for 0WBS_ELEMT

 

Solution:

Loaded the required WBS elements from R/3, pushed the data from PSA. It was completed successfully.

11)

FUNCT_LOC_ATTR data load failed due to Lock NOT set for: Loading master data attributes.

 

Solution:

After the lock release, pushed the data from PSA; it was completed successfully.

12)

ZBWEQUZ_HIERARCHY_CREATION_DATE data load failed due to 1 duplicate record found. 13613 recordings used in table /BI0/XEQUIPMENT

 

Solution:

Activated the 0Equipment master data and pushed the data from PSA.

13)

Infopackage BWSOITMD failed due to the attributes for characteristic ZS_ORDITM are locked by a change run

 

Solution:

Pushed data from PSA and then activated characteristic ZS_ORDITM.

14)

Infopackage FUNCT_LOC_ATTR failed due to the attributes for characteristic 0FUNCT_LOC are locked by a change run

 

Solution:

Pushed data from PSA then activated characteristic 0FUNCT_LOC.

15)

Infopackage:8Z12VCIT1_DELTA failed due to record 710 :No SID found for value '0111099356000010 ' of characteristic ZS_DOC.

 

Solution:

After loading the master data for ZS_DOC the process was repeated.

16)

CO_OM_WBS_1_COMMITMENT_UCA1 data load failed due to Record 8059 :Value 7107382.10 not contained in the master data table for 0WBS_ELEMT

 

Solution:

Loaded the 0WBS_ELEMT 7107382.10 from R/3 to BW and pushed the data from PSA. It was completed successfully.

17)

0VENDOR_ATTR data load failed due to the attributes for characteristic 0VENDOR are locked by a change run.

 

Solution:

After the lock release pushed the data from PSA. It was completed successfully.

18)

FUNCT_LOC_ATTR data load failed due to the attributes for characteristic 0FUNCT_LOC are locked by a change run

 

Solution:

After the lock release, pushed the data from PSA; it was completed successfully.

19)

Infopackage BWSOITMD failed due to the attributes for characteristic ZS_ORDITM are locked by a change run.

 

Solution:

Pushed data from PSA then activated characteristic ZS_ORDITM.

20)

Infopackage FUNCT_LOC_ATTR failed due to the attributes for characteristic 0FUNCT_LOC are locked by a change run.

 

Solution:

Pushed data from PSA and then activated characteristic 0FUNCT_LOC.

21)

BWDX9XCOPS01_UCA1_COPY data load failed due to Record 15134 :Value 0001988050000011 not contained in the master data table for ZS_ORDITM.

 

Solution:

Uploaded the dependent master data from R/3 and pushed the data from PSA. It was completed successfully on 16th Sep '06.

22)

ZZBWCOVP_DX9X_CP_UCA1_COPY data load failed due to Record 3326 :Value 0007828308000020 not contained in the master data table for ZS_ORDITM.

 

Solution:

Uploaded the dependent master data from R/3 and pushed the data from PSA. It was completed successfully on 16th Sep '06.

23)

ZZBWCOVPD_DX9X_UCA1_COPY data load failed.

 

Solution:

Pushed the data from PSA, and it was completed successfully on 16th Sep '06.

24)

ZZBWCOVPD_DX9X_UCA1_COPY data load failed due to Job cancelled in MP1

 

Solution:

The load was re-triggered.

25)

8Z12VCIT1_DELTA data load failed due to Record 1578 :No SID found for value '0001990147000011 ' of characteristic ZS_DOC.

 

Solution:

Maintained the master data for ZS_DOC and pushed the data from PSA; it was completed successfully.

26)

Infopackage FUNCT_LOC_ATTR failed due to the attributes for characteristic 0FUNCT_LOC are locked by a change run.

 

Solution:

Pushed data from PSA and then activated characteristic 0FUNCT_LOC.

27)

Infopackage 8Z11VAIT2_DELTA_240404 failed due to the Record 3038 :No SID found for value '0070553024000010 ' of characteristic ZS_DOC.

 

Solution:

Created masterdata and then pushed transactional data from PSA.

28)

Infopackage 0SALESDEAL_ATTR failed due to the attributes for characteristic 0SALESDEAL are locked by a change run

 

Solution:

Pushed data from PSA and then activated characteristic 0SALESDEAL.

29)

Infopackage Backup InfoCube ZPSC04BU_PLAN_DX9X_UCA1_COPY failed due to the Error occurred in the data selection.

 

Solution:

Retriggered the IP Backup Info Cube ZPSC04BU_PLAN_DX9X_UCA1_COPY; it ran successfully.

 

30)

Infopackage:BWDX9XCOPS01_UCA1_COPY failed due to record 22190 :Value 0007824107000051 not contained in the master data table for ZS_ORDITM.

 

Solution:

After loading the masterdata for ZS_ORDITM,data was pushed from PSA.

31)

Infopackage 0EC_PCA_1_SE90_CVP failed due to job not triggered in MP1.

 

Solution:

Job retriggered and the load was successful.

32)

Infopackages:ZBWQMEL_ATTR_CHANGE_DATE and ZBWOP22_VP failed due to job terminated in MP1 because of ABAP/4 processor: DBIF_SETG_SQL_ERROR.

 

Solution:

The job was retriggered and the load was successful.

33)

Infopackage ZBWVBAKVBAP_ATTR_CREATION_DATE failed due to the Job termination in source system.

 

Solution:

Retriggered the Infopackage ZBWVBAKVBAP_ATTR_CREATION_DATE; it then ran successfully.

34)

Infopackage CS_ORDER_ATTR_CHANGE_DATE failed due to the 293 duplicate record found. 2784 recordings used in table /BI0/XCS_ORDER.

 

Solution:

Activated table /BI0/XCS_ORDER, then pushed data from PSA.

35)

Infopackage 0EQUIPMENT_ATTR_CREATION_TODAY failed due to The attributes for characteristic 0EQUIPMENT are locked by a change run

 

Solution:

Pushed data from PSA then activated characteristic 0EQUIPMENT.

36)


Infopackage ZBWEQUZ_HIERARCHY_CREATION_DATE failed due to the 1 duplicate record found. 17266 recordings used in table /BI0/XEQUIPMENT

 

Solution:

Activated infoobject 0EQUIPMENT then pushed data from PSA.
