SCN Blog List: SAP Business Warehouse

Currency Conversion from JPY -> USD / JPY -> KRW / KRW -> USD


I was pondering the solution for currency conversion from JPY to USD or JPY to KRW.

 

The issue we were facing relates to currencies that do not have decimal places in their figures, such as JPY and KRW.

 

Since JPY and KRW do not have decimal places, I use JPY in this example to illustrate.

 

Example: let's say the amount to be converted to USD is 1234 JPY.

 

We had defined a currency translation type and a function module for the currency conversion, and everything was working fine and giving correct results.

 

The only issue we found was with currencies that have no decimal places, in this case JPY.

 

I debugged to find exactly where the issue was, and found that while extracting the data from the source the value 1234 is taken up as 12.34 JPY. The PSA therefore contains this incorrect value (12.34 JPY instead of 1234 JPY), and the incorrect values in the PSA lead to incorrect results after the currency conversion.

 

I wrote a routine to handle this requirement for currencies that do not have decimal places; a sketch of the idea follows below.
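A minimal field-routine sketch of the idea, assuming the amount arrives shifted by two decimal places and that the source currency and amount fields are called waerk and netwr (these names, like the routine itself, are illustrative and not from the original post). Table TCURX holds the number of decimal places per currency:

* Field routine sketch: restore amounts for currencies without decimals.
* Assumption: SOURCE_FIELDS-netwr arrived shifted (e.g. 12.34 instead of 1234 JPY).
  DATA lv_currdec TYPE tcurx-currdec.

  SELECT SINGLE currdec FROM tcurx INTO lv_currdec
         WHERE currkey = SOURCE_FIELDS-waerk.
  IF sy-subrc = 0 AND lv_currdec = 0.
*   Currency has no decimal places (JPY, KRW, ...): shift back by two places
    RESULT = SOURCE_FIELDS-netwr * 100.
  ELSE.
    RESULT = SOURCE_FIELDS-netwr.
  ENDIF.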

 

I hope the information given here is helpful.

 

Cheers :-)


How to Solve a Process Chain Error When the Repeat Option Does Not Work


Sometimes a process chain fails at a particular step and gets stuck there, either because Repeat does not help or because the Repeat option is not available for the failed process.

 

The steps described below help to sort out this kind of problem.

 

(1) The process chain fails at the step below (marked in red) while loading data into an InfoProvider.

 

Err1.PNG

 

(2) Right-click --> Display Messages.

 

err2.PNG

 

(3) Click on the Chain tab and copy the Variant and Instance.

 

err3.PNG

 

(4) Go to SE11 and enter the database table name RSPCPROCESSLOG.

 

err4.PNG

 

Enter the Variant and Instance copied in step 3.

 

err5.PNG

 

Execute this and note the values of LOG_ID, TYPE, BATCHDATE and BATCHTIME.
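If you prefer not to browse the table manually, a small ABAP sketch can read the same entry; the column names VARIANTE and INSTANCE for the variant and instance are my assumption here, so verify them in SE11 first:

* Sketch: read the log entry of the failed process from RSPCPROCESSLOG.
* Assumption: the variant/instance columns are called VARIANTE and INSTANCE.
DATA ls_log TYPE rspcprocesslog.

SELECT SINGLE * FROM rspcprocesslog INTO ls_log
       WHERE variante = 'YOUR_VARIANT'      " value copied in step 3
         AND instance = 'YOUR_INSTANCE'.
IF sy-subrc = 0.
  WRITE: / ls_log-log_id, ls_log-type, ls_log-batchdate, ls_log-batchtime.
ENDIF.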

 

(5) Go to SE38, enter the program name RSPC_PROCESS_FINISH and execute it.

 

err7.PNG

 

Enter the values obtained in steps 3 and 4, set STATE to 'G' (successfully completed) and execute.

err6.PNG

 

Once the program has been executed with all mandatory fields, it sets the failed process to green and the process chain continues with the next step, so subsequent data loads can run smoothly.

 

Thanks & Regards,

-Harshil J Joshi

0 records problem in Master Data load


On a rainy Friday afternoon we faced a strange problem while loading master data texts in an SAP APO system (EHP2 for SAP SCM 7.0).

 

Problem: There is one generic DataSource for master data coming from the BW side. We have to load its data into different InfoObjects, such as plant, division, sales organisation etc., by putting a selection for the particular InfoObject in the InfoPackage, e.g. Division.

 

Info Pack for Division.PNG

 

After scheduling the InfoPackage we could see the 76 division records at PSA level. We then put the same selection at DTP level.

 

DTP 0 records.PNG

                     

After executing the DTP we were getting 0 records in the manage screen, instead of the 76 division records available at PSA level.

 

Manage 0 records.PNG

 

 

Solution:


1. Check how the data looks at PSA level.

 

Data in PSA.JPG

                         

2. At DTP level, the filter must match the data exactly as it appears in the PSA, not the InfoPackage selection (which was in all upper-case letters).

 

DTP for Division.PNG

                 

3. After executing the DTP we got the 76 division records at the InfoObject manage level.

 

Manage records added.PNG

 

The solution is very simple, but sometimes simple things do wonders.

 

Note: When loading master data, check 'Handle Duplicate Record Keys' in the Update tab of the DTP, and activate the master data after loading.

 

setting for master.PNG

 

Thanks for reading..

the importance of a good spec...


It's been ages since I have received a decent spec and today was another example of how that can lead to (serious) issues and most of all a waste of time (and money, as time = money). I've been working with SAP software since March 1997, first as a developer and since 2000 mainly as a BW consultant. Back in the 90s specs were still common, but that slowly disappeared in the "nillies" and seems to be completely gone since the 10s. Maybe it's just on the projects I work on, but I think it's a more widespread issue... especially when looking at some of the posts here, I sometimes think other people are even worse off than me.

 

What is the point of a spec?

For starters, to have a clear description of the "problem / issue / request" at hand, but also to specify what exactly is expected from you (the consultant / project member / developer). No rocket science, right? So, why is it so hard to write a (good) spec? On some projects I have worked on, it wasn't even clear who should write specs. Really? Yes, really! It should be quite obvious: he/she who requests a certain report (to keep it in the BI world) should be able to specify what it is they want. Next, a "functional" consultant should "translate" that into SAP/BW language, so that we understand it. This means a spec should be written by a combination of one or more business people and one or more functional consultants. In my experience, that hardly ever happens... "business" usually has no time (really?) and they want things yesterday. The functional consultants quite often claim they don't know BW (even though you explain it to them several times and, let's face it, it's not that hard to understand the basic principles) and if they put something on paper, it's usually so "vague" that you could probably deliver a zillion different reports/setups to get "something" that is described in the "spec".

So, how do we know what to do? Well, you have a meeting (if you're lucky... usually it's multiple meetings) during which some people vaguely describe what they want and try to make it sound really simple and straightforward. In a lot of cases they also wonder why they don't have such a report already (those are my favorites... not). During the follow-up (or status) meetings, the (non-)specs have a tendency to "change".

 

What's the impact of not having a (decent) spec?

Well, in the approach described above, you start with a basic setup and you try to "deliver" on time. Over the course of the "project", certain definitions (or even concepts) change, so you start bending & stretching your model to accommodate these changes. This works in 90% of the cases... what if you're in one of those 10% though? One little change in the "requirements" literally requires a completely new setup, because the "tiny change" for the business means a totally different logic. Today I was faced with such a "small" change that we actually implemented "between the soup and the potatoes" (a Dutch expression literally translated into English, meaning there was not a lot of time to do it) a couple of weeks (maybe even months) ago, but "suddenly" someone noticed that the report was no longer giving correct results. After more than half a (working) day of debugging, I came to the conclusion that the entire "logic" that was once true has now become false. Is there time to "rethink" this? No... so, another workaround is in the making.

So, by not really having a spec, we wasted a couple of weeks (I didn't work full-time on this as I was assigned to multiple clients) and now we're back to the "drawing board".

 

I really feel that this is what I have been doing for the past 4 or 5 years... and this can't be good, right? Surely I can deal with it and I am capable of changing logics completely, but that's no longer "fun" (or a challenge)... rather it's becoming a serious drag. Just wondering, am I the only one noticing this trend or are there soulmates out there?

10 Best Modeling Strategies in SAP BW – Part 1


I have divided the modeling strategies into different categories. In Part 1, I start with the modeling strategy for InfoCubes, covering two key points.

 

A. InfoCube

 

1. Dimension vs. Fact table Ratio

 

As recommended by SAP, and as most SAP BW consultants are aware, the general rule is that a dimension table should stay within about 15% of the fact table size. This is recommended for good performance.

But while designing a data model, how do we determine the dimension design so as to maintain a healthy dimension vs. fact table ratio?

 

Below are some small tips that help achieve this.

 

- Keep the dimensions small.

- Check the InfoCube design with sample data using transaction RSRV:

    RSRV → All Elementary Tests → Database.

S1.png

Highlight the Database Information about InfoProvider Tables, right-click (context menu), and choose Select Test.

 

Based on the test result, adjust your dimension design.

 

Note: If you would like to add sample records without loading from ECC or a flat file, you can use the program CUBE_SAMPLE_CREATE. It provides an ALV grid for entering your sample records.

Caution: Use it ONLY in a development or test environment.

 

 

2. Navigational Attribute vs. Dimension InfoObject

 

A great challenge when designing a data model is deciding whether to store data as a characteristic in a dimension table (and therefore in the InfoCube), or as an attribute in a master data table and use it as a navigational attribute.

 

Navigational attributes, compared with simple dimension characteristics, always introduce a performance penalty in query execution. It is advisable to avoid activating a large number of navigational attributes where possible, and to keep them only where there is a business requirement.

 

Reason:

 

- The fact table contains one foreign key column per InfoCube dimension and one column per key figure of the InfoCube.

- The dimension table consists of a dimension ID (DIMID) column, which constitutes the primary key of the dimension, plus one column per characteristic in that dimension. Those columns hold the SID (surrogate ID) values of the corresponding characteristics.

- In the third layer, there are the SID tables of the characteristics. This can be a standard S-table, containing only the relationship between SID and characteristic key, an X-table (SID-key relationship plus SID columns per time-independent navigational attribute), or a Y-table (SID-key relationship, timestamp, SID columns per time-dependent navigational attribute).

- In the fourth layer, there are standard S-tables for navigational attributes.

Example:

 

Material Group as Navigational Attribute of Material

In this case, during query execution, data is read down to the fourth layer, as shown in the diagram below.

 

S2.png


S3.png  

 

I would like to hear your views and suggestions.

 

References: SAP BI Performance & Administration

C_TBW55_73 - Modelling and Data Management with SAP BW 7.3 & SAP BI 4.0 Certification Details


I have recently completed my Associate level certification in SAP BW 7.3 & SAP BI 4.0 with 97% and would like to share my experience with all those interested in getting certified.

 

Disclaimer: Each exam is different and the weightage, complexity quoted here are for general understanding for those aspiring to become certified and it may vary in the actual exam. These are just my thoughts and there is no guarantee that you will have a similar exam.

 

C_TBW55_73 is the certification offered by SAP to test the knowledge in the area of Business Intelligence. It verifies knowledge in modelling as well as data management with SAP BW 7.3 and BI 4.0.

 

The duration of the exam is 180 minutes and the minimum score required to pass is 68% (it may vary). It is divided into 10 sections spanning BW and BI, with BW carrying the maximum weightage of around 70 – 75% and BI around 25 – 30%. Below is a deep dive into the sections and topics, in descending order of weightage.

 

1)    InfoProviders – 20 – 25%

2)    Data Modelling – 10 – 12%

3)    Reporting Tools – 10 - 12%

4)    Administration & Performance – 10%

5)    Data Flows – 10%

6)    Source Systems – 10%

7)    Fundamentals - 8 – 9%

8)    BI Platform – 7 – 8%

9)    BODS – 6 – 7%

10)  InfoObjects – 5%

 

Out of the 10 sections listed above, the challenging, in-depth questions that test your understanding of the concepts as well as practical (hands-on) experience come from the sections below (in descending order of complexity):

 

  1.    Administration & Performance
  2.    InfoProviders
  3.    Data Modelling
  4.    Data Flows

 

Books required for these sections: BW310 (Units 3, 4, 6, 8 and 9) & BW330 (Units 5 to 9). Practical knowledge is highly recommended.

 

Most of the questions are scenario based and require an understanding of the concepts, not just definitions. The main focus is on InfoCubes, DSOs, MultiProviders, InfoSets and master data characteristics. It is recommended to have a thorough understanding of every detail to ensure success in these areas, as they cover at least 50% of the overall weightage.

 

Easy and straightforward questions come from the sections listed below:

 

  1.    BI Platform
  2.    BODS
  3.    Reporting Tools
  4.    Fundamentals
  5.   Source Systems
  6.    InfoObjects

 

Books required for these sections: BO100, BOE315, BODS, BW310 (Units 1, 2, 5 and 7), BW330 (Units 1, 2, 4, 10 & 11) and BW350 (Units 1 to 7 and 10). Questions in these sections verify basic understanding, such as use cases and connectivity options for the various BOBJ reporting tools, and are straightforward.

 

Overall, here is the quick summary:

 

  1. The level of difficulty greatly depends on one's preparation efforts
  2. For those new to SAP BW, read every detail in the books BW310 & BW330, do not skip any topics and practise the exercises
  3. For those already working on earlier versions of SAP BW and aiming to get certified in the latest version: if you don't have the latest books, refer to help.sap.com for the latest features in SAP BW 7.3. It is a great source of learning.
  4. Visit SCN for examples and detailed explanations of topics where you feel you need more detail

 

Additional resources:

 

https://training.sap.com/v2/certification/c_tbw55_73-sap-certified-application-associate--modeling-and-data-management-with-sap-bw-73--sap-bi-40-g/topic-areas/

https://training.sap.com/v2/uploads/C_TBW55_73_sample_items.pdf

http://help.sap.com/saphelp_nw73/helpdata/en/a3/fe1140d72dc442e10000000a1550b0/frameset.htm

http://help.sap.com/bobip40

http://help.sap.com/businessobject/product_guides/boexir4/en/xi4_bip_admin_en.pdf

 

I hope this helps; feel free to contact me if you need more information.

 

Enjoy learning and Good luck with your Certification!

 

Hema

The Benefits of BW on HANA, My Perception.


I spent some time compiling my own list of the benefits of running BW on HANA, instead of a traditional RDBMS under BW, for a customer. I thought it might also make a decent blog topic. There is no particular order in terms of importance etc.

 

Benefits of using the SAP HANA Database for BW

 

  • BW functions performed within SAP HANA benefit from its in-memory and calculation engines, accelerating BW data access.
  • Existing BW client tools, like SAP Business Explorer, are fully supported on BW powered by SAP HANA. Direct clients, like Microsoft Excel and SAP's Business Objects Business Intelligence (BI) tools, are also supported by BW on HANA.
  • With BW in-memory-optimized objects, complex analysis and planning scenarios with unpredictable query types, high data volume, high query frequency, and complex calculations can be processed with a high degree of efficiency.
  • Loading SAP HANA-optimized BW objects can also be done more efficiently.
  • The SAP HANA database replaces both any previous database and SAP NetWeaver BWA, reducing infrastructure costs. Instead of both database administration tools and additional SAP NetWeaver BWA administration tools, the SAP HANA database requires just a single set of administration tools for monitoring, backup and restore, and other administrative tasks.
  • Data modeling is simplified. Using in-memory-optimized objects, you do not need to load a BWA index, for example. In addition, the architecture of the HANA database allows you to delete characteristics from an InfoCube that still contains data.
  • With its high compression rate, the column-based HANA data store requires less data to be materialized.
  • BWA is not required anymore.
  • Aggregates are not required anymore. You no longer need processes for creating and filling aggregates.
  • You no longer need processes for creating and destroying Indexes on Infocubes.
  • Obsolete Process Types: The following BW process types are not needed if you use the SAP HANA database to support BW:
    • Initial Filling of New Aggregates
    • Update Explorer Properties of BW Objects
    • Rolling Up Filled Aggregates/BWA Indexes
    • Adjust Time-Dependent Aggregates
    • Construct Database Statistics
    • Build Index
    • Delete Index
  • Note: If you are using the SAP HANA database, it is no longer possible to select these process types in process chain maintenance. Existing process chains do not have to be modified. The relevant process variants do not run any tasks in the chains and do not terminate with errors.
  • SAP HANA-optimized objects help you to achieve significantly better performance in load and activation processes. Up to 80% faster for loading in HANA Optimized Cubes.
  • Uses a Delta Merge for new data. Delta storage is optimized for write access. Main storage is optimized for read access.

 

Benefits of BW Analytic Engine on HANA

  • Calculation Scenarios: The system automatically generates, updates and deletes the logical indexes associated with InfoProviders.
  • There is no need to create and fill indexes.
  • No need for aggregates.
  • The system creates logical indexes for the following InfoProviders:
    • Standard InfoCube
    • SAP HANA-optimized InfoCube
    • InfoObjects as InfoProviders
    • Analytic Index
    • CompositeProvider
  • With Calculation Scenarios, complex calculations for various OLAP functions are performed directly in the database. These functions include:
    • Hierarchies
    • Top N and Bottom N Queries
    • MultiProviders
    • Selected exception aggregation including required currency translation.
  • BW workspaces: A BW workspace is a special area, where new models can be created based on central data from  the BW system and local data. Workspaces can be managed and controlled by a central IT department and used by local special departments. This means  you can quickly adjust to new and changing requirements. The aim of workspaces is to bridge the gap between the central requirements and the flexibility required locally. Central architected data marts from the IT department can be combined, in a workspace, with data marts from the specialized departments.
  • Analytic Indexes and Transient Providers: You can create a TransientProvider by publishing SAP HANA models in the BW system. The SAP HANA models published in the BW system are saved as an analytic index. This represents a view of the data in the SAP HANA model. A TransientProvider is generated on this analytic index. A TransientProvider based on an SAP HANA model is suitable for ad hoc data or scenarios that change frequently. If the SAP HANA model is changed, the analytic index is adjusted automatically at runtime. HANA models are published to analytic indexes in transaction RSDD_HM_PUBLISH. Analytic views and calculation views are available as SAP HANA models.
  • Composite Provider: These TransientProviders can then be linked to other BW InfoProviders in a CompositeProvider. This enables you to combine ad-hoc data with consolidated data from the BW system and also use the OLAP functions of the BW system for analysis. If the SAP HANA model is changed, the analytic index is adjusted automatically at runtime. Analytic views and calculation views are available as SAP HANA models.
  • VirtualProvider: You can create a VirtualProvider based on a SAP HANA model if you want to use this model in the BW system. Analytic views and calculation views are available as SAP HANA models. Unlike the TransientProvider based on a SAP HANA model, the VirtualProvider based on a SAP HANA model is suitable for stable, long-term scenarios. This VirtualProvider can be used in a MultiProvider. In addition, navigation attributes can be used. To use navigation attributes, you can select InfoObjects with master data access Standard or SAP HANA Attribute View.

ITAB_DUPLICATE_KEY Dump error during Master Data loading


Hey All,


Have you ever come across this error message and wondered what to do? I certainly did.


During data loading to InfoObjects, the DTP fails with the ABAP dump ITAB_DUPLICATE_KEY and the short text "A row with the same key already exists".

 


This happens when a BW client has many different source systems that get added and removed now and then, so unused objects remain, creating duplicate entries in table RSISOSMAP.


Identify the unused objects:

Go to SE38 and run report RSAR_RSISOSMAP_REPAIR.

Get the details from there.

 


Then go to SE16, display table RSISOSMAP, and delete the unused entries.

Another reason might be that the attribute change run (ACR) has not run before the next update; ensure the aggregates are maintained:

 


Go to SE38, run report RSDDS_AGGREGATES_MAINTAIN, enter the InfoObject name and execute. Then reload; it should work.

 

 

All your comments are most welcome and valued. Correct me if I'm wrong.


Semantic Groups in DTP


Hi All,

 

In this blog, I try to explain the use of the semantic groups option in the DTP. You will find this option on the Extraction tab of the DTP. Below is a screenshot showing it.

 

Capture.PNG

Basically semantic groups are used for error handling. Let's take an example.

 

Suppose I have a DSO with one record:

 

EmpID  Location  Salary

101      PUN         40000

 

Now I run another load to the DSO.

 

It is as follows:

 

EmpID  Location  Salary

101      PUN         50000

101      PUN         60000

 

Salary 40000 is changed to 50000 and then to 60000.

 

When I run the DTP, suppose the first record (101 PUN 50000) has an error: it will not be loaded to the DSO but goes to the error stack, while the next record with salary 60000 goes to the DSO and overwrites salary 40000. After correcting the error in the error stack and running the error DTP, the record with salary 50000 is loaded and overwrites the previous record. So finally the record in the DSO has salary 50000 and not 60000, which is incorrect.

 

This is where semantic groups come in.

We select the key for grouping, as you can see in the screenshot; all records are then grouped according to this key.

If an error occurs, the system also moves the subsequent records with the same key combination to the error stack, and after the erroneous record has been corrected all of these records are loaded to the DSO in the original order. Thus, we get correct data.

Best ABAP coding practices for BW/BI consultants


This blog describes basic ABAP coding best practices that are helpful for BW/BI consultants.


I have written it from a start routine, end routine and field routine perspective.


Assumption: DB_tab is a database table with fields field1, field2, field3, field4 and field5; itab is an internal table with fields field1, field2, field3 and field4.

 

1) Begin the end routine with


IF RESULT_PACKAGE[] IS NOT INITIAL.


This is useful because, if the result package is empty and there is a statement like

SELECT field1 field2 FROM DB_tab INTO TABLE itab FOR ALL ENTRIES IN RESULT_PACKAGE

  WHERE field3 = RESULT_PACKAGE-field3 AND field4 = RESULT_PACKAGE-field4.

 

then the statement still hits the database: with an empty FOR ALL ENTRIES table the conditions on field3/field4 are effectively dropped and far more rows than intended are fetched into itab, which is undesirable most of the time. By checking this condition first, we avoid this fruitless hit on DB_tab.

 

The same applies to SOURCE_PACKAGE in start routines.
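Putting the two pieces together, a minimal guarded version of the statement looks like this (DB_tab, itab and the field names are the illustrative names from the assumption above):

IF RESULT_PACKAGE[] IS NOT INITIAL.
* Only hit the database when there is actually something to look up
  SELECT field1 field2 FROM DB_tab INTO TABLE itab
    FOR ALL ENTRIES IN RESULT_PACKAGE
    WHERE field3 = RESULT_PACKAGE-field3
      AND field4 = RESULT_PACKAGE-field4.
ENDIF.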


2) Do not use SELECT *


  Copy only the required fields (columns) from the database table into the internal table; do not copy the whole table row unless it is necessary.

 

3) It is better to avoid INTO CORRESPONDING FIELDS OF

  Consider the example below:


3.a SELECT field2 field3 field1 FROM DB_tab INTO CORRESPONDING FIELDS OF TABLE itab WHERE ...

 

Instead, use

 


3.b SELECT field1 field2 field3 FROM DB_tab INTO TABLE itab WHERE ...

 

This means: define itab's columns in the same order as in the database table, and fill itab in that same order.

In the examples above, the performance of 3.b is much better than that of 3.a.


4) Always use FOR ALL ENTRIES IN RESULT_PACKAGE


  Consider the example below:


SELECT field1 field2 FROM DB_tab INTO TABLE itab FOR ALL ENTRIES IN RESULT_PACKAGE

  WHERE field3 = RESULT_PACKAGE-field3

     AND field4 = RESULT_PACKAGE-field4.


This brings into itab only those records from DB_tab whose field3 and field4 values are present in the result package.


This means that if the result package contains only one entry, with field3 = 6000 and field4 = NY, then the SELECT statement above brings only the DB_tab records with field3 = 6000 and field4 = NY into itab.


5) Use a WHERE condition on LOOP statements wherever possible.


Suppose you want to perform a specific operation only for field4 = 'LA'.


LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS> WHERE field4 = 'LA'.

" code logic

ENDLOOP.


This avoids useless loop passes for the remaining values of field4.


6) Use SORT carefully.


If you have to compare field3 and field4, then sort itab by those fields only.

Use only those fields in the binary search and in DELETE ADJACENT DUPLICATES.


  i.e.  SORT itab BY field3 field4.


  Then use READ TABLE itab INTO wa WITH KEY field3 = <result_fields>-field3

                                            field4 = <result_fields>-field4 BINARY SEARCH.

or DELETE ADJACENT DUPLICATES FROM itab COMPARING field3 field4.


Also, avoid sorting RESULT_PACKAGE itself, as it takes considerable time to sort millions of records.


7) Don't forget to check SY-SUBRC.


Always check for SY-SUBRC = 0 (success) after SELECT and READ operations to avoid runtime errors or unintended result values.


0 stands for success.

4 stands for failure, i.e. no matching key was found.

8 also stands for failure; with a binary search it means the entry would belong after the last existing row (no further keys available). A short illustration follows below.
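A minimal sketch of the pattern, using the illustrative itab/wa/field names from the assumption above; the assignment inside the IF is just an example of what a routine might do with the result:

* Sketch: always guard the result of a READ (or SELECT) with SY-SUBRC.
READ TABLE itab INTO wa WITH KEY field3 = <result_fields>-field3
                                 field4 = <result_fields>-field4
                        BINARY SEARCH.
IF sy-subrc = 0.
* Entry found: safe to use wa.
  <result_fields>-field1 = wa-field1.
ELSE.
* Not found: set a defined default instead of using stale wa contents.
  CLEAR <result_fields>-field1.
ENDIF.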

Communication of BW OHD delta load detail to third party systems



 

With open hub service APIs like RSB_API_OHS_DEST_GETDETAIL, RSB_API_OHS_DEST_SEND_NOTIFICATION and RSB_API_OHS_DEST_READ_DATA_RAW, the delta loads extracted into the OHD can be communicated to third-party systems. I would call it a kind of push mechanism from BW to the 3rd party, even though the data is, in reality, read by the 3rd party system after the notification is sent to it. But what if we wanted a pull mechanism where the 3rd party can fetch the delta data at its convenience, and what if we wanted to highlight certain read sequences and certain patterns and scenarios in the delta data to the 3rd party system?

 

We can model our own communication mechanism. Here is one approach: how to model a pull mechanism for OHD data in a 3rd party system, and how delta-record patterns and scenarios in the OHD can be exposed to the 3rd party system.

 

Requirements: 

  1. BW data loads are daily deltas. Data records loaded in a particular day's delta into the OHD should be identifiable with a specific identifier in the OHD, so that the 3rd party can pick up the necessary data at its convenience. For example, deltas are loaded to the OHD daily, but the 3rd party wants to fetch them only once a week, once a month, once every 2 weeks, etc.
  2. The 3rd party is a relational database and will therefore overwrite the old status with the most recent status as of the OHD data fetch. Therefore all before-image records in the OHD delta should be identifiable to the 3rd party, so that it can exclude them in its read statement.
  3. If a transaction is deleted in ECC, then all corresponding postings of that transaction have to be filtered out of 3rd party reporting. So the 3rd party needs a deletion indicator for that transaction in the OHD.
  4. Transactional records in the OHD could have undergone multiple changes between the last 3rd party delta fetch and the next delta to the 3rd party. So the changes to the transactional records in the OHD, even if they were loaded to the OHD in different delta requests, should be identifiable with a sequence. The 3rd party can then use this sequence to identify the latest change.
  5. The 3rd party needs the freedom to fetch the same delta more than once if the need arises.

 

Solution:

  1. In the transformation from DSO to OHD, a unique identifier, like a GUID or a unique numeric pointer, is generated and stamped on all records of that particular delta request to the OHD. This unique identifier is then entered into a Z table in the BW system, along with the OHD table name and data load details such as load date, time and status. The 3rd party system is given read and write access to this Z table so that it can identify the delta records it needs to fetch from the OHD (using the unique identifier). The 3rd party system should also write its data fetch status into the Z table to track progress, e.g. 3rd party fetch date and time and a success message. This Z table concept enables the 3rd party to read the deltas as many times as it wishes. (A minimal end-routine sketch follows after this list.)
  2. The before images in a DSO change log have RECORDMODE value ‘X’. If OHD gets delta data from DSO, then bring the field RECORDMODE to OHD as well. Now 3rd party can filter out all records where RECORDMODE is ‘X’ during data fetch from OHD or even later.
  3. If a transaction is deleted, the extractor will send this information in ROCANCEL field of the extractor. Map it to the RECORDMODE field of the DSO. Now a deleted transaction will have RECORDMODE ‘R’ in the DSO change log table. If we take the RECORDMODE field in OHD as well, the 3rd party can identify deleted transactions in OHD with this ‘R’ RECORDMODE value.
  4. We can introduce a timestamp field in the first level DSO and while loading from extractor we can derive this field in transformation with load system timestamp. Then even if a transaction undergoes multiple changes on same day or different days, the sequence and in effect the latest status can be identified with this timestamp. Take this timestamp till OHD. 3rd party system can fetch all delta records relevant for it and then sort on timestamp field descending and pick the latest status alone and ignore others since it is overwrite in 3rd party anyway.
  5. With the help of Z table with OHD load details and delta record GUID details, 3rd party can fetch relevant deltas more than once as well.
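A minimal end-routine sketch of the idea in point 1. The OHD target field /BIC/ZREQGUID, the Z table ZOHD_DELTA_LOG and its columns are hypothetical names used only for illustration; they are not part of the original design:

* End routine sketch: stamp one GUID per delta request on every record
* and log it in a Z table so the 3rd party can pull the data later.
  DATA: lv_guid TYPE sysuuid_c32,
        ls_log  TYPE zohd_delta_log.               " hypothetical Z table

  TRY.
      lv_guid = cl_system_uuid=>create_uuid_c32_static( ).
    CATCH cx_uuid_error.
      RETURN.
  ENDTRY.

  LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
    <RESULT_FIELDS>-/bic/zreqguid = lv_guid.       " hypothetical OHD field
  ENDLOOP.

* Record the identifier with load date/time; the 3rd party reads and
* updates this table to track which requests it has already fetched.
  ls_log-req_guid  = lv_guid.
  ls_log-load_date = sy-datum.
  ls_log-load_time = sy-uzeit.
  INSERT zohd_delta_log FROM ls_log.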

 

Some considerations for this topic:

  1. How to grant accesses to 3rd party to BW tables like the OHD table for read and Z table for read and write? This should not be a problem for closely knit systems. For others, I guess we can also extend the data read APIs to read and write data in BW tables.
  2. How to load from a BW cube to an OHD directly for this requirement? I guess 'before image' records in a delta can still be highlighted to the OHD from a cube if we manage to get the before images identified in the cube as well. The rest of the requirements can be solved in the same way as for a DSO to OHD load.
  3. Can we achieve these with the standard APIs itself? I see that there is a sequence identifier in API ‘Data record (binary) with continuation indicator’. So most of the requirements can be achieved with standard APIs. I have doubts on how to give flexibility to 3rd party to pull data from OHD at its own convenience (may be log the API notification in 3rd party in a table along with load details and build the functionality in 3rd party itself) and how to give flexibility to pull data multiple times or at specific intervals (again store the API notification in 3rd party).

 

I expect there is a variety of communication techniques in use, since sending data out of BW to a 3rd party is such a common topic. Share your technique; it will be interesting to see the ideas of the BW world, and maybe we can discuss the pros and cons of the different approaches.

 

Arunan.

Loading Date of BW InfoProvider in SAP BO


Howdy fellas,

 

Since I found the very nice blog entry from Prabhith, and people may not have activated the BI Administration Cockpit, I decided to quickly post a quick-and-dirty entry while loading some data (link to document: http://scn.sap.com/docs/DOC-48514).

 

One of the most missed features in BI is the loading date of an InfoProvider in SAP BW.

Quite a while before the BI Admin Cockpit was activated in our system, I needed to find a solution.

As Prabhith mentioned, there are quite a few options, so this is how I did it:

 

1. Create a Generic DataSource in your BW System on Table RSBKREQUEST (no delta necessary at least in my case).

1.jpg

2. Create a DSO with the following InfoObjects:

-> ZULADDAT is a date and ZULADTST is the timestamp of the load.

2dso.jpg

3. Create a Transformation

3trfn.jpg

4. Create the routines for converting the timestamps into something we can use (a sketch follows below):

a. Routine for the InfoObject ZULADDAT (convert the timestamp into a date)

4rout.jpg

 

b. Routine for the InfoObject ZULADTST

5routtimstamp.jpg

-> Make sure to set your time zone and daylight saving indicator according to your needs; in my case we are in Switzerland.
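Since the routine screenshots do not carry over into this text, here is a minimal sketch of what such a field routine can look like, assuming the source field holds a UTC timestamp; the source field name (tstmp) and the time zone 'CET' are assumptions to adapt to your system:

* Field routine sketch for ZULADDAT: derive the load date from the timestamp.
  DATA: lv_ts   TYPE timestamp,
        lv_date TYPE d,
        lv_time TYPE t.

  lv_ts = SOURCE_FIELDS-tstmp.                     " assumed source field name
  CONVERT TIME STAMP lv_ts TIME ZONE 'CET'
          INTO DATE lv_date TIME lv_time.
  RESULT = lv_date.        " for ZULADTST, pass the timestamp itself instead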

 

5. Create the DTP with filter:

a. TargetObjectType  CUBE, IOBJA, ISET and ODSO

b. Request Status: 2 and 8

 

6. Create a MultiProvider on top of the DSO and add the fields.

 

-> Create a BEx Query and / or a Universe.

 

7. The result can look like this (we have this on all our Webi reports):

6.jpg

 

That's about it.

 

Hope this helps,

 

Andreas

 

PS: Make sure to run this load after all other InfoProviders have been loaded.



Generic Delta Extraction using Function Module along with currency conversion in source system


Business Scenario:       

1. We need additional fields from the sales partner table (VBPA) for all order line items in the sales item table (VBAP), for sales order line item level reporting. The standard DataSources cannot be used for this, because many InfoProviders (DSOs, cubes) are already built on them, so changing them would involve a lot of effort in terms of time and money, and also the risk that something gets deactivated during transports.

2. In addition, the currency conversion has to be done in the source system, as per client norms.

Note: It is normally recommended to do currency conversions in the BW system.

 

R/3 side: In order to meet the above two requirements we decided to go for generic delta extraction using a function module. We need to make sure that the generic extraction is delta based, since the sales order item table (VBAP) contains all line item level information for orders and it is not practical to extract everything, i.e. to do a full update on a daily basis and maintain that in BW. Here we build the logic using ERDAT (created on) and AEDAT (changed on) of VBAP to extract the order items created or changed since the last BW extraction.

 

Steps for a Delta-Enabled, Function Module Based DataSource

 

1. Create an extract structure including a DLTDATE field in addition to all other required fields. DLTDATE will be used to build the logic for extracting the delta using AEDAT and ERDAT.

             Reason for Addition of a DLTDATE Field in Extract Structure

While configuring delta in the RSO2 screen, the field on which the delta is requested must be a field present in the extract structure. To allow the extractor to provide delta on timestamp, there must be a timestamp field in the extract structure. Hence the timestamp field is added here – it is merely a dummy field created to allow us to use the extractor for delta purposes, as will become clear later.

1.jpg

2. Copy the function group RSAX from SE80; name the new function group ZRSAX_TEST.

3. Copy the function module: deselect all and then select only RSAX_BIW_GET_DATA_SIMPLE; name the copy ZBW_FUNCTION.

4. Go to the Include folder, double-click on LZRSAX_TESTTOP and define the structures, field symbol and internal tables as below.

 

INCLUDE LZRSAX_TESTTOP.

 

* Structure for the Cursor - extraction of data

TYPES: BEGIN OF ty_vbap,

         Vbeln TYPE vbeln,

         Posnr TYPE posnr,

         Netwr TYPE netwr,

         Waerk TYPE waerk,

         Dltdate TYPE dats,

       END OF ty_vbap.

 

* Structure for VBPA to extract PERNR

TYPES: BEGIN OF ty_vbpa,

         Vbeln TYPE vbeln,

         Posnr TYPE posnr,

         Pernr TYPE pernr_d,

       END OF ty_vbpa.

 

* Structure for the final Table

TYPES:  BEGIN OF ty_ord_final,

           Vbeln TYPE vbeln,

           Posnr TYPE posnr,

           Pernr TYPE pernr,

           Dltdate TYPE datum,

           Netwr   TYPE netwr,

           Waerk   TYPE waerk,

           Netwr_loc_val TYPE WERTV8,

           Loc_curr       TYPE waers,

           Netwr_rep_val TYPE WERTV8,

           Rep_curr       TYPE waers,

         END OF ty_ord_final.

 

* Internal table

DATA:    t_vbap TYPE STANDARD TABLE OF ty_vbap,

         t_vbpa TYPE STANDARD TABLE OF ty_vbpa.

 

*Work areas

DATA:    wa_vbap TYPE ty_vbap,

         wa_vbpa TYPE ty_vbpa.

 

* Variables

DATA:    lv_bukrs TYPE bukrs,

         lv_vkorg TYPE vkorg,

         lv_waers TYPE waers,

         lv_prsdt TYPE prsdt,

         lv_netwr TYPE netwr.

 

* Currency conversions Variables

       DATA:    save_ukurs     LIKE tcurr-ukurs,

         save_kurst     LIKE tcurr-kurst,

         save_ukurx(8)  TYPE p,

         save_ffact1    LIKE  tcurr-ffact,

         save_tfact     LIKE  tcurr-tfact,

         save_ffact     LIKE  tcurr-ffact,

         save_ukurs1(11) TYPE p DECIMALS 5.

 

* Field symbol declaration

FIELD-SYMBOLS: <i_fs_order_item> LIKE LINE OF t_vbap.

 

Save and Activate it.

 

Creating the Function Module

 

* Auxiliary Selection criteria structure

  DATA: l_s_select TYPE srsc_s_select.

 

* Maximum number of lines for DB table

  STATICS: s_s_if TYPE srsc_s_if_simple,

 

* counter

          s_counter_datapakid LIKE sy-tabix,

 

* cursor

          s_cursor TYPE cursor.

 

* Select ranges

  RANGES:  l_r_vbeln        FOR vbap-vbeln,    "DOC

           l_r_posnr        FOR vbap-posnr,    "ITEM

l_r_dltdate      FOR vbap-erdat.    "DELTA DATE

 

DATA       t_final  LIKE LINE OF e_t_data.

* Initialization mode (first call by SAPI) or data transfer mode

* (following calls)?

  IF i_initflag = sbiwa_c_flag_on.

 

************************************************************************

* Initialization: check input parameters

*                 buffer input parameters

*                 prepare data selection

************************************************************************

 

* Check DataSource validity

    CASE i_dsource.

      WHEN 'ZBW_DS_TEST'.

      WHEN OTHERS.

        IF 1 = 2. MESSAGE e009(r3). ENDIF.

* This is a typical log call. Please write every error message like this

        log_write 'E'                  "message type

                  'R3'                 "message class

                  '009'                "message number

                  i_dsource   "message variable 1

                  ' '.                 "message variable 2

        RAISE error_passed_to_mess_handler.

    ENDCASE.

 

    APPEND LINES OF i_t_select TO s_s_if-t_select.

 

* Fill parameter buffer for data extraction calls

    s_s_if-requnr    = i_requnr.

    s_s_if-dsource   = i_dsource.

    s_s_if-maxsize   = i_maxsize.

 

* Fill field list table for an optimized select statement

* (in case that there is no 1:1 relation between InfoSource fields

* and database table fields this may be far from being trivial)

    APPEND LINES OF i_t_fields TO s_s_if-t_fields.

 

  ELSE.                 "Initialization mode or data extraction ?

 

************************************************************************

* Data transfer: First Call      OPEN CURSOR + FETCH

*                Following Calls FETCH only

************************************************************************

 

* First data package -> OPEN CURSOR

    IF s_counter_datapakid = 0.

 

* Fill range tables BW will only pass down simple selection criteria

* of the type SIGN = 'I' and OPTION = 'EQ' or OPTION = 'BT'.

      LOOP AT s_s_if-t_select INTO l_s_select.

        CASE l_s_select-fieldnm.

          WHEN 'VBELN'.

            l_r_vbeln-sign        = l_s_select-sign.

            l_r_vbeln-option      = l_s_select-option.

            l_r_vbeln-low         = l_s_select-low.

            l_r_vbeln-high        = l_s_select-high.

            APPEND l_r_vbeln.

          WHEN 'POSNR'.

            l_r_posnr-sign        = l_s_select-sign.

            l_r_posnr-option      = l_s_select-option.

            l_r_posnr-low         = l_s_select-low.

            l_r_posnr-high        = l_s_select-high.

            APPEND l_r_posnr.

          WHEN 'DLTDATE'.

            l_r_dltdate-sign      = l_s_select-sign.

            l_r_dltdate-option    = l_s_select-option.

            l_r_dltdate-low       = l_s_select-low.

            l_r_dltdate-high      = l_s_select-high.

            APPEND l_r_dltdate.

          ENDCASE.

      ENDLOOP.

 

* Determine number of database records to be read per FETCH statement

* from input parameter I_MAXSIZE. If there is a one to one relation

* between DataSource table lines and database entries, this is trivial.

* In other cases, it may be impossible and some estimated value has to

* be determined.

 

 

  OPEN CURSOR WITH HOLD s_cursor FOR

      SELECT itm~vbeln AS vbeln itm~posnr AS posnr

             itm~netwr AS netwr itm~waerk AS waerk itm~aedat

             AS dltdate FROM vbap AS itm

             WHERE itm~vbeln IN l_r_vbeln

             AND itm~posnr IN l_r_posnr

             AND ( ( itm~aedat EQ '00000000'

             AND itm~erdat IN l_r_dltdate )

             OR  ( itm~aedat NE '00000000'

             AND itm~aedat IN l_r_dltdate ) ).

    ENDIF.                             "First data package ?

 

* Fetch records into interface table.

*   named E_T_'Name of extract structure'.

 

  REFRESH: t_vbap.

                FETCH NEXT CURSOR s_cursor

                APPENDING CORRESPONDING FIELDS

                OF TABLE t_vbap

                PACKAGE SIZE s_s_if-maxsize.

                IF sy-subrc <> 0.

                 CLOSE CURSOR s_cursor.

                 RAISE no_more_data.

                ENDIF.

* Loop at it_vbap to build the final table

 

LOOP AT t_vbap ASSIGNING  <i_fs_order_item> .

        CLEAR: t_final, lv_vkorg,lv_bukrs,lv_waers,lv_prsdt.

        MOVE: <i_fs_order_item>-vbeln TO t_final-vbeln,

              <i_fs_order_item>-posnr TO t_final-posnr,

              <i_fs_order_item>-netwr TO t_final-netwr,

              <i_fs_order_item>-waerk TO t_final-waerk,

              <i_fs_order_item>-dltdate TO t_final-dltdate.

 

        SELECT SINGLE pernr FROM vbpa INTO  t_final-pernr

            WHERE vbeln = <i_fs_order_item>-vbeln

            AND  posnr = '000000'

            AND parvw = 'ZM'.

          IF sy-subrc NE 0.

             t_final-pernr = space.

          ENDIF.

 

*Select the order header data based on sales document

       SELECT SINGLE vkorg FROM vbak INTO lv_vkorg

              WHERE vbeln = <i_fs_order_item>-vbeln.

         IF sy-subrc = 0.

* Select the company code based on sales org

         SELECT SINGLE bukrs FROM tvko INTO lv_bukrs

             WHERE vkorg = lv_vkorg.

            IF sy-subrc = 0.

* Select the local currency based on company code

         SELECT SINGLE waers FROM t001 INTO lv_waers

            WHERE bukrs = lv_bukrs.

           IF sy-subrc = 0.

             t_final-loc_curr = lv_waers.

             t_final-rep_curr = 'USD'.

           ENDIF.

 

* Select the pricing date based on sales document and sales docu item

         SELECT SINGLE prsdt FROM vbkd INTO lv_prsdt

                WHERE vbeln = <i_fs_order_item>-vbeln

                AND  posnr = <i_fs_order_item>-posnr.

              IF sy-subrc NE 0.

                SELECT SINGLE erdat FROM vbap INTO lv_prsdt

                WHERE vbeln = <i_fs_order_item>-vbeln

                AND  posnr = <i_fs_order_item>-posnr.

              ENDIF.

 

* Convert to local currency

         IF  <i_fs_order_item>-waerk NE lv_waers.

 

               CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'

                  EXPORTING

                    date                    =  lv_prsdt

                    foreign_amount          =  1

                    foreign_currency        = <i_fs_order_item>-waerk

                    local_currency          = lv_waers

                    type_of_rate            = 'M'

                  IMPORTING

                    exchange_rate           = save_ukurs

                    foreign_factor          = save_ffact

                    local_amount            = save_tfact

                    local_factor            = save_ffact1

                    exchange_ratex          = save_ukurx

                    derived_rate_type       = save_kurst

                  EXCEPTIONS

                    no_rate_found           = 1

                    overflow                = 2

                    no_factors_found        = 3

                    no_spread_found         = 4

                    derived_2_times         = 5.

               IF sy-subrc = 0.

                  save_ukurs1 = save_ukurs / save_ffact.

                  IF save_ffact1 NE 0 .

                     save_ukurs1 = save_ukurs1 * save_ffact1.

                  ENDIF.

                  lv_netwr = <i_fs_order_item>-netwr * save_ukurs1.

                  t_final-netwr_loc_val = lv_netwr.

               ENDIF.

          ELSE.

              t_final-netwr_loc_val = <i_fs_order_item>-netwr.

          ENDIF.

 

* Compare the Local currency with reporting currency

          IF lv_waers NE 'USD' .

* Convert the currency in the reporting currency

               CLEAR : save_ukurs1,lv_netwr,save_ukurs,

                       save_ffact,save_tfact,save_ffact1,

                       save_ukurx,save_kurst.

 

               CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'

                 EXPORTING

                   date                    = lv_prsdt

                   foreign_amount          = 1

                   foreign_currency        = lv_waers

                   local_currency          = 'USD'

                   type_of_rate            = 'M'

                 IMPORTING

                   exchange_rate           = save_ukurs

                   foreign_factor          = save_ffact

                   local_amount            = save_tfact

                   local_factor            = save_ffact1

                   exchange_ratex          = save_ukurx

                   derived_rate_type       = save_kurst

                 EXCEPTIONS

                   no_rate_found           = 1

                   overflow                = 2

                   no_factors_found        = 3

                   no_spread_found         = 4

                   derived_2_times         = 5.

                 IF sy-subrc = 0.

                   save_ukurs1 = save_ukurs / save_ffact.

                   IF save_ffact1 NE 0 .

                      save_ukurs1 = save_ukurs1 * save_ffact1.

                   ENDIF.

                   lv_netwr = t_final-netwr_loc_val * save_ukurs1.

                   t_final-netwr_rep_val = lv_netwr.

                 ENDIF.

          ELSE.

              t_final-netwr_rep_val = t_final-netwr_loc_val.

          ENDIF.

            ENDIF.

         ENDIF.

   APPEND t_final TO e_t_data.

  ENDLOOP.

    s_counter_datapakid = s_counter_datapakid + 1.

  ENDIF.              "Initialization mode or data extraction ?

 

ENDFUNCTION.

 

 

 

Explanation of the Code

 

RANGES:  l_r_vbeln        FOR vbap-vbeln,    "DOC

         l_r_posnr        FOR vbap-posnr,    "ITEM

         l_r_dltdate      FOR vbap-erdat.    "DELTA DATE

 

The l_r_dltdate range is created for the delta date. The selection criteria passed to the extractor are filled into this range; it is then used to build the logic for extracting the delta from VBAP using AEDAT and ERDAT.

 

LOOP AT s_s_if-t_select INTO l_s_select.

        CASE l_s_select-fieldnm.

          WHEN 'VBELN'.

            l_r_vbeln-sign        = l_s_select-sign.

            l_r_vbeln-option      = l_s_select-option.

            l_r_vbeln-low         = l_s_select-low.

            l_r_vbeln-high        = l_s_select-high.

            APPEND l_r_vbeln.

 

This part of the code is used to pass the selections on VBELN down from OLAP (BW) to OLTP (the source system). The same applies to the two other fields, POSNR and DLTDATE.

 

SELECT itm~vbeln AS vbeln itm~posnr AS posnr

             itm~netwr AS netwr itm~waerk AS waerk itm~aedat

             AS dltdate FROM vbap AS itm

             WHERE itm~vbeln IN l_r_vbeln

             AND itm~posnr IN l_r_posnr

             AND ( ( itm~aedat EQ '00000000'

             AND itm~erdat IN l_r_dltdate )

             OR  ( itm~aedat NE '00000000'

             AND itm~aedat IN l_r_dltdate ) ).

 

Here we extract the delta records from VBAP based on the selection passed for DLTDATE, using ERDAT and AEDAT. Two conditions are used to extract the delta:

1. ( itm~aedat EQ '00000000' AND itm~erdat IN l_r_dltdate ) – for new records

2. ( itm~aedat NE '00000000' AND itm~aedat IN l_r_dltdate ) – for changed records

 

Once the above is done, the fetched delta records are available in the internal table T_VBAP.

 

LOOP AT t_vbap ASSIGNING  <i_fs_order_item>.

 

SELECT SINGLE pernr FROM vbpa INTO  t_final-pernr

            WHERE vbeln = <i_fs_order_item>-vbeln

            AND  posnr = '000000'

            AND parvw = 'ZM'.

          IF sy-subrc NE 0.

             t_final-pernr = space.

          ENDIF.

 

We loop over this table to extract the additional fields as per the requirement; in our case we read the additional partner field (PERNR) from VBPA for the orders created or changed since the last BW extraction.

 

Currency Conversions

 

1. Document currency: the currency in which a document is posted in R/3. It is available at document item level in VBAP (field WAERK), since VBAP contains the sales document item level information.

2. Local currency: the company code currency. It is stored at company code level in table T001.

3. Reporting currency: in our case it is fixed as 'USD'.

 

2.jpg

Logic to get Local Currency:

 

Now in order to get Local currency one needs to have the Company Code (BUKRS) but this is not available at Sales Order Item Level (VBAP).

 

1. Fetch Sales Organization (VKORG) first for all the Orders extracted earlier.

     SELECT SINGLE vkorg FROM vbak INTO lv_vkorg

                 WHERE vbeln = <i_fs_order_item>-vbeln.

 

2. Fetch Corresponding Company Code (BUKRS) for all the Sales Organization (VKORG) extracted above from TVKO.

       SELECT SINGLE bukrs FROM tvko INTO lv_bukrs

             WHERE vkorg = lv_vkorg.

 

3. Fetch Corresponding Local Currency from the Table T001 for all company Code (BUKRS) extracted above.

       SELECT SINGLE waers FROM t001 INTO lv_waers

            WHERE bukrs = lv_bukrs.

 

This way we have fetched all the currencies for all the Order Items i.e. Document, Local and Reporting Currency. Now we need to do the conversion of the net value in Document Currency to net value in Local and Reporting Currency.

 

Logic to do Currency Conversions:

 

We shall be using a Standard Function Module ‘CONVERT_TO_LOCAL_CURRENCY’ for doing various conversions i.e. Local and Reporting Currency from Document Currency.

 

Input Parameters:

Date – PRSDT “Pricing Date”. Need to fetch it for doing conversions.

From Currency - already fetched above.

To Currency – already fetched above.

Type of Conversion: Fixed as ‘M’ in our case

 

SELECT SINGLE prsdt FROM vbkd INTO lv_prsdt

                WHERE vbeln = <i_fs_order_item>-vbeln

                AND posnr = <i_fs_order_item>-posnr.

 

In this part of the code we are fetching Pricing Date (PRSDT) from VBKD based upon Order Items already extracted from VBAP.

Finally we are passing everything to the Function Module to have the Conversion done to Local Currency first and later to Reporting Currency.

 

Putting it All Together: Create the Data source:

 

1. Go to RSO2 to create the data source.

2. Fill in the various Details including the Function module and Structure name.

3.jpg

 

3. Select the option Timestamp and select the DLTDATE field you had added in your extract structure. Also set the safety limits as required.

4.jpg

Note: We could have selected Calend. Day but in that case the delta extraction can only be done once in a day.

 

4. Click Save to go back to the previous screen and click Save again. The following screen comes up.

5.jpg

Note that the DLTDATE field is disabled for selection; this is because it is populated automatically as part of the delta. As a result, it is unavailable for manual entry in the InfoPackage or in RSA3.

Following this step, create the corresponding ODS, DataSource etc. on the BW side and replicate. These steps are similar to what would be done for a normal generic DataSource.

Later, the active table of this ODS is read to make these additional fields available in the existing BW data flow.

 

Hope it helps.

 

Thanks.

 

 

 

 

How to Schedule a Process Chain After a Job - Periodically


Requirement: The process chain should be scheduled to run after the execution of an InfoPackage, on a daily basis.

 

Steps to Follow:

 

1) Schedule the InfoPackage as in the screen below.

pc1.jpg

Here you can also specify that the job should be cancelled after X runs. Suppose you want to execute the InfoPackage for 5 days: maintain 5, and the job will be created for 5 days only.

Check in SM37 that the job is scheduled.

pc2.jpg

2) Open the process chain and modify the start variant.

 

pc3.jpg

 

Save the variant, activate the chain, and do not forget to execute the chain.

Here we do not need to set up a periodic job for the process chain; this is taken care of automatically by the 'after job' setting.

      As soon as your InfoPackage job is completed, it triggers the process chain.

      PC4.jpg

        Here the process chain was triggered after successful execution of the InfoPackage.

        pc4_1.jpg

        

       As the InfoPackage is scheduled on a daily basis, a new job is created for the next day after each successful execution.

      pc5.jpg

        pc6.jpg

        

In this way we are able to schedule a process chain after a job, periodically.

 

I appreciate your suggestions and feedback.

Avoid Re-Transport of an Event-Based Process Chain


Purpose: To avoid re-transport of an event-based process chain.

 

There is a process chain based on an event. The start variant of the process chain shows the event.

 

pc1.jpg

These events can be found in transaction SM64.

For the transport of a process chain, we generally collect the transport objects from RSA1 -> Transport Connection.

pc2.jpg

This shows all objects required for the process chain except the event. Sometimes we do not notice that the event is missing and transport the request to the QA system; the request imports without error. But if you do not have rights to create the event in QA, the process chain is of no use in QA until the event exists there.

In such cases, remember to collect the event in the transport together with the event-based process chain; this will save you time.

How to collect the event:

 

Go to transaction SM64 and find your event.

 

pc3.jpg

Select the event and click on the truck (transport) button; it will prompt for a transport request. Select the existing request for the process chain or create a new one.

pc4.jpg

In this way you can transport the event to QA or production.

 

 

I hope this is helpful to those with the same scenario, who do not have rights to create events in QA and production.


How to Transport a BEx Query from DEV to QA


Dear friends, with this blog I try to explain how we can transport BEx queries from the DEV system to the QA and PRD systems, because I have seen many people who are new to SAP BI repeatedly asking on SCN how to transport a BEx query.

 

First of all, we need a query, so create a BEx query in Query Designer.

 

Now go to transaction RSA1 and click on the Transport Connection tab.

 

 

From this screen we can transport whichever type of object we want.

 

 

Before selecting any object, let us look at the Grouping and Collection Mode options.

 

What is Grouping?

 

The first option to consider is for grouping the objects to be included.

 

 

The following are the options that are available for Grouping. Select the appropriate option and proceed to the next step.

 

 

What is Collection Mode?

 

This indicator is for the method you are using to gather all the objects needed to support the item you are activating.

 

 

The following are the options that are available for Collection Mode.

 

 

Depending on the collection mode that is selected, the system starts collecting the objects. The time taken to collect all the objects may vary depending on the grouping option that is selected. Once all the objects have been collected, proceed to the next step.

 

Now we select a developed query for transport.

 

Select Grouping as "Only Necessary Objects" and Collection Mode as "Collect Manually". Select the query to be transported under the query elements and choose Transfer.

 

 

Now Click on the Package.

 

 

Package: a package (formerly development class) groups related development objects so that they can be organized and transported together.

Assigning objects to a transportable package (instead of the local $TMP) is what allows them to be exported and imported.

 

Once we click on the Package column, the screen below appears, showing many query elements as well. Click on the filter button.

 

 

In this screen, select Object Type, transfer it from the field list to the filter criteria, and click OK.

 

 

Now in this screen enter ELEM (the object type for query elements).

 

Now you can select all the query elements: choose Select All and click OK.

 

 

 

 

Give the package name and save.

 

 

Create a new request and save.

 

 

Save the request as well.

 

 

Now click on the CTO (Transport Organizer Overview) option.

 

 

A new screen appears; this is the SE09 screen.

 

 

 

Now first release the task and then the request using the Transport (release) button.

 

Now check the log.

 

 

Now this request is ready for import into the QUA and PRD systems.

 

This can be done using transaction STMS or STMS_IMPORT in the QUA and PRD systems.

 

Thanks

By Default $TMP Package

$
0
0

Sometimes we face this issue: all newly created objects such as DSOs, InfoObjects, InfoCubes etc. go to the $TMP package by default,

even though a "Z" package already exists in SAP BI.

 

A small setting in transaction RSA1 solves this problem.

 

Go to RSA1 and click on BI Content Tab.

 

 

Now in the BI Content tab, open the menu and choose Edit > Transport > Switch off Standard Setting (the same menu lets you switch the standard setting on or off; both modes are explained below).

 

 

Here is more about the Switch On and Switch Off Standard settings.

 

Switch On Standard - with this option, after development or changes a pop-up appears asking you to save the development in a transport request.

 

Switch Off Standard - with this setting, all objects are saved by default in the local $TMP package. You then need to collect the objects in the Transport Connection: click on the object type folder, collect the required objects, assign the "Z" package and transport.

 

 

 

 

Thanks,

How to Unlock Objects from Transport Request

$
0
0

I have seen many people face problems while collecting transport objects, for the following reasons:

1) Some objects are locked because they are being modified by several people.

2) Some projects have their own BEx transport, which means every BEx query or workbook modification lands in that particular transport request only.

While collecting the objects or a single query, the system shows a warning message.

 

tr1.jpg

If we ignore this and send the transport request as-is to QA, the import will fail with return code 8.

So we need to collect all dependent objects when sending the request to the next system.

Here are the steps to unlock objects

Copy the request number from the warning message and go to transaction SE03.

 

tr2.jpg

Open the Requests/Tasks folder and double-click on Unlock Objects (Expert Tool).

 

tr3.jpg

Enter your request number and click the Execute button.

 

tr4.jpg

Click Unlock; it will unlock all objects in this request. Now you can delete the required objects from this request in SE01. For a BEx transport, usually delete all the objects, because it is difficult to identify which objects belong to which query.

 

Check in SE01

Before Unlock

tr5_1.jpg

 

After Unlock

tr5_2.jpg

 

Select the rows and click the Delete button. After this you can collect the objects in your own request.

Remember that while collecting the objects you should see the green icon on the screen.

 

tr6.jpg

 

Hope it helps you.

SAP Business Warehouse 7.3: a step to in-memory datawarehousing?

$
0
0

Published on www.element61.be

 

The newest version of SAP BW, 7.3, comes with a number of new infoproviders and features, seemingly optimized for the SAP HANA database.

SAP’s promise is an improvement in performance and scalability, improved integration with SAP BusinessObjects, more development options and simplified configuration and monitoring. The question which always pops up with a new version is the necessity and timing of an upgrade. One of the obvious reasons to perform an upgrade is the prerequisite to install SAP BusinessObjects Planning & Consolidation (BPC) version 10 on SAP BW. Without BPC as an upgrade driver, the answer becomes more complex.

In this insight, we will give an overview of the different new infoproviders and features, and elaborate on performance tests executed on BW 7.3 environments. Integration with SAP BusinessObjects (Data Services) will not be discussed here. Additionally, new features/providers for the BW Accelerator are not part of this discussion either, because BWA will become obsolete as HANA emerges.

 


 

Graphical modeling toolset

 

 

SAP BW 7.3 is shipped with a graphical data modeling tool set. In the overview given by SAP on BW 7.3, the goal of this toolset is to drastically reduce the manual work effort in creating dataflows. This proposition will be the subject of our discussion.

 

One of the first things that stands out when you start creating a dataflow with the tool is that the graphics are similar to the 'display dataflow' function in earlier versions. The tool does not seem to deliver an enhanced graphical interface.

 

The user has the option to start creating a dataflow from a template or create it from scratch.

When creating a dataflow from scratch, you have to drag in all the desired objects (datasources - DSO's - infosources - ...) , and connect them by dragging lines to each other - which will be your transformations and DTP's. The objects which are now displayed on the screen, are not created yet, they do not have any content and are in fact empty boxes. SAP calls them non-persistent objects. You can now choose to either create new objects for these containers or replace the non-persistent objects by already existing ones.

This process resembles design activities, and the design can now be entered directly in the system. However, there is no significant added value here, neither in the user interface nor in the reduction of manual work effort.

Designing a dataflow in Microsoft Visio or Excel offers more possibilities and flexibility, and the reduction of manual effort by directly entering your design in the system is minimal. In the end, all the objects have to be created manually, together with info-objects, mappings, and ABAP.

 

The other option is to create a dataflow starting from a dataflow template.

SAP offers dataflow templates based on their layered, scalable architecture (LSA) principles, going from simple designs to complex dataflows for huge data volumes. This is shipped together with detailed descriptions for the use case attached to the template. After you deploy the template, the same steps need to be followed to implement the objects, as the template also incorporates non-persistent objects.

We come to the same conclusion for the reduction of manual work: it is negligible. However, the templates should make it possible to force BI developers to work according to company standards for BW architecture (LSA or not). But in the end, this has more to do with BI management enforcing architectural standards than with the templates available in the system.

 

The toolset also gives the developer the possibility to transport dataflows as a whole, whereas earlier versions forced you to group the objects manually. This might be an improvement for more complex dataflows, where DSO's or masterdata not directly linked (only via ABAP) can be grouped together in one dataflow, which creates more visibility in the system.

 

Given the different options and interfaces provided by the graphical modelling toolset, we can conclude that the tool is definitely not a giant step forward, nor is it creating huge value for IT or the business. As a result, this should not - on its own - be considered a reason to upgrade to SAP BW 7.3.

 


 

Semantic Partitioned Objects

 

 

This is a new info-provider which groups together different physical infoproviders (DSO’s or InfoCubes) which are structurally identical but differ by a semantic split. The different infoproviders within the SPO contain the same dimensions but have data from a different region / costcenter / …

This split can also be made by means of complex logic embedded in Business Add-Ins (BAdIs). Of course, in earlier versions this could have been set up manually, but it would require enormous manual effort.


SPO in a classic DB environment

This infoprovider makes a lot of sense when considering cases where one/multiple reports with a pre-defined structure need to be reported in every region/cost center/…

In that case a report runs only on one semantic partition, which drastically improves performance as the query should only hit one small infocube (achieved by so-called partition-pruning). An alternative to this are the physical partitions, but in SAP Business Warehouse these can only be set up on time characteristics, and this feature is database dependent.

When considering huge dataloads (e.g. global datawarehouses), SPO's can be used to split the dataloads per country, to minimize the risk that a failing dataload of one country impacts another. This could also be achieved before BW 7.3, but considerably more manual effort was required.


SPO in a HANA environment

Of course, the remarks made for the classic DB also apply to the HANA environment, but to a lesser extent. With HANA, the data resides in-memory and a data look-up is significantly faster than a classic database read. The absolute time gain will therefore be smaller for the above scenario.

However, the use of SPO's leads to parallel processing, so you optimize your resources for a HANA infrastructure, which can use up to 80 parallel processing units. You will gain a lot of performance for reports that show summarized data from all regions / cost centers / ... The number of parallel processing units in a classic environment is much smaller, so the total gain for HANA will be higher for reports running over different regions / cost centers / ...

 


 

Hybridproviders

 

 

This new infoprovider combines historical data and (near) real-time data. In earlier versions, this was very hard to achieve in one infoprovider. This object should contain an infocube which contains the historical data and a virtual infocube or DSO with RDA. There is one transformation from the hybrid provider to the datasource, which is limited in complexity. When a query is executed from a hybridprovider, it reads the historic infocube and reads everything above the latest delta pointer for direct access.


Hybridprovider in a classic DB environment

In a classic DB, the virtualprovider is only usable for very small volumes of data. Knowing this, the only workable option is to use the DSO with RDA within the hybridprovider. However, the restriction here is that the datasource used must be RDA-enabled, and very few datasources are RDA-enabled! As a result, this new infoprovider will only be workable in very few scenarios on a classic DB.


Hybridprovider in a HANA environment

Because of the restriction with RDA datasources, the virtualprovider will be the typical object to use within a HANA environment. The recommendation is that your ERP system also runs on HANA, as the virtualprovider will read directly from the ERP database to report real-time information. The same limitation exists for datasources connecting to virtualproviders: they need to allow direct access.

The good news is that most finance and controlling datasources support direct access. The typical business case can be derived from this: hybridproviders allow for financial closing reports where a combination of real-time and historical data is very useful to speed up the close cycles.

 


 

Transientproviders and Compositeproviders

 

 

Transient- and compositeproviders are specifically designed for real-time data reporting, but omit the SAP BW metadata concepts - no need for infoobjects, masterdata, infocube design, etc.

A transientprovider can directly access SAP ECC data, and the possibility exists to build these via the BW client in SAP ECC in order to use the BI tools directly on SAP ECC. It consists of a classic infoset or analytical index. The classic infoset is for a typical usage scenario in SAP ECC. For a more performant approach a SAP HANA model can be published to SAP BW, which generates an analytical index. A transientprovider can then be constructed on top of the analytical index. A compositeprovider can join different analytical indexes.

These functionalities seem very similar to what can be achieved with a BO universe, which can also connect directly to a SAP HANA model and combine them. This is why these providers are positioned as prototyping instruments, in advance of a datawarehouse design.

One of the interesting future features will be the possibility of a direct connection to a BW datasource, which will provide a lot of added value for prototyping BW, or enable very quick access to information with minimal development effort. These types of providers are very interesting when considered within the overall market trend towards a more agile approach to datawarehouse projects. In-memory computing will make it possible to start from prototypes and evolve to a datawarehouse design which better incorporates the business requirements.

 


 

New Features

 

This section will describe new specific interesting features in data loading and modelling.

  • Hierarchy ETL: Up till now, we still had to use the old 3.x dataflows to load hierarchies. Finally there is a new hierarchy datasource on which transformations and DTPs can be built. Where in older versions complex ABAP was necessary to upload or change hierarchies in BW, this is now standardized in a transformation. This is definitely a step forward in terms of ETL functionality.
  • The master data deletion function has been upgraded, to enable system admins to minimize runtime and let the deletion run in background without process chain.
  • Dataflow generation wizard when an infocube is created : this is also a feature to promote prototyping in SAP BW. The user only needs to know the desired fields and datasource to generate a simple dataflow.
  • Generic delta is now possible for Universal Data (UD) & Database Connect (DB) options, as well as for flat files.
  • Navigation attributes as source fields : This extra feature for transformations could replace masterdata lookups. This is an interesting feature which increases the system visibility, but will be subject to our performance tests.
  • Read DSO transformation rule: Where BW developers used to program ABAP to perform lookups in other DSO's, now a standard rule is available to perform this lookup with the key fields of the DSO. As this is an interesting feature in terms of visibility in the system, we are also interested in the performance impact, so this too will be the subject of the tests described below.

 


 

Performance Tests

 

Some articles suggest that SAP BW 7.3 is a quantum leap when it comes to better data load features.

Let's take this statement and perform some live-system tests. Most of the tests are performed on a classic DB environment: a server with 8 Intel Xeon 2,9 GHz processors, 16 GB RAM, Windows Server Enterprise. We used the standard DSO 0FIGL_O10 as the basis for our tests, into which we uploaded 1.038.220 records. We also made use of SAP's HANA demo environment to run some performance tests. The HANA environment runs on a server with a quad 2,13 GHz Intel Xeon® E7-4830 processor, 16 GB RAM, Windows Server Enterprise.


  • In BW 7.3 the data activation is changed from single lookups to a package fetch of the active table, resulting in faster activation and fewer locks on the lookup tables. As we did not have a perfect test environment at hand (BW 7.0 and BW 7.3 running on the same hardware) we could not perform a valid test. However, other lab tests have indicated an expected increase of 20% to 40% in activation performance.

    It is interesting, however, to note the difference in activation times between a HANA environment and a classic DB environment.

    We noted the following activation times when most of the SIDs were already generated on the classic DB:

 

 

Image 1: Activation classic DB BW7.3



Exactly the same number of records was activated in the identical DSO in the HANA environment:

 

Image 2: Activation HANA BW7.3



The performance difference between the classic DB and HANA is huge; we are talking about an improvement by a factor of 22.

  • During the infocube load, BW 7.3 now makes use of mass processing during the SID and DIM-ID determination phase. This option can be switched on/off by setting the RSADMIN parameter RSDD_ENABLE_MASS_CUBE_WRITE (a quick way to check this parameter is sketched after the test results below). This will be subject to our tests.

    The below figures show the results for a classic DB :

 

Image 3: No Mass Cube Write Classic DB BW7.3



 

 

Image 4: Mass Cube Write Classic DB BW7.3



 

If we compare the two durations (59 min 20 sec and 45 min 12 sec) we come to (3560 - 2712) / 3560 ~ 24% performance improvement in our test environment.

Now we will look at the improvement in the HANA environment.

 

Image 5: No Mass Cube Write HANA - limited nr of records



 

We can directly see a huge improvement in the load performance due to HANA, again by a factor of around 20. To come to a decent estimate for the ratio in HANA, we re-do this test with a lot more records (12.830.893) and a DSO which includes a lot more infoobjects (129 as opposed to 20 in the DSO 0FIGL_O10). The results are presented below:

 

Image 6: No Mass Cube Write Hana - large volume



The load finished in 3 hours 42 min 23 sec = 13.343 seconds.

Below the results for the mass cube write on the HANA platform :

 

Image 7: Mass Cube Write Hana - large volume



 

 

The load finished in 52 minutes 4 seconds = 3.124 seconds. Then we come to a ratio of (13343 - 3124) / 3124 ~ 327% performance improvement (i.e. the load without mass cube write took more than four times as long), a huge difference with the ratio seen on the classic DB environment. The test confirms the statement that HANA provides a significant increase in the level of parallelization.
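As a quick aside to this test: the mass-write switch itself lives as a row in the RSADMIN table (object RSDD_ENABLE_MASS_CUBE_WRITE) and is commonly maintained with the standard report SAP_RSADMIN_MAINTAIN in SE38. A minimal sketch to check whether the parameter is set on your system - an assumption-based convenience check, not part of the original tests:

* Minimal sketch: check whether the mass cube write parameter is set in RSADMIN.
* The parameter is usually created/changed via report SAP_RSADMIN_MAINTAIN.
DATA: lv_value TYPE rsadmin-value.

SELECT SINGLE value
  FROM rsadmin
  INTO lv_value
  WHERE object = 'RSDD_ENABLE_MASS_CUBE_WRITE'.

IF sy-subrc = 0.
  WRITE: / 'RSDD_ENABLE_MASS_CUBE_WRITE =', lv_value.
ELSE.
  WRITE: / 'Parameter not maintained - default behaviour applies.'.
ENDIF.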

Now we will take a closer look at the performance of new data load features : navigational attributes and datastore lookup.

We perform the tests on our classic DB server. For this, we loaded the same number of records from one DSO to another, doing a lookup in the company code masterdata (containing 13.971 records) and a DSO containing the same data (+1M records).

Let's first take a look at the results of a masterdata lookup using ABAP code - in a non-performant manner (doing a SELECT in the routine of the rule) and a performant manner (using internal tables, doing the SELECT in the start routine):

 

Image 8: Masterdata Lookup SELECT in rule



 

 

Image 9: Masterdata Lookup , SELECT in start routine



 

 

We notice a very small difference between the two examples, partly due to the small table size of the masterdata - which is the normal case for this data type. Below the result using the navigational attribute as the source:

 

Image 10: Masterdata Lookup, Nav. Attr. as Source



 

We notice no considerable negative performance impact from using the navigational attributes; it is even likely to be more performant.
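For readers who want to reproduce the comparison: the two ABAP lookup variants used above follow the classic patterns sketched below. The table and field names (/BI0/PCOMP_CODE with its COUNTRY attribute) follow the standard naming for 0COMP_CODE master data, SOURCE_PACKAGE, SOURCE_FIELDS and RESULT are the standard routine parameters, and the skeletons assume the source structure contains COMP_CODE - treat this as an illustrative sketch, not the exact test code:

* --- Variant 1 (non-performant): SELECT inside the field routine ------------
* Executed once per record of the data package:
*   SELECT SINGLE country FROM /bi0/pcomp_code
*     INTO result
*     WHERE comp_code = source_fields-comp_code
*       AND objvers   = 'A'.

* --- Variant 2 (performant): buffer the master data in the start routine ----
* Global declaration part of the transformation:
TYPES: BEGIN OF ty_cc,
         comp_code TYPE /bi0/pcomp_code-comp_code,
         country   TYPE /bi0/pcomp_code-country,
       END OF ty_cc.
DATA: gt_cc TYPE STANDARD TABLE OF ty_cc,
      gs_cc TYPE ty_cc.

* Start routine: one database access for the whole data package.
IF source_package IS NOT INITIAL.
  SELECT comp_code country
    FROM /bi0/pcomp_code
    INTO TABLE gt_cc
    FOR ALL ENTRIES IN source_package
    WHERE comp_code = source_package-comp_code
      AND objvers   = 'A'.
  SORT gt_cc BY comp_code.
ENDIF.

* Field routine: binary-search read instead of a SELECT per record.
READ TABLE gt_cc INTO gs_cc
     WITH KEY comp_code = source_fields-comp_code
     BINARY SEARCH.
IF sy-subrc = 0.
  result = gs_cc-country.
ENDIF.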

For the DSO lookups, the non-performant variant reads the DSO directly in the rule using the table key. We use the table key because the 'read DSO' rule also uses the key. The performant code makes use of sorted internal tables because of the large table size. Let's take a look at the results of the DSO lookup:

 

Image 11: DSO Lookup, SELECT in the rule



 

 

Image 12: DSO Lookup, SELECT In start routine to sorted table



 

 

We see a significant performance difference between the two coding examples. Below the result of using the new read DSO rule :

 

Image 13: DSO Lookup, DSO lookup rule



 

The new lookup rule seems to perform well compared with both the performant and the non-performant code.
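For completeness, the hand-coded equivalent of the 'read DSO' rule is a keyed read against the active table of the lookup DSO. A minimal sketch with hypothetical names - DSO ZFIDSO (active table /BIC/AZFIDSO00) and hypothetical key and value fields:

* Minimal sketch (hypothetical names): what the 'read DSO' rule does in effect -
* a lookup on the active table /BIC/AZFIDSO00 using the full DSO key.
* In hand-written form this would normally be buffered as in the masterdata
* example above; shown here as the simple per-record variant for clarity.
SELECT SINGLE /bic/zamount
  FROM /bic/azfidso00
  INTO result
  WHERE comp_code  = source_fields-comp_code
    AND doc_number = source_fields-doc_number.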

 

Conclusion

 

 

While we have seen that the new graphical modelling toolset is not living up to its promises, BW 7.3 provides enough alternative reasons for upgrading your system, even if you choose to continue with a classic database underneath the datawarehouse.

BW 7.3 could deliver the necessary performance improvements to speed up large data loads and heavy reports, and enhance functionality to support complex dataflows.

If a company has chosen to migrate to HANA, an upgrade to 7.3 is required. Besides the minimal version requirement, 7.3 is shipped with extra features which take advantage of the in-memory database.

BW 7.3 provides more possibilities to engage in BW prototyping and direct reporting on SAP ECC, enabled by a performance increase generated by HANA and new BW 7.3 infoproviders which are HANA-optimized. The HANA-alignment strategy of SAP is very present in BW 7.3.

If Moore’s law ("computer technologies roughly double every 2 years”) continues to be true, the cost of in-memory databases will further decrease and all datawarehouses and transactional systems will be in-memory.

SAP BW 7.3 supports this evolution, and everybody wanting to benefit from this should consider a BW upgrade to version 7.3.

Good News - Easier Modeling of the SEM Add-On in Solution Manager

$
0
0

Easier Modeling of the SEM Add-On in Solution Manager

 

Recently the following option was made available for using the SEM component on top of an existing BW system, starting from release SAP NetWeaver 7.0:

You can now define the SEM component as a "normal" add-on on top of the NetWeaver BW system.

This means that creating the stack.xml for the Software Update Manager (SUM) and the Database Migration Option (DMO) is no longer a big hurdle.

 

See details in the Note 1927083 - SAP NetWeaver Systems with SEM-BW

 

The following matrix shows all possible options for your existing NetWeaver BW with the SEM Add-On:

SEM_Matrix740.JPG

 

 

New Installation

For new installations, the new Add-On product version can be chosen as usual during the MOPZ transaction, within the Add-On Selection step.

 

Change to Add-On Product Version of SAP SEM (CISI)

Customers who are currently using the ERP variant now have the chance to change to the new Add-On product version using the CISI process.

Step-by-step documentation can be found in the Maintenance Planning Guide, section "Specifics in Installation and Upgrade".

 

See also Note 1816146 - Correction of installed software information (CISI)

 

You can now create the CISI.XML file from the updated product system description:

Start transaction SE38 in the NetWeaver BW/SEM System and run the report AI_LMDB_EASY_SUPPORT.

On the Product System tab, select the option to download the CISI stack XML, select the product system and choose Execute.

Save the CISI.XML.

 

Thanks to the Application Lifecycle Management Team - https://service.sap.com/mopz

MOPZ.JPG

Additional information about the modeling of the NetWeaver BW/SEM system can be found here: https://scn.sap.com/docs/DOC-44121

 

Best Regards

Roland Kramer PM BW/In-Memory
