
Ways to find the source of a Virtual Provider


This blog addresses beginners in the SAP BW space who are exploring and learning BW. Here are the ways to find the Function Module that loads data into a VirtualProvider.

 

Background:

 

         When a customer reports a data-mismatch issue, you may have to find the root cause. For a MultiProvider that combines VirtualProviders and standard InfoCubes, it is necessary to understand the DataSources and the data flow from the various source systems. A VirtualProvider, however, has no Manage screen, and it does not show the data flow that would reveal the source of its data.

 

Solution:

       

To find the source of a VirtualProvider, follow one of the ways below.

 

I. Using RSA1

 

1. Go to RSA1 ==> InfoProvider ==> locate your InfoProvider.

2. Open the InfoProvider in display mode and click the Information icon, or press Ctrl+F5.


3. In the subsequent popup screen, choose 'Type/Attributes'.


4. The next popup shows the details of the VirtualProvider. Click the details icon to find the name of the Function Module and its source system.


 

II. Using database tables

 

1. Go to SE16 ==> enter the table name RSDCUBE.


2. Pass the InfoProvider name as input along with OBJVERS = A.


3. The field 'Name of Function Module/Class/HANA mod' (FUNCNAME) gives you the Function Module name.

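If you prefer a programmatic check to the SE16 UI, a minimal ABAP sketch could read the entry directly (the InfoProvider name ZVIRTPROV below is a placeholder):

* Read the function module behind a VirtualProvider from RSDCUBE
DATA lv_funcname TYPE rsdcube-funcname.

SELECT SINGLE funcname FROM rsdcube
  INTO lv_funcname
  WHERE infocube = 'ZVIRTPROV'    "placeholder InfoProvider name
    AND objvers  = 'A'.           "active version

IF sy-subrc = 0 AND lv_funcname IS NOT INITIAL.
  WRITE: / 'VirtualProvider is loaded by function module', lv_funcname.
ENDIF.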

 

Conclusion:

Either of these two ways will give you the source of the VirtualProvider, along with its source system, for further investigation.


Thanks for reading the blog. Feel free to share feedback, or other ways to achieve the same.


The future of the SAP EDW: Interview with Juergen Haupt - Part I


This blog has previously been published on my company's website, and posted here to reach the SCN audience as well.

 

At the High Tech Campus Eindhoven, the Netherlands, Juergen Haupt, Product Manager SAP EDW (BW/HANA), gave a presentation for the Dutch User Group (VNSG). In the morning before the meeting, I was fortunate enough to get the chance to sit down with Mr. Haupt for an interview.

About SAP BW on HANA, LSA++, Native development, S/4HANA Analytics and everything in between.

 

Left: Juergen Haupt, SAP. Right: Sjoerd van Middelkoop, SOA People | Intenzz


Mr. Haupt, welcome to Eindhoven! Please introduce yourself to our readers. 


Well, thank you, Sjoerd! My name is Juergen Haupt and I have been with SAP for 18 years now, working in the area of Data Warehousing. Before joining SAP, I worked at Software AG, where I had my first contact with Data Warehousing. Starting to work with the early releases of SAP BW, it quickly became clear to me that BW was a fully new BI approach, bringing business requirements into focus. Nevertheless, the first versions were primarily focused on OLAP, not on data warehousing as defined, for example, by Bill Inmon. Knowing about the impact of 'stove pipes' and encouraged by customers, I began pushing the idea of Inmon's 'single version of the truth' and Kimball's 'conformed dimensions' towards an architecture-driven BW approach. Around 2005 more and more customers positioned BW as their Enterprise Data Warehouse and asked for more guidance on how to set up a BW EDW. As a consequence we defined the Layered Scalable Architecture (LSA), which has become the standard for setting up a BW EDW on AnyDB today.

 

But there is never a standstill. So at the moment we had reached a solid, generally accepted state of LSA on RDBMS, SAP HANA - and a little later BW on HANA - entered the scene. And this is why LSA++ for BW on HANA is the successor of the LSA for BW on AnyDB.

 

 

Q: So, if we compare the ‘traditional’ BW to BW on HANA – what are the major differences?


Well, first of all, customers that moved to BW on HANA report tremendous performance gains with respect to data loads and querying. Then they notice the simplification through fewer InfoCubes. Further simplification comes in BW on HANA 7.40 SP8 through the new Advanced DSO, which replaces traditional DSOs and InfoCubes. In addition to simplification comes flexibility through the new CompositeProvider, which allows combining any BW InfoProviders (DSOs, the new Advanced DSOs, or InfoCubes) to create new virtual solutions. Even combinations with HANA native models outside of BW are possible.

 

But there are benefits at second glance that are maybe not so well known: let's call it 'the new openness of BW on HANA'. We all know the effort that integrating non-SAP raw data into BW meant in the past: you always had to define InfoObjects and assign them to the raw data fields. This is no longer a prerequisite for integrating data into BW, as BW on HANA 7.40 comes with so-called field-based modeling. Field-based modeling means that you can now integrate data into BW with considerably lower effort than before. Regardless of whether you load data into BW or whether the data resides outside BW, you can now directly model and operate on field-level data without needing to define InfoObjects in advance and subsequently map the fields to the InfoObjects. This makes the integration of any data much easier. And how is this achieved? The new Advanced DSO allows storing field-level data in BW. Advanced DSOs can have only fields, a mixture of fields and InfoObjects, or just InfoObjects, like the old DSOs. On top of BW Advanced DSOs with fields, or on any SQL or HANA view outside BW, you define BW on HANA Open ODS Views to model reusable BW semantics, identifying facts, master data, and semantics of fields such as currency fields or text fields. Furthermore, in Open ODS Views you can define associations between Open ODS Views and InfoObjects, which means you model virtual star schemas. Last but not least, you can use Open ODS Views in a query, or combined with other providers in a CompositeProvider, like any InfoProvider.

 

So, in short, BW on HANA can model and work on raw data regardless of where it is located, and we can integrate that raw data with the harmonized InfoObject world by associating InfoObjects with fields in Open ODS Views.

The idea of working with raw data in BW, and the early and easy integration of raw data, results in the new 'Open ODS Layer', which brings BW and the sources closer together.

 

 

Q: So what you are saying is that the functionality that has been developed for BW on HANA is actually created from an architectural point of view, and not from a technological point of view?


Exactly, this is an important driver. Knowing that HANA can work on data as it is, without transforming the data into specific analytic structures, you should be able to work with virtual objects directly on any field-level data. Bringing the source systems closer to BW means that we need something intermediate between the source and the fully fledged, top-down modeled EDW described by InfoObjects. This is achieved by the Open ODS Layer.

 

 

Q: LSA++ is, as you stated, the successor of LSA for BW on HANA scenarios. What are the main differences between the LSA approach and LSA++?


No architecture stays forever. Any architecture has to be reviewed continuously, especially when the circumstances change. When HANA came along, and a little later BW on HANA was released, colleagues asked me very early: “Juergen, can you make an update of LSA for BW on HANA?” I hesitated, because it was clear that BW on HANA is more than just exchanging the relational database, more than the offering of the in-memory BW Accelerator. This is why just an ‘update of LSA’ was and is not adequate – I do not want to bore you with the discussions we had – we can see the results looking at BW on HANA 7.4 and LSA++ as the successor of LSA:

Bearing in mind what I said before about BW on HANA, we can look at LSA++ from two different perspectives – the first I call LSA++ for simplified data warehousing.

This perspective deals with the traditional way of doing data warehousing: moving data to BW and organizing the data in a proper way. With LSA++ the architecture becomes far more streamlined and flexible. We find two major differences here with respect to the traditional LSA. First, making persistent data marts – BW InfoCubes – obsolete by using virtual composition of persistent data (CompositeProviders); the result is the LSA++ Virtual Data Mart Layer. Second, bringing BW closer to the source data through BW field-based modeling; the result is the Open ODS Layer.

 

The Open ODS Layer broadens our architecture options, as it may serve as an inbound layer not only for an EDW layer that is described mainly by InfoObjects. We can also stage the data in a DWH layer that is mainly described by fields. We call this a raw or domain data warehouse. A domain data warehouse is dominated by one leading source system, and all other sources integrate into the domain DWH with respect to this leading source. For example, an S/4HANA system can be such a leading source; all other sources would then integrate into the related BW domain data warehouse with respect to the S/4HANA semantics and values. Defining InfoObjects is always necessary if you have to harmonize multiple equivalent sources – this is the well-known EDW case.

 

But LSA++ is more than just simplified data warehousing. It is an open architecture, allowing an evolutionary DWH approach. I call this LSA++ for logical data warehousing. It is a perspective complementary to the traditional LSA++ simplified data warehousing perspective: sources of any nature (operational sources, data lakes like Hadoop, or Open ODS data in-hubs) play a role equivalent to the data warehouse – they are a basis for analytics. The logical data warehouse, as described by Gartner, provides analytics and reporting on the original data as long as you can keep the service level agreements and cover the business requirements. You move data to the data warehouse only if the service requirements are violated or the business requirements cannot be fulfilled.

 

LSA++ supports the logical DWH approach via an agile Virtual Data Mart Layer. Agility comes from two modeling options in BW on HANA. First, it comes through CompositeProviders, allowing you to combine any BW provider with HANA models from outside BW, wherever they are located. Second, it comes through Open ODS Views of type fact, master, or text, allowing you to define dimensional models on any data outside of BW defined by tables, SQL views, or HANA views. You always have the possibility to switch a virtual Open ODS View source to a persisted BW Advanced DSO, as suggested by the logical DWH approach. Switching from virtual to persisted means that BW on HANA generates the data flow from the remote source to an Advanced DSO, and the Advanced DSO itself, based on the definition of the Open ODS View.

If you look at the virtual models on the source systems, as offered by HANA Live or S/4HANA Analytics, BW can then be considered an extension offering additional services – historic data, business-consistent views, et cetera – that the source cannot offer. The transition from the source model to BW can then happen in a very dynamic way.

 

 

Q: On SAP HANA you can define normalized DWH models like Data Vault directly. Data Vault is quite popular with Dutch companies. Do you think Data Vault modeling is a valid alternative for SAP ERP data?


We call our team SAP EDW Product Management, so that implies that we cover both BW on HANA and, as we call it, HANA native data warehouse modeling. A native HANA data warehouse can be modeled using any known DWH model (e.g. dimensional, 3NF, data vaults). That means freedom, but also threat. Threats especially for customers who decide about their future DWH architecture based on sentiments and a BW perception that is driven by the past. We find all kinds of BW perceptions in the market: people who love it and, for whatever reason, people who dislike it. I have a quite good idea why people may dislike BW, but one thing is clear to me: sentiments are a bad advisor. With a bad perception of the traditional BW in mind, we already saw customers who tried to build a native HANA data warehouse for SAP Business Suite sources, saying: “we have an SAP source system, and other SAP tools like PowerDesigner and Data Services, so we are going to ‘vault it’.” To make a long story short: this finally ended up as a nightmare, as you have to rebuild all the semantics, associations and annotations natively. And it offers no business value, because with BW you get all this for free: BW knows these semantics because of the tight dictionary integration between SAP sources and BW.

In addition, Data Vault modeling assumes that you should always expect the worst from your sources. It assumes that source-model changes can happen at any time, and frequently, forcing you to change your DWH models and links and so on. But that is not the reality with SAP source systems. The SAP source models are in general pretty stable, which makes the dimensional BW model work very well. Vaulting SAP sources in general brings in complexity that cannot be justified.

 

 

Q: This is the case with standard SAP content. There is, however, not a single customer I know without quite a bit of customization in their SAP system. And this inability to adapt to these changes is a strong part of the criticism of BW.


Yes, you are right, and these customizations could not be modeled flexibly enough in the past. But this is no longer true with BW on HANA. With BW 7.40 SP8 we can now model a kind of dimensional satellite of a BW entity, using Advanced DSOs with Open ODS Views on top, or directly in a CompositeProvider. Let me give you an example: you have all the standard SAP attributes in your 0COSTCENTER InfoObject, and you have the requirement to model country-specific attributes, let's say for the UK only. Today you store these attributes in an Advanced DSO and define an Open ODS View of type master on top of it. In any Open ODS View of type fact, or in a CompositeProvider, you can then associate or join the different views of the entity cost center, regardless of whether they come from an InfoObject like 0COSTCENTER or from Open ODS Views.

From my point of view, this will solve most modeling challenges customers had with such scenarios in the past: you load attributes with different ownership independently, you create new attributes without impacting the existing model, and you associate different attribute views and can even create dedicated authorizations.

 

Overall: I don't believe that it makes sense to create data vaults for SAP ERP operational systems, because it adds complexity but no value. BW on HANA is pretty flexible in modeling the volatility of SAP source models caused by customization. On the other hand, if you have multiple, highly volatile non-SAP sources, you are free to create a data vault DWH natively on SAP HANA. The result would then be a hybrid architecture between BW and a native HANA DWH.

 

 

This blog is the first half of the interview I conducted with Juergen Haupt. The second half will be posted shortly!


Handling Negative Numbers in Open Hub


Scenario:

While sending data to external systems via Open Hub or the Analysis Process Designer (APD), any negative numbers in the extracted data have the minus sign positioned after the number in the output file. However, we want the minus sign to be positioned before the number.

 

 

Reason:

When the data is copied from the OpenHub interface, it is copied from the display in the internal format directly to the string that is finally written to the file. In the internal display of a negative number, the minus sign is displayed after the number.

 

SAP Note: 856619


For example, -9.21 is stored in SAP as 9.21- (minus sign at the end). This also gets transferred to the external system or file.



Workaround:

Changing the OpenHub field setting from Internal to External does not help. However, we can add two simple lines of code to get around this issue.

For the field where you expect a negative sign to occur, put the code below in a field routine or end routine of the transformation. In my scenario it is 0NETVAL_INV (Net Value Invoiced).

 

 

    Field Routine:

__________________________________________________________

      IF SOURCE_FIELDS-NETVAL_INV IS NOT INITIAL.
        RESULT = SOURCE_FIELDS-NETVAL_INV.
        IF RESULT < 0.
          SHIFT RESULT RIGHT CIRCULAR.   "move the trailing '-' to the front
          CONDENSE RESULT NO-GAPS.       "close the gap left behind
        ENDIF.
      ENDIF.




End Routine:

________________________________________________________________________________

LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>
    WHERE NETVAL_INV < 0.
  SHIFT <RESULT_FIELDS>-NETVAL_INV RIGHT CIRCULAR.
  CONDENSE <RESULT_FIELDS>-NETVAL_INV NO-GAPS.
ENDLOOP.



You can create multiple variations of this code depending on your scenario and the number of fields whose sign you want to change. The key ABAP statement here is SHIFT ... RIGHT CIRCULAR, which rotates the trailing minus sign around to the front of the field. You then CONDENSE the field to delete the gap between the minus sign and the number.



Alternatively, we can create a function module in the BW system by copying CLOI_PUT_SIGN_IN_FRONT from ECC (not sure why this is not available in BW by default) and then call this function module. However, as the code is very simple, I prefer to put it in a routine.
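If you do go the function-module route, the call could look like the sketch below. This is only a sketch: ZCLOI_PUT_SIGN_IN_FRONT is a hypothetical name for your copy, and I assume the copy keeps the original's single CHANGING parameter VALUE - check the copied interface in your system.

DATA lv_amount(17) TYPE c.

lv_amount = '9.21-'.                     "internal format: sign at the end

* ZCLOI_PUT_SIGN_IN_FRONT = assumed copy of ECC's CLOI_PUT_SIGN_IN_FRONT
CALL FUNCTION 'ZCLOI_PUT_SIGN_IN_FRONT'
  CHANGING
    value = lv_amount.

* lv_amount now holds '-9.21'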

Handling Before Aggregation (Exception Aggregation) in BW 7.X


Applies to:       SAP BW 7.X


Summary:      

 

This document gives a clear picture of how to handle (calculate) 'Before Aggregation' (an option that was available in BW 3.x) at BEx query level, since the option is obsolete in BW 7.x.


Author:           Ravikumar Kypa

Company:       NTT DATA Global Delivery Services Limited

Created On:    24th July 2015


Author Bio  

Ravikumar is a Principal Consultant at NTT DATA from the SAP Analytics Practice.

 

Scenario:

 

In some reporting scenarios we need to get the number of records from the InfoCube and use that counter in calculations. This is easy to achieve in a BW 3.x system, as SAP provides a ready-made option at BEx query level (Before Aggregation, in the Enhance tab of a Calculated Key Figure).

 

This option is obsolete in BW 7.x, so we can no longer use it; however, SAP provides a different mechanism to achieve the same result at BEx level.

 

The illustration below explains this scenario:

 

Data:

 

0DOC_NUMBER | MAT_DOC | MATERIAL | MAT_ITEM | PLANT | CALDAY   | PRICE | UNIT
12346       | 23457   | ABC      | 3        | 2000  | 20150102 | 30    | USD
12346       | 23458   | ABC      | 3        | 2000  | 20150102 | 30    | USD
12347       | 23459   | DEF      | 4        | 3000  | 20150103 | 40    | USD
12347       | 23459   | DEF      | 4        | 4000  | 20150103 | 40    | USD
12345       | 23456   | XYZ      | 1        | 1000  | 20150101 | 25    | USD
12345       | 23456   | XYZ      | 2        | 1000  | 20150101 | 25    | USD

 

The user wants to see the Price of each material in the report, and the format of the report is as shown below:

 

MATERIAL | Price / Material
ABC      | 30 USD
DEF      | 40 USD
XYZ      | 25 USD

 

 

If we execute the report in BEx, the prices are aggregated and the result is incorrect.

But expected output is:

 

MATERIAL | PRICE OF EACH UNIT
ABC      | 30 USD
DEF      | 40 USD
XYZ      | 25 USD

 

We have to calculate this using a counter at BEx query level. In BW 3.x we can achieve it by using the 'Before Aggregation' option in the Enhance tab of the Calculated Key Figure (Counter).

 

Steps to achieve this in BW 3.X system:

 

Formula to calculate Price of each material is Price / Counter.

 

Create a new Calculated Key Figure (ZCOUNTER1) and give it the value 1.

 


 

In the properties of the Calculated Key Figure, click on the Enhance tab.

 


 

Keep the Time of Calculation set to 'Before Aggregation'.

If we don't select this option, the counter value will be 1 and the output will be wrong.

So we have to calculate the price of each material with the Before Aggregation property (now the counter value will be 2).

 

Now the query output shows the correct values.

Now we can hide the columns 'Price' and 'Counter (Before Aggr)' and deliver the report to the customer as required.

This option is obsolete in BW 7.x. If you create a Calculated Key Figure in the same way (give it the value 1) and deselect the 'After Aggregation' checkbox in the Aggregation tab, you will get the following message:

 

Info: Calculated Key Figure Counter (Before Aggr) uses the obsolete setting ‘Calculation Before Aggregation’.

 

Steps to achieve this in BW 7.X system:

 

Create a Calculated Key Figure and give it the value 1.

 

If we use this counter directly in the calculation, the counter value aggregates to 1 and the output is wrong.

We can achieve the 'Before Aggregation' behavior in a BW 7.x system by following the steps below:

 

Create Counter1 with the fixed value 1.

 

In the Aggregation tab select the following options:

          Exception Aggregation: Counter for All Detailed Values

          Characteristic: 0MAT_DOC (because we have different material documents, 23457 and 23458, for material ABC)

Now the query output gives the correct value for material ABC, but the other two materials are still incorrect, as their rows share the same material documents (refer to the sample data).

 

Now create Counter2.

In the Aggregation tab:

Exception Aggregation: Summation

Ref. Characteristic: 0MAT_ITEM (because we have different material items, 1 and 2, for material XYZ)

 

Now the output shows correct values for materials ABC and XYZ, but we still get a wrong value for material DEF, as its rows share the same material documents and material items.

 

Now create Counter3.

 

    Exception Aggregation: Summation

    Ref. Characteristic: 0PLANT (because we have different plants, 3000 and 4000, for material DEF)

 

Now create a new formula, Price of Each Material:

Price of Each Material = Price / Counter3

Now the output shows the correct price for each material.

 

Now we can hide the columns 'Price' and 'Counter3' and show only the price of each material in the output.

Likewise, we have to analyze the data in the InfoCube, identify the characteristics over which aggregation happens at BEx query level, and use them as reference characteristics in the Calculated Key Figure; this gives us the counter (the number of records aggregated).
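To see the arithmetic behind this counter technique outside of BEx, here is a small standalone ABAP sketch (illustrative only, not query code; all names are made up). COLLECT mimics the query's default aggregation: it sums PRICE and COUNTER per MATERIAL, so PRICE / COUNTER restores the single-record price:

TYPES: BEGIN OF ty_rec,
         material TYPE c LENGTH 10,   "character field = COLLECT key
         price    TYPE p DECIMALS 2,  "summed by COLLECT
         counter  TYPE p DECIMALS 2,  "summed by COLLECT
       END OF ty_rec.

DATA: lt_data  TYPE STANDARD TABLE OF ty_rec,
      lt_agg   TYPE STANDARD TABLE OF ty_rec,
      ls_rec   TYPE ty_rec,
      lv_price TYPE p DECIMALS 2.

* Detail records as in the sample data; each record carries counter = 1
ls_rec-counter = 1.
ls_rec-material = 'ABC'. ls_rec-price = 30.
APPEND ls_rec TO lt_data. APPEND ls_rec TO lt_data.  "two rows for ABC
ls_rec-material = 'DEF'. ls_rec-price = 40.
APPEND ls_rec TO lt_data. APPEND ls_rec TO lt_data.  "two rows for DEF
ls_rec-material = 'XYZ'. ls_rec-price = 25.
APPEND ls_rec TO lt_data. APPEND ls_rec TO lt_data.  "two rows for XYZ

* Aggregate per material: the price doubles, but so does the counter
LOOP AT lt_data INTO ls_rec.
  COLLECT ls_rec INTO lt_agg.
ENDLOOP.

* Dividing restores the price of a single record
LOOP AT lt_agg INTO ls_rec.
  lv_price = ls_rec-price / ls_rec-counter.
  WRITE: / ls_rec-material, lv_price.  "ABC 30, DEF 40, XYZ 25
ENDLOOP.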

The future of the SAP EDW: Interview with Juergen Haupt - Part II


This is the second part of the interview with Juergen Haupt by Sjoerd van Middelkoop. The first part of the interview, covering LSA++, native development and S/4HANA topics, is available here.

This blog is also available on my company website, and is cross-posted here to reach the SCN audience as well.

 

Q: BW is now more open to non-SAP sources than it was before. Is the main development focus now on supporting any data model and source in BW modeling, or is the focus more on hybrid scenarios?

We are continuously improving and extending BW's possibilities with respect to supporting non-SAP data as well. That means we no longer force the use of InfoObjects, but enable straightforward modeling of persistencies using fields, defining data warehouse semantics with Open ODS Views on top of them. This allows customers to respond faster to business requirements. Next to that, we also support landscapes where customers use SAP HANA as a kind of data in-hub or landing pad, replicating data from any source to HANA and modeling natively on that data. From the LSA++ perspective these areas are like an externally managed extension of the Open ODS Layer.

 

When it comes to data warehousing, the customer can integrate these data virtually with BW data, or stage them via generated data flows to BW to apply more sophisticated services.

Q: How did BW on HANA and LSA++ change the way you see BW development?

BW on HANA now provides the option to work a lot more with a bottom-up approach. It means that you can improve your models and your data in an evolutionary way, starting for example with fields that define Advanced DSOs in the Open ODS Layer, and ending up with Advanced DSOs that also leverage InfoObjects to provide advanced consistency and query services. These Advanced DSOs are shielded by virtual Open ODS Views, allowing a smooth transition between these stages - if a transition is necessary at all. This flexibility is highly important for integrating non-SAP data in a step-by-step manner. I think this complements the proven but slow top-down approach in BW projects as we have seen them in the past.

Q: Talking about development in the current landscape: customers that migrated to HANA a while ago and are remodeling their current LSA structures find it hard to keep up with developments in BW and the new functionality rapidly becoming available. How can customers develop and remodel without investing in objects that will soon become obsolete?

This is a real challenge. Not a technology challenge, but more of an architectural and functional challenge. What will my landscape of the future look like? What are the functions and features that provide the most value for my business users? I would advise customers to think of their EDW strategy from a holistic point of view. That means, for example, you can't see BW on HANA without considering SAP's operational analytics strategy. Overall, BW is not an island any longer; BW is now more tightly connected than ever to other systems. So we have to think about the future role of all of our systems and what services they should provide.

So when customers think about going to BW on HANA, normally the first question is: “Do we go greenfield or are we going to migrate?” This is a very understandable question, but I fear it does not go far enough.

Q: Most customers, when at the decision point to migrate or go greenfield, consider their current investments and make sure these investments will not be undone.

Yes. Very often, but not always. Over the last while we have seen a steady increase in customers choosing a greenfield approach. They see that introducing BW on HANA is more than just a new version that you upgrade to. They are aware that BW on HANA means running and developing solutions on a really new platform, and they do not want to bring their 'old-style' solutions onto this new platform. So these customers go for a greenfield approach. This approach does not, of course, prevent you from transporting in some of the existing content that you want to keep and may have invested heavily in.

Q: This point of view is quite the opposite of SAP's 'non-disruptive' marketing strategy.

What does non-disruptive mean? It is non-disruptive when it comes to migrating existing systems - yes. But does a 'non-disruptive' strategy really change the world into a better one? If you look at BW on HANA as just a new, better version, a non-disruptive migration would be your choice. But if you have the idea that BW on HANA is something really new, that it allows you to create value you never could offer before, and that it enables you to rethink the services you want your BW data warehouse to provide, bringing it to a new level, then you cannot be non-disruptive.

It's like driving into the Netherlands from Germany: I only notice it by chance, because the road signs are different - the border has disappeared, at least for car drivers. Compared to the EDW, I would say that the border we used to have between the EDW and its sources has always been a very strict one. These borders between systems are more and more disappearing. And this has a lot of influence on all systems and the solutions we build in the future. And this is related again to disruption: I can continue to work like I did ten years ago, still stopping at borders that have disappeared in the meantime.

Q: With the Business Suite on HANA and S/4HANA, embedded BW is seen by many as a viable option to use instead of a standalone BW system. In what cases should customers opt for an embedded scenario?

The question here is a matter of your approach. Let's assume you start with S/4HANA Analytics or HANA Live; you can do everything with these virtual data models as long as business requirements and SLAs are met. Then the question is what to do when we need data warehousing services. Why not use the embedded BW? Yes, especially for smaller-sized companies, this will be an option. There are limitations, of course. I think the rule of thumb here is that an embedded BW should not exceed 20% of the OLTP data volume. With the HANA platform it is a matter of managing workload.

But there is also a certain danger with this approach, and it does not derive just from the amount of BW data you should not exceed. The bigger the company is, the more likely you will have more than a single source. In this case you should start thinking about an EDW strategy from the very beginning; otherwise you will sooner or later start to move data back and forth between these embedded BWs. So the most important thing when making decisions about using the embedded BW is to have a long-term vision of the future DWH landscape. In this context it is important to mention that with SAP HANA SPS9 we have the multi-tenant DB feature, which allows us to run multiple databases on the same appliance. So sooner or later we will see BW on HANA and S/4HANA running on different HANA DBs but on the same appliance, meaning there will no longer be a boundary between BW on HANA and S/4HANA; you can share data and models between them directly. This would offer the benefits of the embedded BW, but with higher flexibility and scalability.

Q: So what you are saying is that embedded BW is an option for now in some cases, but with HANA multi-tenant DB in the near future and multi-source requirements stand-alone BW is the better option?

That depends on your situation and what you are developing. For smaller clients and simple landscapes I can imagine embedded scenarios functioning very well, even in the future. For most other scenarios, yes, I think stand-alone BW with multi-tenant DB is the better option.

Thank you very much for this interview!

You are most welcome!

 

This concludes my two-part blog of the interview I conducted with Juergen Haupt. I would like to thank Mr. Haupt for his time and cooperation, SAP for their cooperation in getting this published, and the VNSG for getting Mr. Haupt to Eindhoven.

Archive administration for the archiving object BWREQARCH in BW7.X


Applies to:

 

SAP BW NW 7.x. For more information, visit the Business Intelligence home page for Data Warehouse Management.

 

Author:          MP Reddy

Company:      NTT DATA Global Delivery Services Private Limited

Created On:   4th August 2015


Author Bio  

 

Pitchireddy Mettu is a Principal Consultant at NTT DATA Global Delivery Services Private Limited from the SAP Analytics Practice.


Summary

In a BI system, the volume of data increases constantly. Constant changes to business and legal requirements mean that this data must remain available for longer. Since keeping a large volume of data in the system affects performance and increases administration effort, we recommend that BI administrators apply data archiving where needed.

Using the archive administration for the archiving object BWREQARCH, we execute an archiving program. This program writes the administration data of the selected requests to an archive file. After archiving, we execute a deletion program that deletes the administration data from the database.

By archiving request administration data we make sure that the request administration data does not impair the performance of our system.

 

This guide gives a step-by-step demo of how to archive BW requests using BWREQARCH and how to reload archived requests when needed.

 

 

Prerequisites

In our case the settings are already maintained for the object that we are archiving on the SAP BW system.

 

The objects that are archived are:

  • BWREQARCH
  • IDOC
  • WORKITEM

 

The checklist for archiving new objects is as follows.

Process Flow

Before using an archiving object for the first time

 

  • Check the archiving-object-specific Customizing settings:
  • Is the file name correctly assigned?
  • Are the deletion program variants maintained? (Note that the variants are client-specific.)
  • Is the maximum archive file size correctly set?
  • Should the deletion program run automatically?

 

File locations


File locations must be set in order to write the archive files. Currently the files are located in /local/data/storage/<SYSID>/archiving.

The section below explains the setup of the files. Unless specified by the support lead DO NOT CHANGE ANY OF THE SETTINGS.

 

This can be accessed via AL11.

 

File Names

 

The file names are generated automatically by the archiving tool. The current setup is as follows:

For BWREQARCH the customizing shows that ARCHIVE_DATA_FILE is used.


 

The ARCHIVE_DATA_FILE has the mask <PARAM_1>_<PARAM_3>_<DATE>_<TIME>_<PARAM_2>.ARCHIVE and uses the logical path ARCHIVE_GLOBAL_PATH.

PARAM_1 = Type of system

PARAM_2 = Sequence number

PARAM_3 = Archiving Object

 


Archiving Object | PARAM_1 | PARAM_2 | PARAM_3
BWREQARCH        | BW      | 0       | BWREQARCH

The ARCHIVE_GLOBAL_PATH is set to /local/data/storage/<SYSID>/archiving.
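Putting the mask and the path together: a write run could, for example, produce a file named /local/data/storage/BWP/archiving/BW_BWREQARCH_20150804_093015_0.ARCHIVE (the system ID, date, time and sequence number here are hypothetical values; PARAM_1 = BW and PARAM_3 = BWREQARCH come from the table above).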


In AL11 you can view the files created.


Archive BW Requests

When archiving requests there are two steps to perform.

 

  1. Write the archive files
  2. Delete the data from the tables

 

Write the archive files


Start transaction SARA and fill in the archiving object BWREQARCH.

 

First we need to write the archive files. In order to do this, we create a variant defining what needs to be archived.

When installing a regular job we need to make the timing relative to the date. By default we archive requests older than 4 months.

image7.JPG

For a test run, set the processing option to Test Mode; for actual archiving, set it to Production Mode. Keep 'With DTP Requests' turned on, as we also archive the DTP request information. 'Min. Number Requests' stays at 1000, meaning archiving only actually starts if there are more than 1000 requests to archive.

 

Manual writing with SARA

 

When running the manual jobs, make sure that the user has the correct authorization to run archive jobs.

Create the archive file once all the settings (spool parameters and date) have been maintained.

When the write is executed, you can find the jobs running/finished.

There will be two jobs: one with SUB in the name, which schedules the various write job(s), and another with WRI in the name.

 

Delete the data from the tables

 

Once the requests have been written to the archive files, the data can be deleted from the tables.

The next sections provide details on how to delete the BW request data once the archive files have been written.

 

Manual deletion with SARA

Return to SARA and select Delete.

Click on 'Archive Selection' to select the file whose entries should be deleted from the table.

When the file is selected, enter the start date for the deletion of the entries from the table. Periodic scheduling only makes sense when the write job is dynamically deleting the requests. As we are deleting once a month on a four-month basis, we can also schedule the deletion periodically.

Be aware that the delete should not run during the write job and that there is enough time between the two activities.

When all settings (spool parameters and scheduled date) are maintained, you can run the deletion job.

In the job overview you should see a deletion job running/finished.


Reloading Archived Requests

When the requests are archived, they remain accessible when needed.

There are three ways of reloading the requests:

 

  1. Reload the individual request from the DTP monitoring screen or InfoPackage monitoring screen
  2. Reload a complete archiving job (T-Code SARA)
  3. Reload multiple requests (T-Code RSREQARCH)

 

For reloading complete archive jobs or multiple requests, look further down in the document. The following section shows how individual requests can be reloaded from the archive.

 

Reload the individual request

 

Before retrieving the request from the archive, ask yourself whether the detailed data is really needed. The header information of the request is still visible; only the detailed messages are archived.

 

InfoPackage Requests

 

When displaying the request in the monitoring screen, a popup informs the user that the request is archived and asks whether to retrieve the details from the archive. By default, do not reload the details when looking at an archived request unless it is really necessary.

For an archived request, only the header information is displayed. When the request is reloaded, the data becomes visible again.

 

DTP Requests

 

When displaying a DTP request there is no popup, unlike with InfoPackage requests. On the DTP overview screen you will see that the DTP request is archived.

 

By default the monitor does not show the details.

 

From the menu you can reload the request from the archive.

 

When the data is reloaded, the details become visible again.

 

Reload a complete archiving job:

When you want to reload a complete archiving job, you have to do so within transaction SARA.

 

Run SARA and enter the archiving object. The Reload function then becomes available in the menu.

Select a variant. Only two variants are necessary, as the selection screen only offers the choice between Test Mode and Production Mode.

 

Select an archive file.

 

Maintain the start date and spool parameters as in the previous sections and run the reload activity.

 

The job log will show if the reload has finished.

 

A job with REL in the name will run.


 

Related Content


http://scn.sap.com/docs/DOC-30539

http://scn.sap.com/thread/1944220


http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/70ef2f01-641a-2d10-c59e-cf6d9e673926?QuickLink=index&overridelayout=true&47197395624637

 

https://help.sap.com/saphelp_nw73/helpdata/en/49/9500f79c1d311fe10000000a421938/content.htm

 

For more information, visit the Business Intelligence home page for Data Warehouse Management.

Simulation Workbench: Part 1 - Transformation Rules


Simulation Workbench - Introduction

Data loaded to BW might go through complex transformations, and in many cases it is necessary to debug these transformations. BW provides standard simulation functionality that can be improved with a better interface - this is what the Simulation Workbench is for. Its major benefits are:

  • Simplified data selection;
  • Improved data presentation;
  • One stop shop for all simulations;
  • Simple navigation to transformations and targets;
  • Variants creation.

The first part of the blog explains how to use the Simulation Workbench with Transformation Rules (the BW 7.x data staging type), and the second part covers Transfer Rules and Update Rules (the BW 3.x data staging type).

 

 

Simulation Workbench - Installation

Import the attached ZSWB SAPLink nugget, then activate the Z_SIMULATION_WORKBENCH_BW_3X and Z_SIMULATION_WORKBENCH_BW_7X programs along with their report texts, screens and statuses.

 

 

Transactional Data Transformation Rule Simulation

Launch the Simulation Workbench using transaction ZSWB. It will take you to the initial selection screen.

Keep the defaults and select the Target, Source and DTP. The Simulation Workbench assists you with value helps every step of the way.

To limit the simulation to a specific PSA request, select it from the drop-down.

Select the request from the popup.

Press F8 (Execute) on the next screen.

On the next screen, the request selection can be refined by providing additional selection criteria.

Let's skip the additional selection for now and just press F8 (Execute and Display of Log) to execute the simulation.

The transformation rules are simulated, and both the After Extraction and After Transformation temporary storages are displayed one underneath the other.

The temporary storage field headers can be switched between descriptions and technical names (in contrast to the SAP standard functionality) to help you identify the required field.

 


Navigation to Transformation

From the initial screen you can navigate to the Transformation, for example to set a breakpoint on a specific transformation rule.

Select the 'Created on' transformation rule.

Copy the ABAP code line of the rule.

Open the transformation's generated program.

Look up the copied ABAP code line and set a breakpoint there.

Navigate all the way back to the Simulation Workbench selection screen, press F8 (Execute), and then press F8 (Execute and Display of Log) on the Debug Request popup.

Voilà - the simulation stops at the desired transformation rule.

 

 

Navigation to Data Target

From the Simulation Workbench selection screen you can also navigate to the simulation target, for example to find a request for simulation.

Copy the request ID.

 

 

Simulation Across All PSA Requests

If the Request field is left empty on the Simulation Workbench selection screen, all requests are selected for simulation. Use this option with caution, because even well-maintained PSA tables can hold lots of records.

 

Simulation with Performance Optimized Request Selection

The Optimize Request Selection option on the Simulation Workbench selection screen can improve simulation performance. Check the Optimize Request Selection checkbox and press F8 (Execute).

Provide an additional selection on the Debug Request screen and press F8 (Execute and Display Log). The Simulation Workbench will then limit the request selection based on the additional selection provided.

 

 

Master Data Simulation

Transformation Rules for master data can also be simulated. Select the Target, Source, DTP and Request, and uncheck the Expert Mode checkbox to skip the Debug Request popup.

Press F8 (Execute) to simulate.

 

Texts Transformation Rules Simulation

Transformation Rules for texts can also be simulated. Select the Target, Source, DTP and Request, and uncheck the Expert Mode checkbox to skip the Debug Request popup.

Press F8 (Execute) to simulate.

 

SAP Standard Output Format

The Simulation Workbench also supports the SAP standard output format, for comparing simulation results (if in doubt).

 

 

The second part of the blog: Simulation Workbench: Part 2 - Transfer Rules and Update Rules.

Simulation Workbench: Part 2 - Transfer Rules and Update Rules



In the first part of the blog, Simulation Workbench: Part 1 - Transformation Rules, I introduced the Simulation Workbench and demonstrated how it helps simulate Transformation Rules. Now I will explain how it can be used to simulate Transfer Rules and Update Rules. To begin, start the Simulation Workbench by calling transaction ZSWB, then click on the BW 3.x button.

It will take you to another screen for Transfer Rule / Update Rule simulation.

 

Multiple Request Simulation with Additional Selection

Select the source system, DataSource and Target, and leave the Request field empty. Again, the Simulation Workbench assists you with value helps every step of the way. Press F8 (Execute) to continue.

On the next popup, provide a selection to limit the PSA data.

As you can see on the next screen, sales data for material HT-1000 from multiple requests was pre-selected. Select all records.

Then click on Transfer (F5) to proceed with the simulation.

On the next screen the Transfer Structure data records are displayed; click on the Communication Structure (Shift+F4) button to simulate the Transfer Rules.

On the next screen the Communication Structure records are displayed; click on the Data Target (Shift+F6) button to simulate the Update Rules.

 

Navigation to Transfer Rules

On the Simulation Workbench selection screen, click on the Transfer Rules (Ctrl+F2) button to navigate to the Transfer Rules, for example to set a breakpoint on a specific transfer rule.

On the next screen, choose Extras -> Display Program from the menu.

Choose the transfer program on the popup.

Set a breakpoint on the 'Role' transfer rule.

 

Navigation to Update Rules

On the Simulation Workbench selection screen, click on the Update Rules (Ctrl+F1) button to navigate to the Update Rules, for example to set a breakpoint on a specific update rule.

On the next screen, choose the display of the activated program from the menu.

On the next screen, set a breakpoint on the 'Created at' update rule.

 

Master Data Transfer Rules / Update Rules Simulation

Similarly to transactional data, the Simulation Workbench can simulate master data.

 

Texts Transfer Rules / Update Rules Simulation

Similarly to transactional data, the Simulation Workbench can simulate texts.

 

SAP Standard Output Format

The Simulation Workbench also supports the SAP standard output format, for comparing simulation results (if in doubt).


How to Optimize Data Loads to Write-Optimized DSO


Scenario


Write-optimized DSOs were first introduced in SAP BI 7.0 and are generally used in the staging layer of an Enterprise Data Warehouse, as data loads to them are quite fast. This is because they do NOT have three different tables but only one, the active table. Data loaded to a write-optimized DSO therefore goes straight to the active table, saving the activation time. These DSOs save further time by NOT involving the SID generation step.


However, write-optimized DSOs have one shortcoming: during data loads, all data packages are processed serially rather than in parallel, even if parallel processing is defined in the batch manager settings of the DTP. This results in cumbersomely long loading times when loading large numbers of records (typical in full dump-and-reload scenarios).


The goal of this paper is to demonstrate how to enable parallel processing of data packages while loading to write-optimized DSOs, thereby optimizing load time.



Step By Step Solution


Parallel processing of data packages while loading to a write-optimized DSO can be enabled by defining the semantic key in the Semantic Groups of the DTP.


Open the DTP of the write-optimized DSO and, in the Extraction tab, click on the Semantic Groups button.

In the popup screen, select the fields which form the semantic key of the DSO.

Make sure that parallel processing is enabled by going to the menu Goto > Settings for Batch Manager and defining the number of processes for parallel processing.

Now if you run this DTP you will notice that the data packages are processed in parallel and there is a significant improvement in the data load timings. Please note that the improvement will be conspicuous in loads involving large data sets.


Load Time Comparison


In a test without the semantic key defined in the DTP, it took around 17 hours to load about 23.5 million records into a write-optimized DSO.

With the semantic key defined in the DTP, loading the same number of records took just a little over one and a half hours (11 times faster!).


Further Reading


1. Write Optimized DSO

http://wiki.scn.sap.com/wiki/display/BI/Write+Optimized+DSO


2. SAP Note 1007769: Parallel updating in write-optimized DSO

Using Selective deletion in a Process Chain with a filter from the TVARVC table


SCOPE: This document will explain “Selective deletion of infocube data in a process chain” based on fiscal period.

 

SCENARIO: The InfoCube is loaded in such a way that we need to delete the previous month's (previous fiscal period's) data first and then reload it in order to accommodate the changed data. The InfoCube contains many delta requests, and the previous month's data can be in any of them. In such a case the 'Delete Overlapping Requests from InfoCube' process type in the process chain doesn't work, and we need an ABAP program so that we can automate this in a process chain.

 

STEPS FOLLOWED:

  1. Create a selection variable in the TVARVC table.
  2. Use transaction DELETE_FACTS to generate an ABAP program, and create a variant for the selective deletion of the InfoCube.
  3. Create an ABAP program to populate the dynamic variable in the TVARVC table.
  4. Add the ABAP programs from step 3 and step 2 to the process chain. The process chain will have the following sequence of variants:

     ABAP program from step 3 --> ABAP program from step 2 --> DTP to load the previous month's data.

 

STEP 1:

Create a selection variable in the TVARVC table.

For this, go to transaction STVARV → Select Options tab → click on Create and create a variable.

I have created ZPREV_FISC_PERIOD.


An entry will be created in the TVARVC table.

 

STEP 2:

Go to transaction DELETE_FACTS. Provide the InfoCube name for which you want to perform a selective deletion and select the radio button 'Generate Selection Program'.

Click on Execute.

A program will be generated automatically.

In order to follow naming conventions, you can rename the program to a 'Z' program.

Create a variant for the generated program.

I created the variant ZSELECT_DEL.

After you click on Create, the variant's selection screen appears.

 

Go to the Fiscal Period field and press F1; the field help appears.

Click on Technical Information.

 

Note down the 'Screen Field' name.

Now click on the Attributes button.

Give the variant a description and switch the technical names on.

 

Go to the fiscal period screen field (C023 in this case) and enter 'T' as the selection variable type (T: Table Variable from TVARVC - the only option available).

 

Also provide the name of the variable (which we created in Step 1) and save the variant.

 

STEP 3:


Create an ABAP program to populate the dynamic variable in the TVARVC table.

We need logic that determines the current fiscal period from the system date and the fiscal year variant, and then writes the previous fiscal period into the variable.

 

REPORT z_pre_month_tvarvc.

DATA: w_yy      TYPE t009b-bdatj,          "fiscal year
      w_month   TYPE t009b-poper,          "fiscal period
      w_fiscper TYPE /bi0/oifiscper,       "fiscal year/period YYYYPPP
      lt_tvarvc TYPE STANDARD TABLE OF tvarvc,
      gs_tvarvc TYPE tvarvc.

CONSTANTS: c_s TYPE rsscr_kind VALUE 'S'.

* Determine the fiscal period and year from the system date
CALL FUNCTION 'DATE_TO_PERIOD_CONVERT'
  EXPORTING
    i_date  = sy-datum
    i_periv = 'V3'                         "fiscal year variant
  IMPORTING
    e_buper = w_month
    e_gjahr = w_yy.

* If the fiscal month is January (period 10 under variant V3), the
* previous fiscal month is December (period 9) and the year is reduced;
* otherwise simply subtract one period
IF w_month = 10.
  w_month = 9.
  w_yy    = w_yy - 1.
ELSE.
  w_month = w_month - 1.
ENDIF.

* 0FISCPER is stored as YYYYPPP, so the year comes first
CONCATENATE w_yy w_month INTO w_fiscper.

gs_tvarvc-low  = w_fiscper.
gs_tvarvc-sign = 'I'.
gs_tvarvc-opti = 'EQ'.
gs_tvarvc-name = 'ZPREV_FISC_PERIOD'.      "variable name in TVARVC table
gs_tvarvc-type = c_s.

APPEND gs_tvarvc TO lt_tvarvc.

MODIFY tvarvc FROM TABLE lt_tvarvc.

FREE: gs_tvarvc.


Save and activate the program. You can also execute it manually to check whether the TVARVC table is updated with the required record.
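As a worked example (assuming April is period 001 under your V3 configuration): if the program runs on 15 August 2015, DATE_TO_PERIOD_CONVERT returns period 005 and fiscal year 2015, the ELSE branch computes period 004, and the TVARVC variable ZPREV_FISC_PERIOD is filled with the low value 2015004.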

STEP 4:


Add the ABAP programs from step 3 and step 2 to the process chain.

Reference:

Using Selective Deletion in Process Chains, by Surendra Kumar Reddy Koduru
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/603a9558-0af1-2b10-86a3-c685c60071bc?QuickLink=index&overridelayout=true&39569533701241

Using Selective deletion in a Process Chain with a filter from the TVARVC table, by R. Beeren
http://scn.sap.com/community/data-warehousing/bw/blog/2013/11/20/using-selective-deletion-in-a-process-chain-with-a-filter-from-the-tvarvc-table

Setup automatic email trigger for ABAP Short Dumps in the system


1. Business Scenario

 

 

As a system performance improvement measure, the requirement is to send an email to the team with a list of ABAP Short Dumps that occur in the system during the day.

The email needs to be sent at 12:00 AM, and should contain a list of all the short dumps that have occurred in the system during the previous day.

 

 

2. Create a variant for the ABAP Runtime Error program RSSHOWRABAX

 

  1. Go to SE38 and enter the program name RSSHOWRABAX. Select the Variants radio button and click Display.

        In the next screen, enter the variant name and click Create.

 

     2. This takes you to the Parameters screen, where we add the parameters that we want our variant to contain.

 

     3. Click on Attributes and enter the description.

 

     4. Since our requirement is to execute the variant for the previous day, we select the following options for 'Date' in the 'Objects for Selection Screen' section:

                  - Selection Variable = 'X' (X: Dynamic Date Calculation (System Date))


                  - Name of Variable: for the variable 'Current date +/- ??? days', select 'I' as the I/E indicator and 'EQ' as the option.


                       

                  - Upon clicking 'OK', the next screen allows you to enter the value for the Date Calculation Parameters. Enter '-1' here, since we need the previous day's data.

 


 

     5. Upon saving this, you are redirected to the Parameters screen, where the Date field is auto-populated with the previous day's date.

 

3. Define a Job to schedule the above report output as an email

 

     1. Go to System → Services → Jobs → Define Job.

 

     2. Enter the Job Name and Job Class.

 

     3. Go to Step. Here, enter the program name RSSHOWRABAX and the variant created above ZSHORT_DUMPS.

          In the user field, you can enter the User ID with which you want the email to be triggered.

 

          img12.jpg

 

          In our case, we needed it to be executed with ALEREMOTE. Click on Save.

 

               img13.jpg

 

     4. This step sends the mail to the SAP Business Workplace. To forward the mail to external email addresses, we use the program RSCONN01 (SAPconnect: Start Send Process) with the variant SAP&CONNECTINT.

 

          img14.jpg

 

     5. Upon clicking Save, you can see both the steps in the overview.

 

          img15.jpg

 

     6. Next, enter the recipient details using the 'Spool List Recipient' button. You can select from internal users, distribution lists and external addresses.

 

          img16.jpg

 

     7. Next, select the start condition that triggers this job. In our case, we have defined it to trigger daily at the first second of the day.

 

          img17.jpg
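
As an alternative to defining the job manually as in the steps above, the same two-step job can be created from ABAP with the standard batch job API (JOB_OPEN / SUBMIT VIA JOB / JOB_CLOSE). A hedged sketch, reusing the variant and user names from this example; the job name Z_SHORT_DUMP_MAIL is illustrative, and batch administration authorization is assumed:

* Hedged sketch: create the daily short-dump mail job programmatically.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_SHORT_DUMP_MAIL',
      lv_jobcount TYPE tbtcjob-jobcount.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname          = lv_jobname
  IMPORTING
    jobcount         = lv_jobcount
  EXCEPTIONS
    cant_create_job  = 1
    invalid_job_data = 2
    jobname_missing  = 3
    OTHERS           = 4.

IF sy-subrc = 0.
  "Step 1: RSSHOWRABAX with the dynamic-date variant, executed as ALEREMOTE
  SUBMIT rsshowrabax USING SELECTION-SET 'ZSHORT_DUMPS'
         USER 'ALEREMOTE'
         VIA JOB lv_jobname NUMBER lv_jobcount
         AND RETURN.

  "Step 2: push the mail out via SAPconnect
  SUBMIT rsconn01 USING SELECTION-SET 'SAP&CONNECTINT'
         VIA JOB lv_jobname NUMBER lv_jobcount
         AND RETURN.

  "Release the job: start at 00:00:01 and repeat daily
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      sdlstrtdt = sy-datum
      sdlstrttm = '000001'
      prddays   = 1
    EXCEPTIONS
      OTHERS    = 1.
ENDIF.

Note that the spool list recipient from step 6 is not covered by this sketch; JOB_CLOSE also accepts a recipient object parameter for that purpose, whose construction is omitted here for brevity.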

 

4. Final Output

 

An email will be received daily at 12:00 AM, from ALEREMOTE. The Subject of the email will be as follows:

      Job <Job Name>, Step 1

 

          img18.jpg

The attachment displays the runtime error information as shown below. This is the same information that we get in ST22.

      The information below was obtained in the mail triggered at 12:00 AM on 8/12/2015; hence it lists all the ABAP short dumps that occurred on 8/11/2015.

 

     img19.jpg

Procedure for Deletion of Master Data in SAP BI


Master data deletion is not as straightforward as the deletion we normally do for an InfoCube or DSO. Master data may have dependencies on transaction data, and in that case it cannot simply be deleted. We first have to delete the related master data values from the transaction data providers (InfoCube, DSO or InfoObject) and only then delete the master data itself. In this blog I would like to share the procedure we follow for master data deletion in our project.

 

 

  1. Identify the master data you would like to delete. Here I would like to delete the data for one employee from the master data.

 

 

2. Select all three records and delete them. After pressing the delete button, click Save; a pop-up appears asking whether to delete Without SIDs or With SIDs. Always select With SIDs and save.

 

 

3. If the master data is used somewhere in the transactional data providers (InfoCube, DSO or InfoObject), it will not be deleted; a message pops up saying "No Master Data was deleted".


 

4. This means the master data is still used somewhere. To check where, go to transaction SLG1 and pass the parameters as shown in the screenshot below.

 

 

After passing the above parameters, execute; now you can see the details of where the master data is still used.

 

 

5. The step above shows that the master data is used in one of the InfoCubes. Usually it shows an InfoCube, DSO or InfoObject. If the master data is used in an InfoCube, perform a selective deletion of the master data in that InfoCube. This deletes only the fact table data; to delete the data from the dimension table, go to transaction RSRV, pass the required parameters and execute the test. This deletes the dimension table entries as well.
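
If the same selective deletion has to be repeated often, it can also be scripted with the standard function module RSDRD_SEL_DELETION. A minimal sketch under these assumptions: the InfoCube name ZPAY_C01 and the characteristic 0EMPLOYEE are placeholders for your own objects, and the type and parameter names follow commonly documented examples (verify the interface in SE37 on your release):

* Hedged sketch: selective deletion of one employee from an InfoCube.
* 'ZPAY_C01' and '0EMPLOYEE' are placeholder names - replace with yours.
DATA: l_thx_sel TYPE rsdrd_thx_sel,
      l_sx_sel  TYPE rsdrd_sx_sel,
      l_s_range TYPE rsdrd_s_range,
      l_t_msg   TYPE rs_t_msg.

l_s_range-sign   = 'I'.
l_s_range-option = 'EQ'.
l_s_range-low    = '00001234'.          "employee to delete
l_s_range-keyfl  = 'X'.                 "value is in internal (key) format
APPEND l_s_range TO l_sx_sel-t_range.
l_sx_sel-iobjnm = '0EMPLOYEE'.
INSERT l_sx_sel INTO TABLE l_thx_sel.

CALL FUNCTION 'RSDRD_SEL_DELETION'
  EXPORTING
    i_datatarget      = 'ZPAY_C01'
    i_thx_sel         = l_thx_sel
    i_authority_check = 'X'
    i_mode            = 'C'             "deletion mode used in common examples
  CHANGING
    c_t_msg           = l_t_msg.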

 

 

If the master data is used in a DSO, perform a selective deletion of the master data in that DSO; this deletes the data from both the active data table and the change log table.

 

If the master data is used in an InfoObject, perform a selective deletion of the master data in that InfoObject and repeat steps 1 to 5 in case the master data of the current object is itself used somewhere in the target providers (InfoCube, DSO or InfoObject).

Error Handling x Semantic Groups


Recently I have seen a lot of problems and SCN discussions about the use of error handling and semantic groups on DTPs.


So I thought it would be a good idea to give a brief overview of the use of these features on DTPs.


The goal of this blog post is to provide generic information about the influence of START/END routines in a transformation on the processing mode of a Data Transfer Process (DTP) when loading a DataStore Object (DSO), and the technical reason behind it. In all cases it is assumed that a START routine, an END routine or both are used in the transformation connecting the source and the DSO target. The cases are broadly described below:


  • A1: Semantic group is not defined in the DTP, the 'Parallel Extraction' flag is checked, and error handling is switched off (i.e. either 'Deactivated' or 'No Update, No Reporting'): the processing mode of the DTP is 'Parallel Extraction and Processing'.


  • A2: Semantic group is not defined in the DTP, the 'Parallel Extraction' flag is not checked, and error handling is switched off: the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.


  • A3: Semantic group is not defined in the DTP and error handling is switched on (i.e. 'Valid Records Update, No Reporting (Request Red)' or 'Valid Records Update, Reporting Possible (Request Green)'): the processing mode of the DTP is 'Serial Extraction and Processing of Source Package'. The system also prompts the message 'Use of Semantic Grouping'.


  • B1: Semantic group is defined in the DTP and error handling is switched off: the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'. The system also prompts the message 'If possible don't use semantic grouping'.


  • B2: Semantic group is defined in the DTP and error handling is switched on: the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.



Any DSO allows the aggregation 'OVERWRITE' along with 'MAX', 'MIN' and 'SUM', i.e. non-cumulative update behaviour. It is therefore very important that the chronological sequence of the records stays intact for the update, because the 'principle of last wins' must be maintained. Consequently, if error handling is switched on and there are errors in the update, the erroneous records that are filtered out and written to the error stack must be in chronological sequence.


The solution for the cases described above are:


  • In cases A1 and A2 error handling is switched off, so a single error terminates the load and the erroneous records are not stored anywhere. Therefore, depending on whether 'Parallel Extraction' is checked or not, the processing mode of the DTP is 'Parallel Extraction and Processing' or 'Serial Extraction, Immediate Parallel Processing' respectively.


  • In case B1 you have defined a semantic group, which ensures that records with the same keys defined in the group end up in one package. But since error handling is switched off, this contradicts the purpose of the semantic group, as you do not want erroneous records written to the error stack; so the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'. The system also prompts you to remove the semantic group, since it otherwise makes no sense.


  • In case A3 no semantic group is defined but error handling is switched on, so erroneous records need to be written to the error stack and the chronological sequence must be maintained. As no semantic group keys are defined, it cannot be ensured that records with the same keys end up in the same package, so the processing mode of the DTP is 'Serial Extraction and Processing of the Source Package'. The system also prompts you to use a semantic group so that records with the same keys are guaranteed to land in the same package.


  • In case B2 a semantic group is defined and error handling is switched on. Records with the same keys defined in the semantic group are guaranteed to be in one package, and if errors occur, the chronological sequence is maintained when the erroneous records are written to the error stack after being sorted by the keys. So the processing mode of the DTP is 'Serial Extraction, Immediate Parallel Processing'.
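
As a quick reference, the five cases collapse into the following matrix:

Case | Semantic group | Error handling | Resulting processing mode
A1 | not defined | off, 'Parallel Extraction' checked | Parallel Extraction and Processing
A2 | not defined | off, 'Parallel Extraction' unchecked | Serial Extraction, Immediate Parallel Processing
A3 | not defined | on | Serial Extraction and Processing of Source Package
B1 | defined | off | Serial Extraction, Immediate Parallel Processing
B2 | defined | on | Serial Extraction, Immediate Parallel Processing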


I hope this blog helps you with further questions about the use of error handling and semantic groups on DTPs.


Regards,

Janaina


Distinct Value Analysis for se16 Table Columns


Introduction

 

In the past I had to analyse PSA tables and, to be more specific, I had to find out the distinct values of a table column in order to know which specific values had been extracted from the source system. This requirement cannot be met with transaction se16 directly. As a workaround I exported the table data to Excel and used the "Remove duplicates" option. This worked in the beginning, but with large PSA tables that workaround was no longer practicable.

For SAP BW InfoProviders this requirement can be handled with transaction LISTCUBE, but from my point of view it is too complicated and time-consuming.

 

So I developed a solution for this requirement in SAPGUI which was inspired by the "Distinct values" option in SAP HANA Studio.

 

 

Requirement

 

A user-friendly tool to analyse the distinct values of a se16 table column and of any SAP BW InfoProvider.

 

 

Solution & Features


The attached report ZDS_DISTINCT_VALUES has two parameters for the table and column name. The parameter values are checked and analysed. If the table parameter is a SAP BW InfoProvider, the function "RSDRI_INFOPROV_READ" is used to extract the data; otherwise a generic ABAP SQL call is executed to get the distinct values. If the column parameter is empty or cannot be found in this table / InfoProvider, the list of possible columns for the table is returned.
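
The generic (non-InfoProvider) case boils down to a dynamic GROUP BY. A minimal sketch of that part, simplifying the value column to a fixed-length character field (the actual report derives the proper type from the dictionary):

* Minimal sketch: distinct values of one column with their occurrence count.
PARAMETERS: p_tab TYPE tabname   DEFAULT 'T000',
            p_col TYPE fieldname DEFAULT 'MANDT'.

TYPES: BEGIN OF ty_result,
         val TYPE c LENGTH 60,   "simplified generic value column
         cnt TYPE i,
       END OF ty_result.

DATA: lt_result TYPE STANDARD TABLE OF ty_result,
      ls_result TYPE ty_result,
      lv_fields TYPE string.

CONCATENATE p_col 'AS VAL COUNT( * ) AS CNT' INTO lv_fields SEPARATED BY space.

SELECT (lv_fields)
  FROM (p_tab)
  INTO CORRESPONDING FIELDS OF TABLE lt_result
  GROUP BY (p_col).

LOOP AT lt_result INTO ls_result.
  WRITE: / ls_result-val, ls_result-cnt.
ENDLOOP.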

 

The output is a table with the distinct values and their number of occurrences. If text values are available (InfoObject master data or domains), these are returned as well. For SAP BW master data the function "RSDDG_X_BI_MD_GET" is used, and for domains "DDIF_DOMA_GET".


Screen01.PNG

 

Screen02.PNG

 

Screen03.PNG

 

Conclusion

 

Feel free to use and extend the tool; contact me with any questions. Attention: MultiProviders are not supported.

Who raised this Event - SAP BW


This may sound very basic, but it can be useful to someone who doesn't know it yet; others, please ignore.

You may have a situation where event triggers are used in process chains and you find it difficult to identify which process exactly triggered a particular event. The figures below illustrate an example scenario and the method of finding it by digging into the related tables.

 

You have 2 chains,

1) Chain that raises an event trigger

2) Chain that receives the event

 

e1.PNG

e2.PNG

 

If you need to find out the parent chain that raised the event "EVENT1" (in this case), you can use the tables below to get the information.

 

1) Table RSPCVARIANT

2) Input LOW = "EVENT1", TYPE = "ABAP" (basically, it's the event parameter you want to search for)

3) Copy the value from field VARIANTE

4) Table RSPCCHAIN

5) Input VARIANTE = value copied from step (3)

6) The CHAIN_ID field will give you the technical ID of the process chain that raised this event.
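
The two lookups can also be combined into a single join. A small sketch, assuming you only care about the active (OBJVERS = 'A') versions:

* Sketch: all chains whose event-raising step uses parameter EVENT1.
DATA lt_chain_ids TYPE STANDARD TABLE OF rspcchain-chain_id.

SELECT DISTINCT c~chain_id
  INTO TABLE lt_chain_ids
  FROM rspcvariant AS v
  INNER JOIN rspcchain AS c
    ON c~variante = v~variante
  WHERE v~low     = 'EVENT1'
    AND v~type    = 'ABAP'
    AND v~objvers = 'A'
    AND c~objvers = 'A'.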


BW-WHM* Released SAP Notes (Last 7 Days)


Hello Guys,

 

 

I would just like to share with you the BW-WHM* notes released in the last 7 days:

 

 

Component | Number | Description
BW-WHM-MTD-SRCH | 2152359 | BW search/input help for InfoObjects returns no results
BW-WHM-MTD-INST | 2142826 | Method INSTALL_SELECTION of class CL_RSO_BC_INSTALL uses selection subset of pr
BW-WHM-MTD-HMOD | 2211315 | External SAP HANA view: navigation attribute returns no values
BW-WHM-MTD-HMOD | 2217796 | External SAP HANA View with Nearline Storage: column store error: fail to creat
BW-WHM-MTD-CTS | 2204227 | Transport: Error RSTRAN 401 in RS_TRFN_AFTER_IMPORT due to obsolete TRCS instan
BW-WHM-MTD | 2217377 | IOBJ_PROP: fix CL_RSD_CHABAS_PROP->IF_RSD_CHA_PROP~GET_ATTRIBUTES
BW-WHM-DST-UPD | 2213337 | Update rule activation ends with dump MESSAGE_TYPE_X
BW-WHM-DST-TRF | 2216264 | 730SP15: Transformation not deactivated if the InfoObject/DSO used in lookup ru
BW-WHM-DST-TRF | 2212917 | SAP BW 7.40 (SP14) Rule type READ ADSO doesn't work as expected
BW-WHM-DST-TRF | 2214542 | SAP HANA Processing: BW 7.40 SP8 - SP13: HANA Analysis Processes and HANA Trans
BW-WHM-DST-TRF | 2215940 | SP35: Time Derivation in Transformation is incorrect
BW-WHM-DST-TRF | 2003029 | NW BW 7.40 (SP08) error messages when copying data flows
BW-WHM-DST-TRF | 2217533 | DBSQL_DUPLICATE_KEY_ERROR when transporting transformation
BW-WHM-DST-TRF | 2192329 | SAP HANA Processing: BW 7.50 SP00 - SP01: HANA Analysis Processes and HANA Tran
BW-WHM-DST-SRC | 2185710 | Delta DTP from ODP Source System into Advanced DataStore Object
BW-WHM-DST-SDL | 2126800 | P14; SDL; BAPI: Excel IPAK changes with BAPI_IPAK_CHANGE
BW-WHM-DST-PSA | 2196780 | Access to PSA / PSA / Error stack maintenance screen takes long time or dumps
BW-WHM-DST-PSA | 2217701 | PSA: Error in the report RSAR_PSA_CLEANUP_DIRECTORY_MS when run in the 'repair
BW-WHM-DST-PC | 2216236 | RSPCM scheduling issue due to missing variant
BW-WHM-DST-DTP | 2185072 | DTP on ODP source system: error during extraction
BW-WHM-DST-DTP | 2213529 | P18: MPRO/LPOA: DTP: Termination in CHECK_MPRO_REQUESTLIST
BW-WHM-DST-DTP | 2214682 | P35: PC: DTP: Monitor display dumps for skipped DTP
BW-WHM-DST-DS | 1923709 | Transport of BW source system dependent objects and transaction SM59
BW-WHM-DST-DS | 2038066 | Consulting: TSV_TNEW_PAGE_ALLOC_FAILED dump when loading from file
BW-WHM-DST-DS | 2154850 | Transfer structure is inactive after upgrade. Error message: mass generation: n
BW-WHM-DST-DS | 2218111 | ODP DataSource: Data type short string (SSTR)
BW-WHM-DST-DFG | 2216492 | Data flow editor appears in the BW Modeling Tools instead of in the SAP GUI
BW-WHM-DST-ARC | 2155151 | Archiving request in Deletion phase / Selective Deletion fails due existing sh
BW-WHM-DST-ARC | 2214688 | Short dump while NLS Archiving object activation
BW-WHM-DST-ARC | 2214892 | BW HANA SDA: Process Type for creating Statistics for Virtual Tables
BW-WHM-DST | 1839792 | Consolidated note on check and repair report for the request administration in
BW-WHM-DST | 2170302 | Proactive Advanced Support - PAS
BW-WHM-DST | 2075259 | P34: BATCH: Inactive servers are used - DUMP
BW-WHM-DST | 2176213 | Important SAP notes and KBAs for BW System Copy
BW-WHM-DST | 1933471 | Infopackage requests hanging in SAPLSENA or in SAPLRSSM / MESSAGE_TYPE_X or TIM
BW-WHM-DST | 2049519 | Problems during data load due to reduced requests
BW-WHM-DBA-SPO | 2197343 | Performance: SPO transport/activation: *_I, *_O, transformation only regenerate
BW-WHM-DBA-ODS | 1772242 | Error message "BRAIN290" Error while writing master record "xy" of characteris
BW-WHM-DBA-ODS | 2215989 | RSODSACTUPDTYPE - Deleting unnecessary entries following DSO activation
BW-WHM-DBA-ODS | 2209990 | SAP HANA: Optimization of SID processes for DataStore objects (classic)
BW-WHM-DBA-ODS | 2214876 | Performance optimization for DataStore objects (classic) that are supplied thro
BW-WHM-DBA-ODS | 2218170 | DSO SID activation error log displays a limit of 10 characteristic values
BW-WHM-DBA-ODS | 2217170 | 740SP14: 'ASSIGN_TYPE_CONFLICT' in Transformation during load of non-cumulative
BW-WHM-DBA-MPRO | 2218861 | 730SP15: Short dump 'RAISE_EXCEPTION' during creation of Transformation with so
BW-WHM-DBA-MD | 2172189 | Dump MESSAGE_TYPE_X in X_MESSAGE during master data load
BW-WHM-DBA-MD | 2216630 | InfoObject Master Data Maintenance - collective corrections for 7.50 SP 0
BW-WHM-DBA-MD | 2218379 | MDM InfoObject - maintain text despite read class
BW-WHM-DBA-IOBJ | 2215347 | A system dump occurs when viewing the database table status of a characteristic
BW-WHM-DBA-IOBJ | 2217990 | Message "InfoObject &1: &2 &3 is not active; activating InfoObject now" (R7030)
BW-WHM-DBA-IOBJ | 2213527 | Search help for units not available
BW-WHM-DBA-ICUB | 1896841 | Function: InfoCube metadata missing in interfaces
BW-WHM-DBA-ICUB | 2000325 | UDO - report about SAP Note function: InfoCube metadata missing in interfaces (
BW-WHM-DBA-HIER | 2211256 | Locks not getting released in RRHI_HIERARCHY_ACTIVATE
BW-WHM-DBA-HIER | 2215380 | Error message RH608 when loading hierarchy by DTP
BW-WHM-DBA-HIER | 2216696 | Enhancements to the internal API for hierarchies in BPC
BW-WHM-DBA-HCPR | 2210601 | HCPR transfer: Error for MetaInfoObjects during copy of queries
BW-WHM-DBA-COPR | 2080851 | Conversion of MultiProvider to CompositeProvider
BW-WHM-DBA-ADSO | 2215201 | ADSO: Incorrect mapping of RECORDTP in HCPR
BW-WHM-DBA-ADSO | 2215947 | How to Set Navigation Attributes for an ADSO or HCPR
BW-WHM-DBA-ADSO | 2218045 | ADSO partitioning not possible for single RANGE values
BW-WHM-DBA | 2218453 | 730SP15: Transaction RSRVALT is obsolete
BW-WHM | 1955592 | Minimum required information in an internal/handover memo

 

Regards,

Janaina

Are you facing deadlock issue while uploading master data attributes?


Sometimes you face issues in SAP BW which may drive you crazy, and this deadlock issue is one of them. I recently resolved this infamous dump, so I decided to share my experience with you all. Before any further delay, let me give you the system and database details.

 

Component/System | Value
SAP_BW | 740
Database System | MSSQL
Kernel Release | 741
Sup.Pkg lvl. | 230

 

Let me first explain what a deadlock is.

A database deadlock occurs when two processes lock each other's resources and are therefore unable to proceed. This problem can only be solved by terminating one of the two transactions; the database terminates one of them more or less at random.

Example:

Process 1 locks resource A.

Process 2 locks resource B.

Process 1 requests resource B exclusively (-> lock) and waits for process 2 to end its transaction.

Process 2 requests resource A exclusively (-> lock) and waits for process 1 to end its transaction.

Resources are, for example, table records that are locked by a modification or a select-for-update operation.

The following dump is expected when you upload master data attributes.

Dump1.jpg

Sometimes you might encounter this dump too.

Dump2.jpg

 

Solution:

To avoid this issue, please make sure that your DTP does not have semantic grouping switched on and that its processing mode is set to "Serially in the Background Process". To be on the safe side, I would recommend creating a new DTP with these settings.

 

 

Please let me know whether you find this blog helpful.

Model Drill Down Using Analysis Items in WAD

$
0
0

There is an OFB solution for modelling drill down using Analysis Items in WAD: pass the selection from the parent analysis item to the child one. But this solution has two major problems:

  • Bad performance (since there is no initial selection in the parent Analysis Item, it takes a long time to load the detailed data of the child analysis item);
  • An unintuitive interface (since there is no initial selection in the parent Analysis Item, it is not clear that the parent analysis item should limit the data of the child one).

In this blog I will explain how to model drill down with an initial selection, to make the analysis application both responsive and intuitive (some JavaScript knowledge will be required).

    Once my analysis application is refreshed it looks like this

 

Analysis Application.jpg

This is what is required to make the initial selection work. Let's look at each step in detail.

 

Initially hide the child Analysis Item

ANALYSIS_ITEM_2_Properties.jpg

 

Find the first Product from the parent Analysis Item

Add a Data Provider Info Item for DP_1 (used by the 1st Analysis Item)

DATA_PROVIDER_INFO_ITEM_1.jpg

Define a JavaScript function to read the first Product.

 

function Get_Product() {
  // Read the XML rendered by the Data Provider Info Item for DP_1
  var xml = document.getElementById('DATA_PROVIDER_INFO_ITEM_1').innerHTML;

  // WAD templates run in Internet Explorer, so MSXML via ActiveX is available
  var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
  xmlDoc.async = false;
  xmlDoc.loadXML(xml);

  // The first member on the first axis is the first Product of DP_1
  var Product = xmlDoc.getElementsByTagName("AXIS")[0]
                      .getElementsByTagName("MEMBER")[0]
                      .getAttribute("text");
  return Product;
}

 

 

Select the first row in the parent Analysis Item

Define a JavaScript function to select the first row in the 1st Analysis Item
 

function Select_Row() {
  var tableModel;
  var element = document.getElementById('ANALYSIS_ITEM_1_ia_pt_a');
  if (typeof(element) != 'undefined' && element != null) {
    // BW 7.3 naming of the interactive pivot table
    tableModel = ur_Table_create('ANALYSIS_ITEM_1_ia_pt_a');
  }
  else {
    // BW 7.0 naming
    tableModel = ur_Table_create('ANALYSIS_ITEM_1_interactive_pivot_a');
  }

  // Row index 2 is the first data row in this layout
  var oRow = tableModel.rows[ 2 ];
  sapbi_acUniGrid_selectRowCellsInternal( tableModel, oRow, true, null );
}

 

Limit the child Analysis Item data to the first Product in the parent Analysis Item and unhide the child Analysis Item

Define a JavaScript function that executes a command sequence of two commands:

 

function Filter_N_Unhide( Product ){

//Note: information can be extracted using the parameter 'currentState'

// and 'defaultCommandSequence'. In either case create your own object

// of type 'sapbi_CommandSequence' that will be sent to the server.

// To extract specific values of parameters refer to the following

// snippet:

//  var key = currentState.getParameter( PARAM_KEY ).getValue();

//  alert( "Selected key: " + key );

//

// ('PARAM_KEY' refers to any parameter's name)

//Create a new object of type sapbi_CommandSequence

var commandSequence = new sapbi_CommandSequence();

/*

  * Create a new object of type sapbi_Command with the command named "SET_SELECTION_STATE_SIMPLE"

    */

var commandSET_SELECTION_STATE_SIMPLE_1 = new sapbi_Command( "SET_SELECTION_STATE_SIMPLE" );

/* Create parameter TARGET_DATA_PROVIDER_REF_LIST */

var paramTARGET_DATA_PROVIDER_REF_LIST = new sapbi_Parameter( "TARGET_DATA_PROVIDER_REF_LIST", "" );

var paramListTARGET_DATA_PROVIDER_REF_LIST = new sapbi_ParameterList();

// Create parameter TARGET_DATA_PROVIDER_REF

var paramTARGET_DATA_PROVIDER_REF1 = new sapbi_Parameter( "TARGET_DATA_PROVIDER_REF", "DP_2" );

paramListTARGET_DATA_PROVIDER_REF_LIST.setParameter( paramTARGET_DATA_PROVIDER_REF1, 1 );

  // End parameter TARGET_DATA_PROVIDER_REF!

paramTARGET_DATA_PROVIDER_REF_LIST.setChildList( paramListTARGET_DATA_PROVIDER_REF_LIST );

commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramTARGET_DATA_PROVIDER_REF_LIST );

 

/* End parameter TARGET_DATA_PROVIDER_REF_LIST */

 

/* Create parameter RANGE_SELECTION_OPERATOR */

var paramRANGE_SELECTION_OPERATOR = new sapbi_Parameter( "RANGE_SELECTION_OPERATOR", "EQUAL_SELECTION" );

var paramListRANGE_SELECTION_OPERATOR = new sapbi_ParameterList();

// Create parameter EQUAL_SELECTION

var paramEQUAL_SELECTION = new sapbi_Parameter( "EQUAL_SELECTION", "MEMBER_NAME" );

var paramListEQUAL_SELECTION = new sapbi_ParameterList();

// Create parameter MEMBER_NAME

var paramMEMBER_NAME = new sapbi_Parameter( "MEMBER_NAME", Product );

paramListEQUAL_SELECTION.addParameter( paramMEMBER_NAME );

  // End parameter MEMBER_NAME!

paramEQUAL_SELECTION.setChildList( paramListEQUAL_SELECTION );

paramListRANGE_SELECTION_OPERATOR.addParameter( paramEQUAL_SELECTION );

  // End parameter EQUAL_SELECTION!

paramRANGE_SELECTION_OPERATOR.setChildList( paramListRANGE_SELECTION_OPERATOR );

commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramRANGE_SELECTION_OPERATOR );

 

/* End parameter RANGE_SELECTION_OPERATOR */

 

/* Create parameter CHARACTERISTIC */

var paramCHARACTERISTIC = new sapbi_Parameter( "CHARACTERISTIC", "D_NW_PRID" );

commandSET_SELECTION_STATE_SIMPLE_1.addParameter( paramCHARACTERISTIC );

 

/* End parameter CHARACTERISTIC */

 

// Add the command to the command sequence

commandSequence.addCommand( commandSET_SELECTION_STATE_SIMPLE_1 );

/*

  * End command commandSET_SELECTION_STATE_SIMPLE_1

    */

/*

  * Create a new object of type sapbi_Command with the command named "SET_ITEM_PARAMETERS"

    */

var commandSET_ITEM_PARAMETERS_2 = new sapbi_Command( "SET_ITEM_PARAMETERS" );

/* Create parameter ITEM_TYPE */

    var paramITEM_TYPE = new sapbi_Parameter( "ITEM_TYPE", "ANALYSIS_ITEM" );
    commandSET_ITEM_PARAMETERS_2.addParameter( paramITEM_TYPE );

 

    /* End parameter ITEM_TYPE  */

/* Create parameter INIT_PARAMETERS */

var paramINIT_PARAMETERS = new sapbi_Parameter( "INIT_PARAMETERS" );

    var paramListINIT_PARAMETERS = new sapbi_ParameterList();
    commandSET_ITEM_PARAMETERS_2.addParameter( paramINIT_PARAMETERS );

 

// Create parameter VISIBILITY

var paramVISIBILITY = new sapbi_Parameter( "VISIBILITY", "VISIBLE" );

paramListINIT_PARAMETERS.addParameter( paramVISIBILITY );

  // End parameter VISIBILITY!

paramINIT_PARAMETERS.setChildList( paramListINIT_PARAMETERS );

/* End parameter INIT_PARAMETERS  */

 

/* Create parameter TARGET_ITEM_REF */

var paramTARGET_ITEM_REF = new sapbi_Parameter( "TARGET_ITEM_REF", "ANALYSIS_ITEM_2" );

commandSET_ITEM_PARAMETERS_2.addParameter( paramTARGET_ITEM_REF );

 

/* End parameter TARGET_ITEM_REF */

 

// Add the command to the command sequence

commandSequence.addCommand( commandSET_ITEM_PARAMETERS_2 );

/*

  * End command commandSET_ITEM_PARAMETERS_2

    */

//Send the command sequence to the server

    return sapbi_page.sendCommand( commandSequence );

}

 

 

Call all onload JavaScript functions

Define a JavaScript function that calls all of the above and attach it to the BODY onload event:


function initial_selection( )  {

Select_Row();
Filter_N_Unhide(Get_Product());

};

 

        </head>

        <body onload="initial_selection();" >

            <bi:QUERY_VIEW_DATA_PROVIDER name="DP_1" >

 

 

See the attached EPM_DEMO Web Application template for complete implementation details (rename it to EPM_DEMO.bisp before uploading to WAD).

Need Support to Extract data from SAP TCODE automatically in Excel


Hi,

 

Please help me to extract data from an SAP transaction code (front end) automatically using Excel VBA code.

 

SAP "Script Recording and Playback" is disabled on the server.


Regards,

Satish

Syntax Errors - Message no. RG102

$
0
0

Dear All,

 

 

In the last months I have identified several incidents reporting syntax errors in BW objects (Message no. RG102).

The ones that appear most frequently are related to DSO and transformation activation:

 

1. Syntax error in GP_ERR_RSODSO_ACTIVATE, row xxx (-> long text)

Message no. RG102

 

2. Syntax error 'GP_ERR_RSTRAN_MASTER_TMPL'

Message no. RG102

 

 

For these errors the following notes were created:

 

Syntax error 1:

 

2100403 - Syntax error in GP_ERR_RSODSO_ACTIVATE during activation of DSO

 

 

Syntax error 2:

 

2152631 - 730SP14: Syntax error during activation of Transformations

2124482 - SAP BW 7.40(SP11) Activation failed for Transformation

1946031 - Syntax error GP_ERR_RSTRAN_MASTER_TMPL during activation of transformation

1933651 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for rule type "Time Distribution"

1919235 - "Syntax error in routine" of a migrated transformation/during migration

1889969 - 730SP11:Syntax error in GP_ERR_RSTRAN_MASTER_TMPL for RECORDMODE

1816350 - 731SP8:Syntax errors in routines or Assertion failed during activation of transformation

1762252 - Syntax error in GP_ERR_RSTRAN_MASTER_TMPL

 

 

Regards,

Janaina
