Sunday, October 17, 2010

Useful Transactions and Notes for NetWeaver 7.0

RSRD_ADMIN - Broadcasting Administration - Available in the BI system for the administration of Information Broadcasting.
CHANGERUNMONI - Using this Tcode we can monitor the status of the attribute change run.
RSBATCH - Dialog and Batch Processes. The following BI background management functions are available under Batch Manager:
  • Managing background and parallel processes in BI
  • Finding and analyzing errors in BI
  • Reports for BI system management
RRMX_CUST - Make the setting directly in this transaction to determine which BEx Analyzer version is called by RRMX.
Note: 970002 - Which BEx Analyzer version is called by RRMX?
RS_FRONTEND_INT - Use this transaction (field QD_EXCLUSIVE_USER) to block new frontend components from migrating to the 7.0 version.
Note: 962530 - NW04s - How to restrict access to Query Designer 2004s.
WSCONFIG - Use this transaction to create, test and release a Web Service definition.
WSADMIN - Web Services Administration - Use this transaction to display and test the endpoint.
RSTCO_ADMIN - Use this transaction to install basic BI objects and to check whether the installation has been carried out successfully. If the installation status is red, restart the installation by calling transaction RSTCO_ADMIN again and check the installation log.
Note 1000194 - Incorrect activation status in transaction RSTCO_ADMIN.
Note 1039381 - Error when activating the content Message no. RS062 (Error when installing BI Admin Cockpit).
Note 834280 - Installing technical BI Content after upgrade.
Note 824109 - XPRA - Activation error in NW upgrade. The XPRA installs technical BW Content objects that are necessary for the productive use of the BW system. (An error occurs during the NetWeaver upgrade in the RS_TCO_Activation_XPRA XPRA; the system ends the execution of the method with status 6.)
RSTCC_INST_BIAC - For activating the Technical Content for the BI admin cockpit
Run report RSTCC_ACTIVATE_ADMIN_COCKPIT in the background
Note 934848 - Collective note - (FAQ) BI Administration Cockpit
Note 965386 - Activating the technical content for the BI admin cockpit
Attachment for report RSTCC_ACTIVATE_ADMIN_COCKPIT source code
Terminations and errors when activating Technical Content objects:
Note 1040802 - Terminations occur when activating Technical Content Objects
RSBICA - BI Content Analyzer - Check programs to analyze inconsistencies and errors of custom-defined InfoObjects, InfoProviders, etc. With the central transaction RSBICA, you can schedule the delivered check programs for the local system or for a remote system via an RFC connection. The results of the check programs can be loaded to the local or remote BI systems to get a single point of entry for analyzing the BI landscape.
RSECADMIN - Transaction for maintaining new authorizations. Management of Analysis Authorizations.
Note 820123 - New Authorization concept in BI.
Note 923176 - Support situation authorization management BI70/NW2004s.
RSSGPCLA - For the regeneration of RSDRO_* Objects. Set the status of the programs belonging to program classes "RSDRO_ACTIVATE", "RSDRO_UPDATE" and "RSDRO_EXTRACT" to "Generation required". To do this, select the program class and then activate the "Set statuses" button.
Note 518426 - ODS Object - System Copy, migration
RSDDBIAMON - BI Accelerator - Monitor with administrator tools.
  • Restart BIA server: restarts all the BI accelerator servers and services.
  • Restart BIA Index Server: restart the index server.
  • Reorganize BIA Landscape: If the BI accelerator server landscape is unevenly distributed, redistributes the loaded indexes on the BI accelerator servers.
  • Rebuild BIA Indexes: If a check discovers inconsistencies in the indexes, delete and rebuild the BI accelerator indexes.
RSDDSTAT - For maintenance of the statistics properties for BEx Queries, InfoProviders, Web Templates and Workbooks.
Note 964418 - Adjusting ST03N to new BI-OLAP statistics in Release 7.0
Note 934848 - Collective Note (FAQ) BI Administration Cockpit.
Note 997535 - DB02 : Problems with History Data.
Note 955990 - BI in SAP NetWeaver 7.0: Incompatibilities with SAP BW 3.X.
Note 1005238 - Migration of workload statistics data to NW2004s.
Note 1006116 - Migration of workload statistics data to NW2004s (2).
DBACOCKPIT - This new transaction replaces the old transactions ST04 and DB02; it comes with Support Package 12 and is used for database monitoring and administration.
Note 1027512 - MSSQL: DBACOCKPIT  for basis release 7.00 and later.
Note 1072066 - DBACOCKPIT - New function for DB monitoring.
Note 1027146 - Database administration and monitoring in the DBA Cockpit.
Note 1028751 - MaxDB/liveCache: New functions in the DBA Cockpit.
BI 7.0 iView Migration Tool
Note 1128730 - BI 7.0 iView Migration Tool
Attachments for the iView Migration Tool:
  • bi migration PAR
  • bi migration SDA
  • BI iView Migration Tool
For Setting up BEx Web
Note 917950 - SAP NetWeaver2004s : Setting Up BEx Web
Handy Attachments for Setting up BEx Web:
  • Problem Analysis
  • WDEBU7 Setting up BEx Web
  • System Upgrade Copy
  • Checklist
To migrate BW 3.X query variants to NetWeaver 2004s BI:
Run report RSR_VARIANT_XPRA from transaction SE38 to fill the source table with the BW 3.X variants that need to be migrated to SAP NetWeaver 2004s BI. After upgrading the system to Support Package 12 or higher, run the migration report RSR_MIGRATE_VARIANTS to migrate the existing BW 3.x query variants to the new NetWeaver 2004s BI variant storage.
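As a minimal sketch only (the wrapper program name is made up; the two delivered report names are taken from above and are assumed to be directly executable, as the procedure describes), the two steps could also be chained in a small ABAP program instead of starting each report manually in SE38:

REPORT zvariant_migration_steps.

" Step 1: fill the source table with the BW 3.x variants to be migrated
SUBMIT rsr_variant_xpra AND RETURN.

" Step 2 (only after Support Package 12 or higher): migrate the variants
" into the new NetWeaver 2004s BI variant storage
SUBMIT rsr_migrate_variants AND RETURN.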
Note 1003481 - Variant Migration - Migrate all Variants
To check for missing elements and repair the errors, run report ANALYZE_MISSING_ELEMENTS.
Note 953346 - Problem with deleted InfoProvider in RSR_VARIANT_XPRA
Note 1028908 - BW Workbooks MSA: NW2004s upgrade loses generic variants
Note 981693 - BW Workbooks MSA: NW2004s upgrade loses old variants
For the Migration  of Web Templates from BW 3.X to SAP NetWeaver 2004s:
Note 832713 - Migration of Web Templates from BW 3.X to NetWeaver 2004s
Note 998682 - Various errors during the Web Template migration of BW 3.X
Note 832712 - BW - Migration of Web items from 3.x to 7.0
Note 970757 - Migrating BI Web Templates to NetWeaver 7.0 BI  which contain chart
Upgrade Basis Settings for SAP NetWeaver 7.0 BI
SAP NetWeaver 7.0 BI applications on a 32-bit architecture are reaching their limits. To build high-quality reports on SAP NetWeaver BI sources, an installation based on a 64-bit architecture is needed.
With the SAP NetWeaver 7.0 BI upgrade, change the basis parameter settings of the SAP kernel from the 32-bit to the 64-bit version. With the added functionality in the applications, BI reports on large data sets use a lot of memory, which adds load to the application server; the application server can even fail to start up because the sum of all buffer allocations exceeds the 32-bit limit.
Note 996600 - 32 Bit platforms not recommended for productive NW2004s apps
Note 1044441 - Basis parameterization for NW 7.0 BI systems
Note 1044330 - Java parameterization for BI systems
Note 1030279 - Reports with very large result sets/BI Java
Note 927530 - BI Java sizing
Intermediate Support Packages for NetWeaver 7.0 BI
A BI Intermediate Support Package consists of an ABAP Support Package and a Frontend Support Package, where the ABAP BI Intermediate Support Package is compatible with the delivered BI Java stack.
Note 1013369 - SAP NetWeaver 7.0 BI - Intermediate Support Packages
Microsoft Excel 2007 integration with NetWeaver 7.0 BI
Microsoft Excel 2007 functionality is now fully supported by NetWeaver 7.0 BI: advanced filtering, pivot tables, advanced formatting, the new graphic engine, currencies, query definition and data mart fields.
Note 1134226 - New SAP BW OLE DB for OLAP files delivery - Version 3
Full pivot table functionality is available to analyze NetWeaver BI data.
Microsoft Excel 2007 is integrated with NetWeaver 7.0 BI for building new queries, defining filter values, generating charts and creating top-n analyses from NetWeaver BI data.
Microsoft Excel 2007 now provides design mode, currency conversion and unit of measure conversion.

Thursday, October 7, 2010

Analyzing Throughput times of Process Chains

To analyze your process chains in SAP BI (but also other BI elements), SAP provides the program /SSA/BWT.
How to use:
Run the program in SA38 and select the first option:
Enter the technical name of the process chain to be analyzed:
In the following screen you can see all the details of the different process chain runs, and you can also compare different process chain runs with each other.
This program is very useful when you want to analyse why a certain process chain is taking longer than usual.
Try it out!

Extraction Logistics Datasources - T-Codes and Programs

An overview of DataSources, the transactions and the programs that fill the relevant setup tables (named MC*SETUP). With this handy table you can find the status of your current job or of previous initialization jobs through SM37.

Datasource                  T-Code    Program
2LIS_02*                    OLI3BW    RMCENEUA
2LIS_03_BX                  MCNB      RMCBINIT_BW
2LIS_03_BF                  OLI1BW    RMCBNEUA
2LIS_03_UM                  OLIZBW    RMCBNERP
2LIS_04* (orders)           OLI4BW    RMCFNEUA
2LIS_04* (manufacturing)    OLIFBW    RMCFNEUD
2LIS_05*                    OLIQBW    RMCQNEBW
2LIS_08*                    VTBW      VTRBWVTBWNEW
2LIS_08* (costs)            VIFBW     VTRBWVIFBW
2LIS_11_V_ITM               OLI7BW    RMCVNEUA
2LIS_11_VAITM               OLI7BW    RMCVNEUA
2LIS_11_VAHDR               OLI7BW    RMCVNEUA
2LIS_12_VCHDR               OLI8BW    RMCVNEUL
2LIS_12_VCITM               OLI8BW    RMCVNEUL
2LIS_12_VCSCL               OLI8BW    RMCVNEUL
2LIS_13_VDHDR               OLI9BW    RMCVNEUF
2LIS_13_VDITM               OLI9BW    RMCVNEUF
2LIS_17*                    OLIIBW    RMCINEBW
2LIS_18*                    OLISBW    RMCSNEBW
2LIS_45*                    OLIABW    RMCENEUB

Wednesday, October 6, 2010

Delta Management

Delta Load Management Framework Overview

CAF and SAP BW integration supports delta loading for DataSources created by entity and application service extractor methods. When working with applications with large data volumes, it is logical to prevent long loading times and unnecessary locks on the database by only loading new or modified data records into SAP BW.

Features

Generic delta management works as follows:
 
1. A data request is combined with particular selection criteria in an InfoPackage and is to be extracted in delta mode.
2. The request is sent to the source system and is received there by the SAPI (service application programming interface) request broker.
3. Generic delta management is initiated before the data request is transferred to the extractor corresponding to the DataSource. It enhances the selection criteria of the request in accordance with the update mode of the request. If the delta-relevant field is a timestamp, the system adds a time interval to the selection criteria: delta management can take the lower limit from the last extraction, while the upper limit is taken from the current time, for example the application server date and time (SY-DATUM, SY-UZEIT) minus a safety margin.
4. The enhanced request is transferred to the extractor. The update mode is ‘translated’ by generic delta management into selection criteria. For this reason, the update mode is first set to full.
5. At the end of the extraction, the system informs generic delta management that the pointer can now be set to the upper limit of the previously returned interval.
You can find a description of this transfer process in the figure below.
image
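As a rough illustration of step 3 only (a minimal sketch under assumptions, not the actual SAPI implementation; the safety margin values are made up), the selection interval for a timestamp delta field could be derived from the stored pointer and the current application server time roughly like this:

DATA: lv_pointer      TYPE timestamp,   " upper limit of the previous extraction (stored delta pointer)
      lv_upper_limit  TYPE timestamp,
      lv_lower_limit  TYPE timestamp,
      lv_safety_upper TYPE i VALUE 300, " safety interval upper limit, e.g. 5 minutes (assumed value)
      lv_safety_lower TYPE i VALUE 60.  " safety interval lower limit, e.g. 1 minute (assumed value)

" The current application server time is the basis for the new upper limit
GET TIME STAMP FIELD lv_upper_limit.

" Upper limit = current time minus the upper safety margin, so that records
" still being posted at extraction time are not lost
lv_upper_limit = cl_abap_tstmp=>subtractsecs( tstmp = lv_upper_limit
                                              secs  = lv_safety_upper ).

" Lower limit = pointer of the last extraction minus the lower safety margin,
" accepting that some records are extracted twice (harmless for overwriting targets)
lv_lower_limit = cl_abap_tstmp=>subtractsecs( tstmp = lv_pointer
                                              secs  = lv_safety_lower ).

" The interval [lv_lower_limit, lv_upper_limit] is added to the selection
" criteria passed to the extractor; after a successful extraction the
" pointer is set to lv_upper_limit.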

Structure

Delta Fields
The delta-relevant field of the extract structure meets one of the following criteria:
  • The field type is timestamp. New records that are to be loaded into BW using a delta upload each have a higher entry in this field than records that have already been loaded.
  • The field type is not timestamp. The same criterion applies for new records as in the case of a timestamp field. This case is only supported for SAP Content DataSources: at the start of the delta extraction, the maximum value to be read must be returned using a DataSource-specific exit.
You can use special data fields to achieve more reliable delta loading from different source systems. They are integrated into the delta management framework. They are:
  • Safety interval upper limit
  • Safety interval lower limit
Safety Interval Upper Limit
The upper limit for safety interval contains the difference between the current highest value at the time of the delta or initial delta extraction and the data that has actually been read. If this value is initial, records that are created during extraction cannot be extracted.                      
Example
A timestamp is used to determine the delta. The timestamp that was read last stands at 12:00:00. The next data extraction begins at 12:30:00. The selection interval is therefore 12:00:00 to 12:30:00. At the end of the extraction, the pointer is set to 12:30:00.
Now suppose a record is created at 12:25 but, because the transaction takes a while, is not saved until 12:35. It is therefore not contained in the extracted data and, because the pointer already stands at 12:30:00, it is not included in the subsequent extraction either.
To avoid this discrepancy, the safety margin between read and transferred data must always be longer than the maximum time the creation of a record for this DataSource can take (for timestamp deltas), or a sufficiently large interval (for deltas using a serial number).
Safety Interval Lower Limit
The lower limit for the safety interval contains the value that is subtracted from the highest value of the previous extraction to obtain the lower limit of the following extraction.
Example
A timestamp is used to determine the delta. The master data is extracted. Only images taken after the extraction are transferred and they overwrite the status in BW. Therefore, with such data, a record can be extracted into BW more than once without any problem.
Taking this into account, the current timestamp can always be used as the upper limit of an extraction, and the lower limit of the subsequent extraction does not immediately follow on from the upper limit of the previous one. Instead, it takes a value corresponding to this upper limit minus a safety margin.
This safety interval needs to be sufficiently large so that all values that already had a timestamp at the time of the last extraction, but had not yet been read (see the upper-limit example above), are contained in the next extraction. This implies that some records will be transferred twice. However, for the reasons outlined previously, this is irrelevant here.
With an additive delta update you should not fill the safety interval fields, as duplicate records would invariably lead to incorrect data.
Note
It is not necessary to set safety intervals for DataSources used in CAF and SAP BW integration.

Logistics Cockpit

SAP BW Business Content: valued, but not a wishing well...

As we can read from help.sap.com, SAP Business Information Warehouse provides pre-configured objects under the collective term "Business Content" (BC). These objects accelerate the implementation of SAP BW, since they deliver ready solutions, meeting the requirements for business information.
As regards our analysis area (BW extraction tools), starting from the official SAP definition and just from working for some period on the system, it's evident that BC (and its datasources) means ready-to-run built-in extractors, good (and growing) business coverage within SAP environments (the BW Service API is available as a plug-in for all R/3 systems, for BW itself and therefore also for APO, mySAP ERP components and industry solutions), transactional and master data coverage, less implementation effort and cost, and sophisticated delta handling.
And you will say "This is a dreamland !".
I'm sorry, my dear friend, I hate to be a killjoy but, wake up and welcome to reality...
Although the BC and the related extraction technology have reached a significant coverage of every business area, there are still a lot of reasons (or, better, a lot of situations in which you are compelled) to enhance existing extractors or even develop entirely custom ones: the need to extract customer-specific data, to build customer extensions for BC datasources that don't support specific information required by the customer, and so on.
Besides, we can’t forget about modified SAP systems in some areas with specific customizing settings or simply with custom fields added to standard tables.
Now the spontaneous question is: what happens if a standard logistic datasource, as provided in its standard (ready-to-use) configuration, doesn't completely meet our data model requirements?
In other words: we go to the RSA5 transaction screen (Installation of DataSources from Business Content) and, surfing through the application component hierarchy, we find our candidate datasource 2LIS_11_VASCL (since, for example, the functional analysis requires a set of information belonging to the sales order schedule line level); afterwards, we double-click on it, we inspect the field list and - dash it! - we don't see a specific field we need (e.g. AUDAT, the document date)!
What do I have to do?

Extraction cockpit technique: let's go back a little...

image
Fig.1: LC Delta Process for Sales Order Schedule Lines


With the LC, several data structures are delivered and, for each level of detail, there exists an extract structure as well as a datasource (that already represents a BW extract view).
When you create and save a sales order (as other transactional tasks), the document is processed in the memory and then stored into application (and database) tables.
In the LC extraction technique (see Fig.1) we have at our disposal different LIS communication structures (like MCVBAK, MCVBAP, MCVBEP and so on for sales orders) that we can decide to use for our reporting purposes while the application is running and during memory processing (in a separate memory partition; for details refer to the LOGISTIC COCKPIT DELTA MECHANISM weblog series).
To be more precise, every extract structure is related to one or more communication structures (and for every communication structure involved an include is provided by the standard; see Fig.2): for the sales order schedule line extract structure we have MCVBAK, MCVBAP, MCVBEP, MCVBKD, MCVBUK and MCVBUP, whose components you can see from SE11.
image
Fig.2: 2LIS_11_VASCL Include mapping


Keep in mind that here there is no need for any LIS knowledge, because these LIS structures are involved only from a memory-processing point of view and no subsequent update into LIS tables is performed.

In search of the lost field (as Proust said...)

Now let's come back to our little example and try to understand which procedure has to be followed when, as in our situation, we need a field not provided in the ready-to-run configuration.
2LIS_11_VASCL is the standard LC datasource to extract order schedule lines related information. MC11VA0SCL represents its linked extract structure.
Remember that it's possible to enhance it, but you can't create new extract structures (on the same standard datasource).
In the LC there are also some events mentioned (shown below the extract structure), but they don't have any customizing options. They are there just to give you some kind of transparency about when our structure will be updated, but they are not of any relevance from the customizing point of view.


  • Among existing fields from the available communication structures

  • Within LC (see Fig.3) a tool is provided that enables you to add fields from the LIS communication structures (to the extract structure) without having to do any modifications.
    image
    Fig.3: Logistic Cockpit Customizing screen


    In the maintenance screen (see Fig.4), on the left side you see what has already been selected in the standard extract structure and, on the right side, you see all the available fields of the communication structures from which you can select fields for the update.

    image
    Fig.4: Maintenance screen


    And what do my eyes see at a glance? My AUDAT field!
    Ok, now it's enough to highlight the row and click on the left arrow: every selected field is automatically included in a generated append structure for the corresponding include structure of the extract structure (for example, append ZZMC11VA1SCL for include MC11VA1SCL, for additional fields in the order schedule line extractor coming from LIS communication structure MCVBAK).
    When you successfully complete this step, the traffic light icon turns red. This indicates that you changed the structure.
    At this point, you have to generate the datasource (see Fig.5): here you can (among other things) choose which fields can be offered for selection (for various reasons, it is not possible to offer all the fields contained in the LIS communication structure for selection in the extract structure; some fields are hidden on purpose because a specific extract structure is a combination of different processes; for details see OSS Note 351214 'BW extraction SD: Restricted field selection') and whether a key figure is inverted or not (refer to OSS Note 382779 'Cancellation field in the datasource maintenance' for details).
    After maintenance in this step, the traffic light turns yellow.

    image
    Fig.5: Datasource generation


    Once you activate the update, data is written to the extract structure and the traffic light turns green. Our enhancement process is complete and now you can schedule (if required by your delta method) the delta job control process.
    If, during a subsequent import of a new plug-in, this same field is already included in the standard extract structure, it will be removed from the customer enhancement (in order to avoid a double occurrence of the same field) thanks to an automatic XPRA program executed within the upgrade procedure.
    When you extend the extraction structures in the LC, you may notice that not all existing fields of an assigned LIS communication structure are available for selection.
    This is not a lapse of memory: this behavior is intentional.
    As not all fields of the communication structures can be used in a practical way, some are hidden because, for example, the field is not filled for the relevant events, or is only used internally, or for other design reasons (e.g. you should select key figures only from the most detailed communication structure and characteristics from all communication structures!).

    Enhance it, but mind the queue !
    If you change an extract structure in the Logistic Cockpit through transaction LBWE (or one of the LIS communication structures which are the basis of the extract structure or, in individual cases, also an application table - e.g. MSEG, it's happened to me!), whether by importing a transport request or a support package or by carrying out an upgrade, you have to operate with a lot of caution.
    Many problems in this area result from the fact that, although everything is well organized in the development system, the transport into the production system is not controlled.
    In fact, careless behaviour (when your datasource has already been activated for update, even if only for a very short period) can lead to various errors: delta requests terminate, the update from the extraction queue does not finish or the V3 update is no longer processed, the initialization on data that was generated before the change no longer works, the protocol terminates... in short, a real tragedy!
    Without venturing on a too technical ground, this situation can be briefly described in this way: when you change a structure, the data which is stored in the old form can no longer be interpreted correctly by the new version of the same extract structure.
    For the same reason, you can no longer use the statistical data already contained in setup tables and you have to delete it via transaction LBWG.
    Therefore, you should carry out the following steps before you change the extract structure (also for an R/3 release upgrade or plug-in/support package import):

  • close your system and make sure that no updates are performed (either by users or by batch/background processes);

  • start the update collective run directly from LC (this concerns either the V3 update or the delta queued update);

  • check that, at this moment, with the "direct delta" method the delta queue (RSA7) is empty, with the "queued delta" method the extraction queue (LBWQ) and the delta queue (RSA7) are empty, and with the "unserialized V3 update" the update tables (SM13) and the delta queue (RSA7) are empty;

  • load all data of the respective datasources into your BW system.

    To completely empty the delta queue, request a delta twice, one after the other: the second upload will transfer 0 data records, but only with the second upload is the data of the delta queue that is kept available as a delta repeat deleted.
    Anyway, with plug-in PI 2000.2 (or PI-A 2000.2) specific checks were implemented in LBWE so that structures can be changed only if (in this order) there are no entries in the setup tables of the affected application and there are no entries in the V3 update and in the queue for the affected application.
    Now that all data containers of the relevant data flow are empty, you can make (import) the change.
    IMPORTANT: the fields that are available by default in LBWE are automatically filled during the extraction and are delta relevant (see later in this weblog for more details about ‘delta relevant’ changes).


  • Using the LIS enhancement on available communication structures

  • If your field is not available in LC (that is, it is not in the available communication structures), you have to follow some different ways.
    One method of adding user-defined fields is the following: add the required fields to the communication structures (MCVBAK, MCVBAP and so on) using the append method (via SE11) and then use the LIS customer exits to fill the fields.
    For information on enhancing the communication structures, you can see the documentation for the enhancements MCS10001, MCS50001 and MCS60001 provided in transaction SMOD.
    After you have enhanced the communication structures, you can then enhance the extract structure with the relevant field in the customizing cockpit (transaction LBWE), provided that the communication structure is available in the selection. Then you can proceed with the steps described in the previous bullet.
    Even this procedure allows you to manage delta records, but you must make sure that you can determine the status in the user exit before and after the document change (this varies with every specific situation and with the table from which the field is filled, e.g. internal document tables such as XVBAP and YVBAP).
    Why is this so important?
    A document change in the delta extraction process (relating to our LC datasources) consists of the transfer of two data records to BW: one of these records represents the status of the document before the change (before image) and the other one represents the status after the change (after image).
    During the extraction process, these two data records are compared by the system and checked for changes: only if there is some difference between the before and after images are these records involved in the extraction process and processed further.
    Please refer to OSS Notes 216448 ‘BW/SIS: Incorrect update / SD user exit‘ and 757361 ‘Additional data records in BW when document changed’ for more information on correctly populating the before and after image (even if related to only SD Applications).


  • Using custom append on the extract structure

  • If you don't find your field already available within LBWE and if, for any reason, you don't want (or are not able) to enhance the LIS communication structures, you have another chance: enhance your extract structure by creating an append with your ZZ* fields and then filling these fields with a specific user exit.
    To do this, go to RSA6, choose your datasource, double-click on it and then on the extract structure: you will see an SE11 screen; create an append, insert your ZZ* fields and save.
    Then you have to fill those fields with some custom ABAP code, which can be anything from simple calculations or table lookups to complex business logic requiring access to multiple database tables. You can do that via CMOD, by creating a project and using the enhancement RSAP0001.
    The function modules provided in this enhancement serve for the derivation or modification of the data that is extracted and transferred by the extraction engine of the Business Information Warehouse: EXIT_SAPLRSAP_001 for transactional data, EXIT_SAPLRSAP_002 for master data attributes and EXIT_SAPLRSAP_004 for hierarchies (a minimal sketch of such an exit follows at the end of this bullet).
    However, consider that the customer enhancement (CMOD) functionality is being converted to BAdIs (RSU5_SAPI_BADI in this case), even if not all BW CMOD enhancements have been converted to BAdIs yet; as long as the exit is not replaced by SAP, there is no need to convert a CMOD exit to a BAdI (via transaction SPAU).
    In general, SAP doesn't recommend using this latter 'direct' method to enhance extract structures in LC.
    In fact, by following this procedure, changes to the added fields are not extracted to BW if no field contained in the standard extract structure was changed as well (only those fields are delta relevant): our ZZ* field is empty at the time of the check, in both the before and the after image, and, since the system sees no change, no delta record is extracted.
    And the same problem occurs in the case of a document deletion, because the document has already been deleted when the custom user exit is executed.
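    As a minimal sketch only (the append field ZZAUDAT and the lookup logic are purely illustrative assumptions, not part of the standard content), the customer exit include ZXRSAU01 of enhancement RSAP0001 could fill such a ZZ* field roughly like this:

    *----------------------------------------------------------------------*
    * Include ZXRSAU01 - EXIT_SAPLRSAP_001 (transactional data)
    * Sketch: fill the hypothetical append field ZZAUDAT of extract
    * structure MC11VA0SCL with the document date read from VBAK.
    *----------------------------------------------------------------------*
    DATA: ls_scl   TYPE mc11va0scl,
          lv_audat TYPE vbak-audat.

    CASE i_datasource.
      WHEN '2LIS_11_VASCL'.
        LOOP AT c_t_data INTO ls_scl.
          " Read the document date of the related sales order header
          SELECT SINGLE audat FROM vbak INTO lv_audat
                 WHERE vbeln = ls_scl-vbeln.
          IF sy-subrc = 0.
            ls_scl-zzaudat = lv_audat.
            MODIFY c_t_data FROM ls_scl.
          ENDIF.
        ENDLOOP.
    ENDCASE.

    (In a real exit you would normally buffer the VBAK lookups in an internal table instead of selecting once per extracted record.)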


  • Getting to the root: enhance your application tables

  • The last resort for adding user-defined fields to extract structures is to directly enhance the document tables - in our example, VBAK, VBAP, VBEP (...).
    The fields can then be filled in the general (sales) user exits of the applications and are also available in LBWE for enhancing the extract structures.
    With this procedure, bear in mind that the fields added to the document tables must be saved at database level together with the information they contain (with the related space effort required); on the other hand, there is no need for data retrieval in the LIS user exit.

    In the end: some useful technical background info

    There are four control tables involved in the customizing process in LC:
  • TMCEXCFS: Field status of communication structures.
    Here the content is supplied by SAP: each field has a status per extract structure and communication structure: initial (inactive), A (active) or F (forbidden).

  • TMCEXCFZ: Field status of customer communication structures.
    In this table you can find all fields selected by the customer, per extract structure and communication structure.

  • TMCEXEVE: Events and extract structures.
    Supplied by SAP: which event supplies which extract structure with which communication structure.

  • TMCEXACT: datasource activation and update status.
    This one is also supplied by SAP, but it can be changed by the customer.

    Delta Mechanism In LO Extraction Part 3

    In the beginning it was Serialized V3 Update.

    After examining the conception and technical background of delta extraction in Logistic Cockpit by using this method (Episode one: V3 Update, the ‘serializer’), we also examined all peculiar restrictions and problems related to its usage ( Episode two: V3 Update, when some problems can occur...).
    Since performance and data consistency are very critical issues for a data warehouse and, from this point of view, the 'serializer' started showing its not-so-reliable face, with the advent of PI 2002.1 (or PI-A 2002.1) three new update methods came up and, as of PI 2003.1 (or PI-A 2003.1), these methods have completely replaced the Serialized V3 update method, which is no longer offered.

    image
    Fig.1: LBWE, Update Mode Selection Screen as of PI 2002.1

    image
    Fig.2: LBWE, Update Mode Selection Screen as of PI 2003.1


    Now let's meet our new guests and try to discover when it's better to use one rather than another!


    1. New "direct delta" update method: when in R/3, life is not so berserk...

    With this update mode, extraction data is transferred directly to the BW delta queues with each document posting.
    As a consequence, each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues.
    Just remember that 'LUW' stands for Logical Unit of Work and can be considered an inseparable sequence of database operations that ends with a database commit (or a rollback if an error occurs).

    image
    Fig.3: Direct Delta update mechanism


    Starting from the definition of this method, we can see at once the advantages and disadvantages of direct delta usage.
    BENEFITS AND...
    As we can see from the picture above, there's no need to schedule a job at regular intervals (through LBWE "Job control") in order to transfer the data to the BW delta queues; thus, additional monitoring of update data or of the extraction queue is not required.
    Logically, the restrictions and problems described in relation to the "Serialized V3 update" and its collective run do not apply to this method: by writing to the delta queue within the V1 update process, the serialization of documents is ensured by using the enqueue concept of the applications and, above all, extraction is independent of the V2 update result.
    ...LIMITATIONS !
    The number of LUWs per datasource in the BW delta queues increases significantly because different document changes are not summarized into one LUW in the BW delta queues (as was previously for V3 update).
    Therefore, this update method is recommended only for customers with a low occurrence of documents (a maximum of 10,000 document changes - creation, change or deletion - between two delta extractions) for the relevant application.
    Otherwise, a larger number of LUWs can cause dumps during the extraction process and, in any case, the V1 update would be too heavily burdened by this process.
    Besides, note that no documents can be posted during the delta initialization procedure, from the start of the recompilation run in R/3 (the setup table filling job) until all records have been successfully updated in BW: every document posted in the meantime is irrecoverably lost.
    (Remember that stopping the posting of documents always applies to the entire client).

    2. New "queued delta" update method: how to easily forget our old ‘serializer’...

    With the queued delta update mode, the extraction data (for the relevant application) is written to an extraction queue (instead of to the update data, as in V3) and can be transferred to the BW delta queues by an update collective run, as previously executed for the V3 update.
    After activating this method, up to 10,000 document deltas/changes are cumulated into one LUW per datasource in the BW delta queues.

    image
    Fig.4: Queued Delta update mechanism


    If you use this method, it will be necessary to schedule a job to regularly transfer the data to the BW delta queues (by means of the so-called "update collective run"), using the same delivered reports as before (RMBWV3<Appl.No.>); report RSM13005, instead, will no longer be provided, since it only processes V3 update entries.
    As always, the simplest way to perform the scheduling is via the "Job control" function in LBWE.
    SAP recommends scheduling this job hourly during normal operation after a successful delta initialization, but there is no fixed rule: it depends on the peculiarities of every specific situation (business volume, reporting needs and so on).
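    Purely as a minimal sketch (the job name is made up, the hourly period is only an example, and in practice the job is normally created through LBWE "Job control" or SM36 rather than programmatically), the update collective run of an application - here report RMBWV311, following the RMBWV3<Appl.No.> pattern above for application 11 - could be scheduled in the background roughly like this:

    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_LO_COLLECTIVE_RUN_11',
          lv_jobcount TYPE tbtcjob-jobcount.

    " Open a background job...
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.

    " ...assign the collective run report of application 11 as its only step...
    SUBMIT rmbwv311 VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

    " ...and release it, starting immediately and repeating every hour.
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'
        prdhours  = '01'.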
    BENEFITS, BUT...
    When you need to perform a delta initialization in the OLTP, thanks to the logic of this method the document postings (relevant for the involved application) can be opened again as soon as the execution of the recompilation run (or runs, if several are executed in parallel) ends, that is, as soon as the setup tables are filled and a delta-init request is posted in BW: the system is able to collect new document data during the delta-init upload too (with a deeply felt recommendation: remember to avoid the update collective run before all delta-init requests have been successfully updated in your BW!).
    By writing to the extraction queue within the V1 update process (which is more burdened than with V3), the serialization is ensured by using the enqueue concept, but the collective run clearly performs better than the serialized V3; in particular, the slow-down due to documents posted in multiple languages does not apply to this method.
    In contrast to direct delta, this process is especially recommended for customers with a high occurrence of documents (more than 10,000 document changes - creation, change or deletion - performed each day for the application in question).
    In contrast to the V3 collective run (see OSS Note 409239 ‘Automatically trigger BW loads upon end of V3 updates’, in which this scenario is described), event handling is possible here, because a definite end of the collective run can be identified: when the collective run for an application ends, an event (&MCEX_nn, where nn is the number of the application) is automatically triggered and can thus be used to start a subsequent job.
    Besides, don't forget that the queued delta extraction process is independent of the success of the V2 update.
    ...REMEMBER TO EMPTY THE QUEUE !
    The ‘queued delta’ is a good friend, but some care is required to avoid any trouble.
    First of all, if you want to take a look at the data of all the extract structure queues in the Logistic Cockpit, use transaction LBWQ or the "Log queue overview" function in LBWE (but there you can only see the queues that currently contain extraction data).
    In the posting-free phase before a new init run in OLTP, you should always execute (as with the old V3) the update collective run once, to make sure that the extraction queue is emptied of any old delta records (especially if you are already using the extractor), which could otherwise cause serious inconsistencies in your data.
    Then, if you want to make a change (through LBWE or RSA6) to the extract structures of an application (for which you selected this update method), you have to be absolutely sure that no data is left in the extraction queue before executing these changes in the affected systems (and especially before importing these changes into the production environment!).
    To perform a check when the V3 update is already in use, you can run the check report RMCSBWCC in the target system.
    The extraction queues should never contain any data immediately before you:



  • perform an R/3 or a plug-in upgrade

  • import R/3 or plug-in support packages


    3. New "unserialized V3 update" update method: when a correct delta sequence is not a requirement...

    With this update mode, which we can consider the serializer's brother, the extraction data continues to be written to the update tables using a V3 update module and is then read and processed by a collective update run (scheduled through LBWE).
    But, as the name of this method suggests, the unserialized V3 delta disowns the main characteristic of its brother: data is read in the update collective run without taking the sequence into account and is then transferred to the BW delta queues.

    image
    Fig.5: (Unserialized) V3 Delta update mechanism

    It's pleonastic to say that all the (performance) problems related to the serialized V3 update do not apply to the unserialized one!
    However, the already known V2 dependency continues to exist.
    When can this method be used?
    Only if it's irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence (serialization) in which it was generated in R/3 (thanks to a specific design of the data targets in BW and/or because the functional data flow doesn't require a correct temporal sequence).

    image
    Fig.6: Comparison of the function call hierarchies involved in the different update methods

    Little recap of essential points to consider in migration process

    If you want to select a new update method, you have to implement specific OSS notes; otherwise, even if you have selected another update method, data will still be written to the V3 update and can no longer be processed!
    Here is a collection of OSS notes related to the update method switch, divided by application:
  • PURCHASING (02) -> OSS note 500736

  • INVENTORY MANAGEMENT (03) -> OSS note 486784

  • PRODUCTION PLANNING AND CONTROL (04) -> OSS note 491382

  • AGENCY BUSINESS (45) -> OSS note 507357

  • If the new update method of an application is the queued delta, it’s better to have the latest qRFC version installed.
    However, before changing, you must make sure that there are no pending V3 updates (as suggested before, run RMCSBWCC during a document-posting-free phase and switch the update method only if this program doesn't return any open V3 updates).

    Early delta initialization in the logistics extraction and final considerations

    An important downtime during the initialization process (reconstruction run and delta-init request) is not always possible on our systems (just think of having to ask for a billing-stop period... a real nightmare in some companies!).
    For this reason, as of PI 2002.1 and BW release 3.0B, you can use the early delta initialization to perform the initialization for selected datasources (just check in the infopackage update mode tab whether this function is available): in this way you can readmit document postings in the OLTP system as early as possible during the initialization procedure.
    In fact, as soon as an early delta initialization infopackage has been started in BW, data may be written immediately to the delta queue.
    But if you are working with the queued delta method, using the early delta initialization function doesn't make much sense: as described before, it is the method definition itself that permits you to reduce the downtime phase.
    But leaving that aside, don't forget that, regardless of the update method selected, it is ALWAYS necessary to stop any document postings (for the relevant application) during the setup table recompilation run!

    Final considerations
    In the end, just some little personal thoughts...
    After concluding this overview of the delta methods, it's clear that queued delta will very probably be the most used and popular delta method in the Logistic Cockpit: if we consider the direct and unserialized ones as exploitable only in specific and not so frequent situations (a low delta document occurrence or no serialization needed), queued delta comes out as the legitimate heir to the throne previously occupied by the old 'serializer'.
    Ok, all the elements are now available: it's up to you to make the right choice of delta method, taking into consideration your specific scenario.

    Delta Mechanism In LO Extraction

    The Serialized V3 Update: the end of a kingdom

    Up to (and including) PI 2001.2 (or PI-A 2001.2), only the Serialized V3 update method was used for all applications of the extract structures in the Logistic Cockpit.
    The logical reason for this 'absolutism' was that, at first sight, this specific BW update option guaranteed evidently useful features from a data warehouse management perspective:


  • the requirement of a specific job to be scheduled, resulting in a temporal detachment from the daily business operations;

  • the peculiarity of the serialization (that is, the correct delta sequence from an R/3 document history point of view) in the update mechanism of the BW queues, which allows consistent data storage in the data warehouse.

    In spite of these well-known benefits, with the advent of PI 2002.1 (or PI-A 2002.1) the short kingdom of the 'serializer' ends: as a result of the new plug-in, three new update methods for each application come up and, as of PI 2003.1 (or PI-A 2003.1), the Serialized V3 update method is no longer offered.
    What happened!?
    In reality, all that glitters is not gold.
    In fact, during daily operations, when facing all the practical issues, some restrictions and technical problems arose.


  • Collective run performance with different languages

  • During a collective run processing, requests that were created together in one logon language are always processed together.
    image

    Starting from this assumption, let's now try to imagine what happens when several users, logged on to the source system in different languages (just think of a multinational company), create or modify documents for a relevant Logistic Cockpit application. In this case the V3 collective run can only ever process the update entries for one language at a time during a single process call.
    As a consequence, it's easy to understand that a new process call is automatically started for the update entries belonging to documents entered in a different language from the previous one.
    So, if we want the delta mechanism to maintain the chronological (serialized) sorting despite the different languages, it's possible that only a few records (even only one record!) are processed per internal collective run step.
    This was the reason why the work processes carrying out the delta processing could often be found in the process overview with the "Sequential read" action on the VBHDR table for a long time.
    In fact, for every restart, the VBHDR update table is read sequentially on the database (and you can bet that the update tables can become huge): the risk is that processing the update data may take so much time that, in the meantime, the number of new update records generated in the system exceeds the number of records being processed!
    image
    Fundamentally, in the serialized V3 update, only update entries that were generated in direct chronological order (to comply with the serialization requirement) and with the same logon language (due to technical restrictions) could be processed in one task.
    image

    If the language in the sequence of the update entries changed, the V3 collective update process was terminated and then restarted with the new language, with all the performance impacts we can imagine.


  • Several changes in one second

  • For technical reasons, collective run updates that are generated in the same second cannot be serialized.
    That is, the serialized V3 update can only guarantee the correct sequence of extraction data in a document if the document did not change twice in one second.


  • Different instances and times synchronization

  • I think it's easy to see how probable it is that, in a landscape with several application servers for the same environment, different times are displayed.
    The time used for the sort order in our BW extractions is taken from the R/3 kernel, which uses the operating system clock as a timestamp. But, as experience teaches, the clocks on different machines generally differ and are not exactly synchronized.
    The conclusion is that the serialized V3 update can only ensure the correct sequence in the extraction of a document if the times have been synchronized exactly on all system instances, so that the time of the update record (determined from the local time of the application server) can be used reliably when sorting the update data.


  • The V2 update dependence

  • Not to be pitiless, but the serialized V3 update also has the fault of depending on the successful conclusion of the V2 processing.
    Our method can actually only ensure that the extraction data of a document is in the correct (serialized) sequence if no error occurs beforehand in the V2 update, since the V3 update only processes update data for which the V2 update has been successfully processed.
    Independently of the serialization, it's clear that update errors which occur in the V2 update of a transaction and cannot be reposted mean that the still-open V3 updates for that transaction can never be processed.
    This could thus lead to serious inconsistencies in the data in the BW system.

    Delta Mechanism In LO Extraction

    For extracting logistic transactional data from R/3, a new generation of datasources and extractors, no longer based on LIS (Logistics Information System) information structures, was developed starting from BW release 2.0B and PI 2000.1 (or PI-A 2000.1, valid from R/3 release 4.0B).
    The tools for the logistics extract structures can be found in the IMG for BW (transaction SBIW) in your OLTP system: choose Settings for Application-Specific DataSources -> Logistics -> Managing Extract Structures.
    The Logistics Extract Structures Customizing Cockpit (you can directly see it by transaction LBWE) represents the central tool for the administration of extract structures.

    image

    Ok... but, in other words, what is the Logistic Cockpit (LC)?
    We can say that it's a new technique to extract logistics information; it consists of a series of standard extract structures (that is, from a more BW-oriented perspective, standard datasources), delivered in the business content thanks to a given plug-in.
    But what is the logic behind these datasources that allows the logistics flows towards BW to be managed in delta mode after an initial load of the historical data (done with update mode 'Delta Initialization', by retrieving data from the setup tables, which are assigned to each extract structure and are filled using special setup transactions in OLTP)?
    Following the many questions posted until now in the SDN BW Forums, in this weblog we will focus only on the delta mechanism of the LC and not on the other tasks we can manage inside it, like the necessary steps for activating and carrying out a successful data extraction or the maintenance of extract structures and datasources (but don't worry, a summary weblog dedicated to these important procedures will also arrive in the next days!).

    The V3 Update

    Unlike the LIS update (well, I know that you are asking 'but how does this old LIS update work???'... my dear, another weblog will arrive very soon for this topic too... sorry for the wait, but I have a lot of things to do!), data is transferred from the LIS communication structures, using extract structures (e.g. MC02M_0HDR for the purchase document headers), into a central delta management area.
    This transfer takes place thanks to the V3 update with a specific (scheduled) job and is therefore temporally detached from the daily application operations; the main consideration is that the delta management acts as a buffer (not depending on the application business) containing data that can be requested from BW via an infopackage with update mode 'delta'.
    The following picture shows (with a high-level view) the interaction between the LIS communication structures and the V3 extraction technology.

    image
    We said that for updating the extraction of transactional data from the different logistics applications (MM, PP, SD and so on), the technology for collective updates ('V3 updates') is used (until PI 2003.1).
    This means that the data is collected in the R/3 update tables before the transfer to the interface: the data is retrieved there by means of a periodic update process that has to be started in order to transfer the delta records to the BW system delta queue.
    During this V3 collective run (which you can start and schedule from LBWE for each application component), the data is transferred to the BW delta queue (which you can see from transaction RSA7 - see the picture below - or LBWQ), from which it is retrieved by means of (delta) requests from the BW system.
    image

    V1, V2, V3...
    When scheduling what

    Normally in R/3 there are three types of update available:

  • Synchronous update (V1 update): the statistics update is carried out at the same time (synchronously) as the document update (in the application tables).

  • Asynchronous update (V2 update): the document update and the statistics update take place in different tasks.

    So, V1 and V2 updates don't require any scheduling activity.

  • Collective update (V3 update): as for the previous point (V2), the document update is managed separately from the statistics update but, unlike the V2 update, the V3 collective update must be scheduled as a job (via LBWE).
    Remember that the V3 update only processes the update data that has been successfully processed with the V2 update.

    image

    This is a key task in order to properly manage the correct working of the BW logistic flows.
    In fact, the scheduling timing is very important and it should be based on:
    1) the amount of activity on a particular OLTP system, and
    2) the particular requirements related to how up to date the data displayed in BW reports needs to be.

    For example (relating to the first point), a development system with a relatively low/medium volume of new/modified/deleted documents may only need to run the V3 update on a weekly or daily basis.
    Instead, a full production environment, with many thousands of transactions every day, may have to be updated hourly; otherwise postings will queue up and can affect performance heavily.
    About the second point: if, for example, the reporting refers to a monthly periodic view, successfully scheduling the V3 update on a monthly basis will ensure that all the necessary information structures are properly updated when new or existing documents are processed in the meanwhile.
    Finally, the right choice will be the result of all these considerations; by doing so, the information structures in BW will be current and overall performance will be improved.
    It's possible to verify that all V3 updates have been successfully completed via transaction SM13.
    Transaction SM13 will take you to the 'Update Records: Main Menu' screen:


    image
    On this screen, enter an asterisk as the user (for all users), select the radio button 'V2 executed', select a date range and hit Enter.
    Any outstanding V3 updates will be listed.

    At this point it's clear that, considering the V3 update mechanism, the main requirement is that the delta information has to be transferred to the BW system in the same sequence in which it occurred in the OLTP system.
    Just a consideration... if we had to load our delta records only into a cube, there would be no problem: everything goes in append mode and, in the end, we would find the final situation correctly displayed thanks to the OLAP processor!
    But since updating into ODS objects is permitted in the logistics extraction for almost all DataSources, we have to consider the effects that can derive from the 'overwrite' update mode (specific to the ODS object).
    For example, the consistent storage of a status field (e.g. delivery status) in ODS objects can only be ensured with a correct (serialized) delta sequence: if the record with 'open delivery' status (created first in R/3) arrives later than the record with 'closed delivery' status (created second in R/3), we would have a false representation of reality.
    image
    Considering that, the sequence of the existing data records has to be recognized and taken into account when reading and processing the update data (step A of the picture), as well as when transferring the data to the BW system (step B).
    Since the normal existing update methods do not support serialized processing of the update data, the Serialized V3 Update function was created (also thanks to several subsequent corrections in the SAP Basis) in order to be able to serialize step A.

    LO COCKPIT STEP BY STEP

    Here is the LO Cockpit procedure step by step:
    LO EXTRACTION
    - Go to Transaction LBWE (LO Customizing Cockpit)
    1). Select Logistics Application
           SD Sales BW
                Extract Structures
    2). Select the desired Extract Structure and deactivate it first.
    3). Give the Transport Request number and continue
    4). Click on 'Maintenance' to maintain the Extract Structure
           Select the fields of your choice and continue
                 Maintain the DataSource if needed
    5). Activate the extract structure
    6). Give the Transport Request number and continue
    - Next step is to Delete the setup tables
    7). Go to T-Code SBIW
    8). Select Business Information Warehouse
    i. Setting for Application-Specific Datasources
    ii. Logistics
    iii. Managing Extract Structures
    iv. Initialization
    v. Delete the content of Setup tables (T-Code LBWG)
    vi. Select the application (11 – SD Sales) and Execute
    - Now, Fill the Setup tables
    9). Select Business Information Warehouse
    i. Settings for Application-Specific DataSources
    ii. Logistics
    iii. Managing Extract Structures
    iv. Initialization
    v. Filling the Setup tables
    vi. Application-Specific Setup of statistical data
    vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
            Specify a run name, time and date (use a future date)
                 Execute
    - Check the data in the setup tables with the extractor checker (T-Code RSA3)
    - Replicate the DataSource
    Use of setup tables:
    You fill the setup tables in the R/3 system (the setup is done via SBIW) and extract that data to BW; after that you can run delta extractions by initializing the extractor.
    Full loads are always taken from the setup tables.
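
    As a purely illustrative summary of the flow just described (a Python sketch, not SAP code; the function and parameter names are invented), a logistics extraction request reads from different places depending on its update mode:

    # Illustrative sketch of where a logistics extraction request reads its data from.
    def source_of_request(update_mode, delta_initialized):
        if update_mode in ("full", "init"):      # full loads and delta initialization
            return "setup tables"
        if update_mode == "delta" and delta_initialized:
            return "delta queue (filled by the V3 / update collective run)"
        raise ValueError("delta requested before the extractor was initialized")

    print(source_of_request("full", False))   # setup tables
    print(source_of_request("delta", True))   # delta queue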

    Responsibilities of an Implementation Project

    Responsibilities of an implementation project...
    For example, let's say it is a fresh implementation of BI, or for that matter any SAP implementation.
    First and foremost comes requirements gathering from the client. Based on those requirements you create a business blueprint of the project, which describes the entire process from the start to the end of the implementation.
    After the blueprint phase is signed off, the realization phase begins, where the actual development happens. In our example, after installing the necessary software and patches for BI, we discuss with the end users who are going to use the system to gather inputs such as how they want a report to look and what the Key Performance Indicators (KPIs) for the reports are; basically a question-and-answer session with the business users. After collecting that information, the development takes place on the development servers.
    Once development is complete, the same objects are tested on the quality servers for bugs and errors. When all tests are done, the objects are transported to the production environment and tested again to confirm that everything works as expected.
    At go-live, actual postings are made by the users and reports are generated from those inputs, which become the analytical reports on which management bases its decisions.
    The responsibilities vary depending on the requirements. Initially the business analyst interacts with the end users and managers; based on the requirements, the consultants do the development, the testers do the testing, and finally go-live happens.
    BW Data Architect Description:
    The BW Data Architect is responsible for the overall data design of the BW project. This includes the design of the 
    - BW InfoCubes (Basic Cubes, Multi-cubes, Remote cubes, and Aggregates) 
    - BW ODS Objects
    - BW Datamarts 
    - Logical Models 
    - BW Process Models
    - BW Enterprise Models 
    The BW Data Architect plays a critical role in the BW project and is the link between the end user's business requirements and the data architecture solution that will satisfy these requirements. All other activities in the BW project are contingent upon the data design being sound and flexible enough to satisfy evolving business requirements. 
    Time Commitment – the time that must be committed to this role to ensure the project requirements are met, depending on project complexity: 
    Low - If the BW project utilizes standard BW content and InfoCubes, this role can be satisfied by the BW Application Consultant. 
    Medium - If the BW project requires enhancements to the standard BW content and InfoCubes and/or requires the integration of non-SAP data, this role may require a committed resource. 
    High - If the BW project requires significant modification and enhancement to standard BW content and InfoCubes, it is highly recommended that an experienced resource be committed full-time to the project. 
    Key Attributes - The BW Data Architect must have: 
    - An understanding of the BW data architecture 
    - An understanding of multidimensional modeling 
    - An understanding of the differences between operational systems data modeling and data warehouse data modeling
    - An understanding of the end user's data 
    - An understanding of the integration points of the data (e.g., customer number, invoice number) 
    - Excellent troubleshooting and analytical skills 
    - Excellent communication skills 
    - Technical competency in data modeling
    - Multi-language skills, if an international implementation 
    - Working knowledge of the BW and R/3 application(s)
    - Experience with data modeling application software (e.g., ERWIN, Oracle Designer, S-Designer, etc.) 
    Key Tasks - The BW Data Architect is responsible for capturing the business requirements for the BW project. This effort includes:
    - Planning the business requirements gathering sessions and process 
    - Coordinating all business requirements gathering efforts with the BW Project Manager 
    - Facilitating the business requirements gathering sessions 
    - Capturing the information and producing the deliverables from the business requirements gathering sessions
    - Understanding and documenting business definitions of data
    - Developing the data model 
    - Ensuring integration of data from both SAP and non-SAP sources
    - Fielding questions concerning the data content, definition and structure 
    This role should also address other critical data design issues such as: 
    - Granularity of data and the potential for multiple levels of granularity
    - Use of degenerate dimensions
    - InfoCube partitioning
    - Need for aggregation at multiple levels
    - Need for storing derived BW data 
    - Ensuring overall integrity of all BW Models 
    - Providing Data Administration development standards for business requirements analysis and BW enterprise modeling
    - Providing strategic planning for data management 
    - Impact analysis of data change requirements 
    As stated above, the BW Data Architect is responsible for the overall data design of the BW project, covering the BW InfoCubes (basic cubes, MultiCubes, remote cubes and aggregates), ODS objects, data marts, logical models, BW process models and BW enterprise models.

    Scope of the Implementation Project

    What is the scope of work when SAP is implemented?
    You should have a project manager from your own company, who will (or at least should) work closely with the implementing company's project manager.
    The implementing company supplies the technical/functional consultants. Your company should provide so-called "Super Users" / "Key Users": people within the company who know the business and its processes inside out.
    The key users will work closely with the technical consultants in designing the company's business processes to fit with SAP.
    The technical consultants should pass on their knowledge to the key users. The key users will train the end users, so they must also be trained in the SAP business processes; this is done either by the implementing company, or your company can send them to SAP training.
    The driving person here is your company's project manager; in any case, all company staff involved in the implementation project should be dedicated to it 100 percent.
    You should set up five phases:
    1.  Preparation
    2.  Blueprinting
    3.  Realization
    4.  Preparation for Go-Live
    5.  Go-Live/Support
    From the project management side, there are many things that must be set up prior to blueprinting, such as the project charter, project plan, risk management plan, change management plan, etc.
    As far as a statement of work (SOW) is concerned, it is part of your charter, and there are many templates for such documents; you need more than just the template, though, because you must know how to analyze your overall environment (company goals, landscape, business objectives, etc.) in coming up with the scope. More than one person is involved in the SOW definition.   *-- Ricklay

    Types of Tickets in Production

    What are the types of tickets and their importance?
    This depends on the SLA. The priority levels can be, for example:
      1. Critical.
      2. Urgent.
      3. High.
      4. Medium
      5. Low.
    The response times and resolution times are again defined in the SLA, based on the client's requirements and the charges.
    The following is probably from the viewpoint of the criticality of the problem faced by the client, as defined by SAP:
      1)      First Level Ticketing:
    Not severe problems; routine errors. Mostly handled by the company's service desk arrangement (if it has one).
    E.g.: a) A credit limit block when working on certain documents.
          b) A pricing condition record is not found even though the conditions are maintained.
          c) Unable to print a delivery document or packing list.
    PS: In the 4th phase of the ASAP implementation methodology (i.e. final preparation for go-live), SAP clearly specifies that a service desk needs to be arranged for any implementation, for better handling of production errors.
    The service desk lies within the client.
    2)      Second Level Ticketing:
    More serious problems that could not be solved by the service desk. These should be referred to the service company (or whichever company is prescribed in the SLA).
    E.g.: a) Credit exposure (especially open values) does not update correctly to the KNKK table.
          b) Intercompany billing picks up a wrong billing value.
          c) A new order type is needed to handle a reservation process.
          d) A new product has been added to the selling range and needs to be included in SAP (material masters, division attachments, stock handling, etc.).
    3)      Third Level Ticketing:
    Problems that could not be solved by either of the above are referred to SAP's own Online Service Support (OSS). SAP tries to solve the problem, sometimes by providing the exact OSS note that fits the error, and occasionally SAP logs into the customer's servers (via remote logon) to perform a post mortem on the problem. (The client for such a check-up, the connections, login IDs and passwords have to be provided to SAP whenever needed, or at the time of opening the OSS message.)
    There are lots of OSS notes on each issue, SAP Top Notes, and notes explaining the process of raising an OSS message.
    Sometimes SAP charges the client/service company, depending on the agreement made at the time of buying the license from SAP.
    E.g.: 1) Business transaction for the currency 'EUR' is not possible – check the OSS note (this comes up at the time of billing).
          2) Transaction MMPI – periods cannot be opened – see the OSS note.
          There are many other examples of such issues.
    4)      Fourth Level Ticketing:
    Problems rarely reach this level.
    Such problems may require re-engineering of the business process due to a change in business strategy, or an upgrade to a new version. More or less, this amounts to the end of the existing SAP implementation.

    The Tech details of Standard ODS / DSO in SAP DWH

    "An Operational Data Store object (ODS object) is used to store consolidated and cleansed data (transaction data or master data for example) on a document level (atomic level)" - Refered from SAP Docs.It describes a consolidated dataset from one or more Info Sources / transformations (7.0) as illustrated below in Fig.1.
    In this blog we will look at the Standard Data Store Object. We have other types namely Data Store Object with Direct Update (Transactional ODS in 3.x) and Write Optimized Data Store new with BI 7.x which contains only Active data table used to manage huge data loads for instance - Here is the link from Help portal Write optimised DSO





    Architecture of a Standard ODS / DSO (7.x)
    "ODS objects consist of three tables, as shown in the architecture graphic below" - referred from the SAP documentation:

    image
    Figure 1: ODS Architecture - Extracted from SAP Docs
    TIP: The new data status is written to the active data table in parallel with writing to the change log, taking advantage of parallel processes, which can be customized globally or at the object level in the system.


    Let's go through a scenario
    In this example we take the master data object material and plant (0MAT_PLANT compounded with 0PLANT) with a few attributes, for demonstration purposes. Now define an ODS / DSO as below, where material and plant form the key and the corresponding attributes are the data fields.

    image
    Figure 2: ODS / DSO definition


    Let's create a flat file DataSource, or an InfoSource as in 3.x (used in this example to keep the scenario simple), with all the InfoObjects we have defined in the ODS structure.
    image
    Figure 3: Info source definition



    Let's check the flat file records; remember that the key fields are plant and material, and we have a duplicate record, as shown in Fig. 4 below. The 'Unique Data Records' option is unchecked, which means duplicate records are expected.
    image
    Figure 4: Flat file Records


    Checking the monitor entries, we see that 3 records are transferred to the update rules but only two records are loaded into the new data table (we haven't activated the request yet). This is because we have a duplicate record for the same key in the ODS, which gets overwritten (check the first two records in Fig. 4).
    image
    Figure 5: Monitor Entries


    Now check the data in the new data / activation queue table: we have only two records, as the duplicate record is overwritten by the most recent record (record 2 in the PSA), which has the same key (material and plant).
    image
    Figure 6: Activation Queue



    image
    Figure 7: PSA data for comparison
    Tip: The key figures have the overwrite option by default; additionally, there is a summation option to suit certain scenarios, while characteristics are always overwritten. The technical name of the new data / activation queue table is /BIC/A<name of ODS>40 for customer objects and /BI0/A<name of ODS>40 for SAP-delivered objects.
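
    As a side note, the overwrite behaviour described above (one record per semantic key surviving in the activation queue, with 'overwrite' or 'summation' applied to the key figures) can be pictured with a small, purely illustrative Python sketch; the field names are invented and this is not how the activation program is actually implemented:

    # Illustrative sketch: collapsing a data package by the DSO semantic key.
    # Characteristics are always overwritten; key figures follow the chosen mode.
    def collapse_package(records, keyfig_mode="overwrite"):
        """Return one record per semantic key (material, plant)."""
        result = {}
        for rec in records:                          # records arrive in load order
            key = (rec["material"], rec["plant"])
            if key in result and keyfig_mode == "summation":
                rec = dict(rec, proc_time=result[key]["proc_time"] + rec["proc_time"])
            result[key] = rec                        # characteristics: last record wins
        return list(result.values())

    package = [
        {"material": "1",  "plant": "1", "profit_ctr": "SECOND", "proc_time": 1},
        {"material": "1",  "plant": "1", "profit_ctr": "SECOND", "proc_time": 1},  # duplicate key
        {"material": "50", "plant": "1", "profit_ctr": "FIRST",  "proc_time": 3},
    ]

    print(collapse_package(package))                           # 2 records, last value kept
    print(collapse_package(package, keyfig_mode="summation"))  # 2 records, proc_time summed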


    Once we activate the data, we have two records in the ODS active data table. As we can see below, the active data table always contains the semantic key (material, plant).

    image
    Figure 8: Active Data Table
    TIP: The name of the active data table is /BIC/A<odsname>00 for customer objects and /BI0/A<odsname>00 for SAP-delivered objects.


    The change log table has these 2 entries with the new image (N). Remember the record mode; we will look into it later. The technical key (request ID, data packet ID, record number) is part of the change log.
    image
    Figure 9: Change Log Table
    TIP: The technical name of the change log table is always /BIC/B<internally generated number>.


    Now we add two new records (material 75 / plant 1 and material 80 / plant 1) and change the existing record for the key material 1 / plant 1, as shown below.
    image
    Figure 10: Add more records


    When we look at the monitor, there are 3 records in the activation queue table, as the duplicate record is filtered out; in this example, the first record in Fig. 10.
    image
    Figure 11: Monitor


    Looking at the new data table (activation queue), we see the 3 records that were updated, as seen in the monitor.
    image
    Figure 12: Activation Queue


    How does the change log work?
    We check the change log table to see how the deltas are handled. The highlighted records are from the first request, uniquely identified by the technical key (request number, data packet number, partition value of the PSA and data record number).
    image
    Figure 13: Change log Table 1
    With the second load, i.e. the second request, the change log table stores the before and after images for the relevant records (the non-highlighted part of Fig. 13).

    In the above example, material (1) and plant (1) has a before image with record mode 'X' (row 3 in the above figure), and all the key figures carry a '-' sign because we have chosen the overwrite option; the characteristics are always overwritten.
    image
    Figure 14: Change log Table 2


    The after image ' ' reflects the change in the data record (check row 4 in the above figure): we have changed the characteristic profit center from SECOND to SE, and the key figure processing time has changed from 1 to 2. A new record (last row in the above figure) is added with the status 'N', as it is a new record.
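
    To picture what activation writes to the change log, here is a rough, purely illustrative Python sketch (invented field names; the real mechanism lives inside the DSO activation program): for a changed key, a before image ('X') with reversed key figures and an after image (' ') with the new values are created, while a brand new key gets a single new image ('N'):

    # Illustrative sketch of change-log image creation during DSO activation.
    # 'active' is the current active-data row for a key (or None), 'new' the incoming row.
    def change_log_entries(active, new):
        if active is None:
            return [dict(new, recordmode="N")]                        # new image
        before = dict(active, recordmode="X", proc_time=-active["proc_time"])
        after = dict(new, recordmode=" ")                             # after image
        return [before, after]

    active_row = {"material": "1", "plant": "1", "profit_ctr": "SECOND", "proc_time": 1}
    new_row    = {"material": "1", "plant": "1", "profit_ctr": "SE",     "proc_time": 2}

    for entry in change_log_entries(active_row, new_row):
        print(entry)
    # before image: record mode 'X', processing time -1
    # after image:  record mode ' ', processing time  2
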
    Summary
    This gives us an overview of the standard ODS object and how the change log works. The various record modes available:
    image
    Figure 15: Record Modes



    Check note 399739 for details of the record modes. The record mode(s) that a particular DataSource uses for its delta mechanism largely depend on the type of extractor. Check table RODELTM for the BW delta process methods together with the record modes they use, as well as our well-known table ROOSOURCE for the extractor-specific delta method.


    For instance, LO Cockpit extractors use the 'ABR' delta method, which supplies after images, before images, new images and reverse images. Extractors in HR and Activity-Based Costing use the delta method 'ADD', i.e. record mode 'A' (additive image), and the FI-GL/AR/AP extractors are based on the delta method 'AIE', i.e. record mode space ' ' (after image only). The list goes on.
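
    To tie the delta methods and record modes above together, here is a small, hedged Python sketch (the mapping below only reflects the examples given in this text, not the full content of table RODELTM); it also shows how an additive data target can simply sum the key-figure values carried by before and after images:

    # Delta method -> record modes, as described above (examples only, not exhaustive).
    DELTA_METHOD_RECORD_MODES = {
        "ABR": ["N", "X", " ", "R"],   # new, before, after, reverse images (LO Cockpit)
        "ADD": ["A"],                  # additive image (HR, Activity-Based Costing)
        "AIE": [" "],                  # after image only (FI-GL/AR/AP)
    }

    def post_additively(target_value, images):
        """Sum the key-figure deltas of before/after images into an additive target."""
        return target_value + sum(img["proc_time"] for img in images)

    # The before image carries the negated old value, the after image the new value,
    # so their sum is the net change for the changed record:
    images = [
        {"recordmode": "X", "proc_time": -1},
        {"recordmode": " ", "proc_time": 2},
    ]
    print(post_additively(0, images))   # 1 -> net change of the key figure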